Generative Models for Multi-Illumination Color Constancy
- URL: http://arxiv.org/abs/2109.00863v1
- Date: Thu, 2 Sep 2021 12:24:40 GMT
- Title: Generative Models for Multi-Illumination Color Constancy
- Authors: Partha Das, Yang Liu, Sezer Karaoglu and Theo Gevers
- Abstract summary: We propose a physics-driven, seed-based multi-illumination color constancy method.
GANs are exploited to model illumination estimation as an image-to-image domain translation problem.
Experiments on single- and multi-illumination datasets show that our method outperforms state-of-the-art methods.
- Score: 23.511249515559122
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses multi-illumination color constancy. Most existing
color constancy methods, however, are designed for single light sources.
Furthermore, datasets for learning multi-illumination color constancy are
largely missing. We propose a physics-driven, seed-based multi-illumination
color constancy method. GANs are exploited to model illumination estimation
as an image-to-image domain translation problem. Additionally, a novel
multi-illumination data augmentation method is proposed. Experiments on single-
and multi-illumination datasets show that our method outperforms state-of-the-art methods.
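For context on what any multi-illumination color constancy method must ultimately produce, here is a minimal numpy sketch (illustrative only, not the paper's GAN pipeline; the function, array names, and toy scene are assumptions) of dividing out a per-pixel illumination map:

```python
import numpy as np

def apply_color_constancy(image, illum_map, eps=1e-6):
    """Divide out a per-pixel RGB illumination map (von Kries-style
    correction), then renormalize so the result stays in [0, 1]."""
    corrected = image / (illum_map + eps)
    return corrected / max(corrected.max(), eps)

# Toy scene: left half lit by a warm source, right half by a cool one.
h, w = 4, 6
reflectance = np.full((h, w, 3), 0.5)          # uniform gray surface
illum_map = np.ones((h, w, 3))
illum_map[:, : w // 2] = [1.0, 0.8, 0.6]       # warm illuminant
illum_map[:, w // 2 :] = [0.6, 0.8, 1.0]       # cool illuminant
observed = reflectance * illum_map             # image under mixed lighting
white_balanced = apply_color_constancy(observed, illum_map)
```

With a perfect illumination map, the corrected image recovers a uniform surface; the hard part, which the paper tackles with GAN-based domain translation, is estimating that map from the observed image alone.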
Related papers
- Diff-Mosaic: Augmenting Realistic Representations in Infrared Small Target Detection via Diffusion Prior [63.64088590653005]
We propose Diff-Mosaic, a data augmentation method based on the diffusion model.
We introduce an enhancement network called Pixel-Prior, which generates highly coordinated and realistic Mosaic images.
In the second stage, we propose an image enhancement strategy named Diff-Prior. This strategy utilizes diffusion priors to model images in the real-world scene.
arXiv Detail & Related papers (2024-06-02T06:23:05Z)
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- Attentive Illumination Decomposition Model for Multi-Illuminant White Balancing [27.950125640986805]
White balance (WB) algorithms in many commercial cameras assume single and uniform illumination.
We present a deep white balancing model that leverages the slot attention, where each slot is in charge of representing individual illuminants.
This design enables the model to generate chromaticities and weight maps for individual illuminants, which are then fused to compose the final illumination map.
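The fusion step described above can be sketched in a few lines of numpy (shapes and names are illustrative assumptions, not the paper's slot-attention implementation):

```python
import numpy as np

def fuse_illuminants(chromas, weights):
    """Compose a per-pixel illumination map from K illuminant
    chromaticities (K, 3) and per-pixel weight maps (H, W, K)."""
    weights = weights / weights.sum(axis=-1, keepdims=True)  # normalize mixture
    return np.einsum("hwk,kc->hwc", weights, chromas)

chromas = np.array([[1.0, 0.8, 0.6],   # warm illuminant
                    [0.6, 0.8, 1.0]])  # cool illuminant
weights = np.zeros((2, 2, 2))
weights[..., 0] = [[1.0, 0.5], [0.5, 0.0]]  # warm dominates top-left
weights[..., 1] = 1.0 - weights[..., 0]     # cool dominates bottom-right
illum_map = fuse_illuminants(chromas, weights)
```

Pixels dominated by one slot take that illuminant's chromaticity; mixed pixels receive a weighted blend, which is what makes the composed map spatially varying.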
arXiv Detail & Related papers (2024-02-28T12:15:29Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes [16.176896461798993]
We propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources.
The proposed method enforces smoothness within neighboring pixels, by regularizing the training with the total variation loss.
A bilateral filter is further applied to enhance the natural appearance of the estimated images while preserving edges.
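The total variation regularizer mentioned above penalizes differences between neighboring pixels of the estimated illumination map. A minimal numpy sketch (an anisotropic TV sum as an illustration, not the paper's exact loss) looks like this:

```python
import numpy as np

def total_variation(illum_map):
    """Anisotropic total variation of an (H, W, C) illumination map:
    the sum of absolute differences between vertically and
    horizontally adjacent pixels."""
    dh = np.abs(np.diff(illum_map, axis=0)).sum()  # vertical neighbors
    dw = np.abs(np.diff(illum_map, axis=1)).sum()  # horizontal neighbors
    return dh + dw

smooth = np.ones((4, 4, 3))       # perfectly smooth map: zero penalty
noisy = smooth.copy()
noisy[1, 1] += 1.0                # a single-pixel spike raises the penalty
```

Minimizing this term during training pushes the network toward piecewise-smooth illumination estimates, which matches the physical prior that lighting varies slowly across most scenes.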
arXiv Detail & Related papers (2024-02-05T11:42:19Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- MIMT: Multi-Illuminant Color Constancy via Multi-Task Local Surface and Light Color Learning [42.72878256074646]
We introduce a multi-task learning method to discount multiple light colors in a single input image.
To have better cues of the local surface/light colors under multiple light color conditions, we design a novel multi-task learning framework.
Our model achieves 47.1% improvement compared to a state-of-the-art multi-illuminant color constancy method on a multi-illuminant dataset.
arXiv Detail & Related papers (2022-11-16T09:00:20Z)
- Revisiting and Optimising a CNN Colour Constancy Method for Multi-Illuminant Estimation [0.76146285961466]
The aim of colour constancy is to discount the effect of the scene illumination from the image colours and restore the colours of the objects as captured under a 'white' illuminant.
We present in this paper a simple yet very effective framework using a deep CNN-based method to estimate and use multiple illuminants for colour constancy.
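For a concrete sense of the single-illuminant case being discounted here, the classic gray-world baseline (a standard textbook method, not this paper's CNN) estimates the illuminant as the mean image colour and divides it out with a diagonal correction:

```python
import numpy as np

def gray_world(image, eps=1e-6):
    """Estimate a global illuminant as the per-channel mean RGB, then
    apply a diagonal (von Kries) correction toward a neutral colour."""
    illuminant = image.reshape(-1, 3).mean(axis=0)
    corrected = image * (illuminant.mean() / (illuminant + eps))
    return illuminant, corrected

# A gray scene under a warm cast: the correction neutralizes the cast.
scene = np.full((4, 4, 3), 0.5) * np.array([1.2, 1.0, 0.8])
illuminant, corrected = gray_world(scene)
```

A single global estimate like this is exactly what breaks down under multiple illuminants, which is why the papers in this list estimate spatially varying illumination instead.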
arXiv Detail & Related papers (2022-11-03T16:33:56Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors, as well as additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.