Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes
- URL: http://arxiv.org/abs/2402.02922v1
- Date: Mon, 5 Feb 2024 11:42:19 GMT
- Title: Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes
- Authors: Umut Cem Entok, Firas Laakom, Farhad Pakdaman, Moncef Gabbouj
- Abstract summary: We propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources.
The proposed method enforces smoothness within neighboring pixels, by regularizing the training with the total variation loss.
A bilateral filter is further employed to enhance the natural appearance of the estimated images while preserving edges.
- Score: 16.176896461798993
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most scenes are illuminated by several light sources, where the traditional
assumption of uniform illumination is invalid. This issue is ignored in most
color constancy methods, primarily due to the complex spatial impact of
multiple light sources on the image. Moreover, most existing multi-illuminant
methods fail to preserve the smooth change of illumination, which stems from
spatial dependencies in natural images. Motivated by this, we propose a novel
multi-illuminant color constancy method, by learning pixel-wise illumination
maps caused by multiple light sources. The proposed method enforces smoothness
within neighboring pixels, by regularizing the training with the total
variation loss. Moreover, a bilateral filter is further employed to enhance
the natural appearance of the estimated images while preserving edges.
Additionally, we propose a label-smoothing technique that enables the model to
generalize well despite the uncertainties in ground truth. Quantitative and
qualitative experiments demonstrate that the proposed method outperforms the
state-of-the-art.
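The smoothness regularizer described in the abstract can be sketched as an anisotropic total-variation penalty on the predicted pixel-wise illumination map. Below is a minimal NumPy illustration of that idea; the function name, array layout, and example values are assumptions for illustration, not the paper's actual code:

```python
import numpy as np

def total_variation(illum):
    """Anisotropic total variation of a pixel-wise illumination map.

    illum: H x W x C array of per-pixel illuminant estimates.
    Summing absolute differences between neighboring pixels penalizes
    abrupt spatial changes, encouraging the smooth illumination
    transitions expected in natural multi-illuminant scenes.
    """
    dh = np.abs(np.diff(illum, axis=0)).sum()  # vertical neighbor differences
    dw = np.abs(np.diff(illum, axis=1)).sum()  # horizontal neighbor differences
    return dh + dw

# A perfectly uniform illuminant has zero total variation, while a
# hard illumination edge between two light sources is penalized.
flat = np.ones((4, 4, 3))
edgy = np.ones((4, 4, 3))
edgy[:, 2:] = 2.0  # abrupt jump between two illuminant regions
print(total_variation(flat))  # 0.0
print(total_variation(edgy))  # 12.0
```

In training, a term like this would be added to the main loss with a weighting coefficient, trading off per-pixel accuracy against spatial smoothness.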
Related papers
- Dual High-Order Total Variation Model for Underwater Image Restoration [13.789310785350484]
Underwater image enhancement and restoration (UIER) is a crucial means of improving the visual quality of underwater images.
We propose an effective variational framework based on an extended underwater image formation model (UIFM).
In the proposed framework, weight-factor-based color compensation is combined with color balance to compensate for the attenuated color channels and remove the color cast.
arXiv Detail & Related papers (2024-07-20T13:06:37Z)
- NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z)
- Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering [63.24476194987721]
Inverse rendering, the process of inferring scene properties from images, is a challenging inverse problem.
Most existing solutions incorporate priors into the inverse-rendering pipeline to encourage plausible solutions.
We propose a novel scheme that integrates a denoising probabilistic diffusion model pre-trained on natural illumination maps into an optimization framework.
arXiv Detail & Related papers (2023-09-30T12:39:28Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach under diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light-source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Generative Models for Multi-Illumination Color Constancy [23.511249515559122]
We propose a physics-driven, seed-based multi-illuminant color constancy method.
GANs are exploited to model illumination estimation as an image-to-image domain translation problem.
Experiments on single- and multi-illuminant datasets show that our methods outperform state-of-the-art methods.
arXiv Detail & Related papers (2021-09-02T12:24:40Z)
- Intrinsic Image Transfer for Illumination Manipulation [1.2387676601792899]
This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation.
It creates a local image translation between two illumination surfaces.
We illustrate that all losses can be reduced without the necessity of taking an intrinsic image decomposition.
arXiv Detail & Related papers (2021-07-01T19:12:24Z)
- Shed Various Lights on a Low-Light Image: Multi-Level Enhancement Guided by Arbitrary References [17.59529931863947]
This paper proposes a neural network for multi-level low-light image enhancement.
Inspired by style transfer, our method decomposes an image into two low-coupling feature components in the latent space.
In such a way, the network learns to extract scene-invariant and brightness-specific information from a set of image pairs.
arXiv Detail & Related papers (2021-01-04T07:38:51Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive to the state-of-the-art methods, and has significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.