MIMT: Multi-Illuminant Color Constancy via Multi-Task Local Surface and Light Color Learning
- URL: http://arxiv.org/abs/2211.08772v3
- Date: Tue, 22 Aug 2023 19:45:17 GMT
- Title: MIMT: Multi-Illuminant Color Constancy via Multi-Task Local Surface and Light Color Learning
- Authors: Shuwei Li, Jikai Wang, Michael S. Brown, Robby T. Tan
- Abstract summary: We introduce a multi-task learning method to discount multiple light colors in a single input image.
To obtain better cues for the local surface and light colors under multiple light colors, we design a novel multi-task learning framework.
Our model achieves a 47.1% improvement over a state-of-the-art multi-illuminant color constancy method on a multi-illuminant dataset.
- Score: 42.72878256074646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The assumption of a uniform light color distribution is no longer applicable
in scenes that contain multiple light colors. Most color constancy methods are
designed to deal with a single light color and are therefore erroneous when applied
to scenes with multiple light colors. The spatial variability of multiple light colors
makes the color constancy problem more challenging and requires the
extraction of local surface and light information. Motivated by this, we introduce
a multi-task learning method to discount multiple light colors in a single
input image. To obtain better cues for the local surface and light colors under
multiple light colors, we design a novel multi-task learning framework. Our
framework includes the auxiliary tasks of achromatic-pixel detection and
surface-color similarity prediction, which provide better cues for the local light
and surface colors, respectively. Moreover, to ensure that our model keeps
surface colors constant regardless of variations in light color, a novel local
surface color feature preservation scheme is developed. We demonstrate that our
model achieves a 47.1% improvement (reducing the mean angular error from 4.69 to
2.48 degrees) over a state-of-the-art multi-illuminant color constancy method on
the multi-illuminant LSMI dataset.
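As a concrete reference for the numbers above, the sketch below shows how the mean angular error between estimated and ground-truth illuminant colors is typically computed, and checks the reported improvement arithmetic ((4.69 - 2.48) / 4.69 ≈ 47.1%). The small multi-head network is a hypothetical illustration of the multi-task layout the abstract describes (a shared encoder feeding an illuminant head plus achromatic-pixel and surface-similarity auxiliary heads); it is not the authors' actual MIMT architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def angular_error(pred, gt, eps=1e-8):
    """Per-pixel angular error (degrees) between predicted and
    ground-truth illuminant RGB vectors of shape (N, 3, H, W)."""
    cos = F.cosine_similarity(pred, gt, dim=1).clamp(-1 + eps, 1 - eps)
    return torch.rad2deg(torch.acos(cos))

class MultiTaskCC(nn.Module):
    """Hypothetical multi-task layout: a shared encoder with three heads
    (per-pixel illuminant color, achromatic-pixel mask, surface-color
    similarity). Illustrative only; not the MIMT architecture."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.illum_head = nn.Conv2d(feat, 3, 1)  # per-pixel light color
        self.achro_head = nn.Conv2d(feat, 1, 1)  # achromatic-pixel logits
        self.simil_head = nn.Conv2d(feat, 1, 1)  # surface-similarity logits

    def forward(self, x):
        f = self.encoder(x)
        return self.illum_head(f), self.achro_head(f), self.simil_head(f)

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    illum, achro, simil = MultiTaskCC()(img)
    err = angular_error(illum, torch.rand_like(illum))
    print(err.mean())                  # mean angular error in degrees
    print((4.69 - 2.48) / 4.69 * 100)  # ≈ 47.1% reduction, as reported
```

Because the angular error compares illuminant directions in RGB space, it is invariant to the overall brightness of the estimate, which is why it is the standard metric for color constancy.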
Related papers
- MultiColor: Image Colorization by Learning from Multiple Color Spaces [4.738828630428634]
MultiColor is a new learning-based approach to automatically colorize grayscale images.
We employ a set of dedicated colorization modules, one for each color space.
With the predicted color channels representing the various color spaces, a complementary network is designed to exploit their complementarity and generate pleasing, reasonable colorized images.
arXiv Detail & Related papers (2024-08-08T02:34:41Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but its trainable parameters also allow it to adapt to low-light images across different illumination ranges (a toy sketch of such a trainable decoupling follows the list below).
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant Scenes [16.176896461798993]
We propose a novel multi-illuminant color constancy method that learns the pixel-wise illumination maps caused by multiple light sources.
The method enforces smoothness between neighboring pixels by regularizing training with a total variation loss (a minimal sketch of this regularizer appears after this list).
A bilateral filter is further applied to enhance the natural appearance of the estimated images while preserving edges.
arXiv Detail & Related papers (2024-02-05T11:42:19Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach under diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Generative Models for Multi-Illumination Color Constancy [23.511249515559122]
We propose a seed-based (physics-driven) multi-illumination color constancy method.
GANs are exploited to model illumination estimation as an image-to-image domain translation problem.
Experiments on single- and multi-illumination datasets show that our methods outperform state-of-the-art methods.
arXiv Detail & Related papers (2021-09-02T12:24:40Z)
- Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multi-color-space embedding.
arXiv Detail & Related papers (2021-04-27T07:35:30Z)
- Shed Various Lights on a Low-Light Image: Multi-Level Enhancement Guided by Arbitrary References [17.59529931863947]
This paper proposes a neural network for multi-level low-light image enhancement.
Inspired by style transfer, our method decomposes an image into two low-coupling feature components in the latent space.
In this way, the network learns to extract scene-invariant and brightness-specific information from a set of image pairs.
arXiv Detail & Related papers (2021-01-04T07:38:51Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
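For the HVI entry above, the toy module below illustrates the general idea of a trainable color space that separates an intensity channel from chroma, with one learnable parameter gating the chroma at low intensities. It is a stand-in sketch only; the actual Horizontal/Vertical-Intensity transform is defined in that paper and differs from this.

```python
import torch
import torch.nn as nn

class ToyTrainableColorSpace(nn.Module):
    """Toy stand-in for a trainable color space (NOT the actual HVI
    transform): splits RGB into an intensity channel and a chroma part,
    with one learnable parameter gating the chroma by intensity."""
    def __init__(self):
        super().__init__()
        self.k = nn.Parameter(torch.tensor(1.0))  # learnable gating strength

    def forward(self, rgb):  # rgb: (N, 3, H, W), values in [0, 1]
        intensity, _ = rgb.max(dim=1, keepdim=True)  # brightness channel
        chroma = rgb - intensity                     # color relative to peak
        gate = torch.sigmoid(self.k * intensity)     # learned, intensity-dependent
        return torch.cat([intensity, chroma * gate], dim=1)  # (N, 4, H, W)

x = torch.rand(1, 3, 8, 8)
print(ToyTrainableColorSpace()(x).shape)  # torch.Size([1, 4, 8, 8])
```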
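Following up on the Pixel-Wise Color Constancy entry, the sketch below shows a standard total variation regularizer on a per-pixel illumination map, with an assumed MSE data term and an assumed loss weight of 0.1; the paper's exact configuration may differ, and the bilateral filtering step is only indicated in a comment.

```python
import torch
import torch.nn.functional as F

def total_variation_loss(illum_map):
    """Anisotropic total variation on a (N, 3, H, W) illumination map:
    penalizes color differences between vertically and horizontally
    neighboring pixels, encouraging smooth illumination estimates."""
    dh = (illum_map[:, :, 1:, :] - illum_map[:, :, :-1, :]).abs().mean()
    dw = (illum_map[:, :, :, 1:] - illum_map[:, :, :, :-1]).abs().mean()
    return dh + dw

# Hypothetical training step: a data term plus the smoothness term.
pred = torch.rand(2, 3, 32, 32, requires_grad=True)  # predicted illumination map
gt = torch.rand(2, 3, 32, 32)                        # ground-truth map
loss = F.mse_loss(pred, gt) + 0.1 * total_variation_loss(pred)  # 0.1 is an assumed weight
loss.backward()

# At inference, an edge-preserving bilateral filter (e.g. OpenCV's
# cv2.bilateralFilter) can be applied to the corrected image, matching
# the post-processing the entry describes.
```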
This list is automatically generated from the titles and abstracts of the papers on this site.