Attentive Illumination Decomposition Model for Multi-Illuminant White
Balancing
- URL: http://arxiv.org/abs/2402.18277v1
- Date: Wed, 28 Feb 2024 12:15:29 GMT
- Title: Attentive Illumination Decomposition Model for Multi-Illuminant White
Balancing
- Authors: Dongyoung Kim, Jinwoo Kim, Junsang Yu, Seon Joo Kim
- Abstract summary: White balance (WB) algorithms in many commercial cameras assume single and uniform illumination.
We present a deep white balancing model that leverages slot attention, where each slot is responsible for representing an individual illuminant.
This design enables the model to generate chromaticities and weight maps for individual illuminants, which are then fused to compose the final illumination map.
- Score: 27.950125640986805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: White balance (WB) algorithms in many commercial cameras assume single and
uniform illumination, leading to undesirable results when multiple lighting
sources with different chromaticities exist in the scene. Prior research on
multi-illuminant WB typically predicts illumination at the pixel level without
fully grasping the scene's actual lighting conditions, including the number and
color of light sources. This often results in unnatural outcomes lacking in
overall consistency. To handle this problem, we present a deep white balancing
model that leverages slot attention, where each slot is responsible for
representing individual illuminants. This design enables the model to generate
chromaticities and weight maps for individual illuminants, which are then fused
to compose the final illumination map. Furthermore, we propose the
centroid-matching loss, which regulates the activation of each slot based on
the color range, thereby enabling the model to separate illumination more
effectively. Our method achieves state-of-the-art performance on both
single- and multi-illuminant WB benchmarks, and also offers additional
information such as the number of illuminants in the scene and their
chromaticity. This capability allows for illumination editing, an application
not feasible with prior methods.
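The abstract describes each slot emitting one illuminant chromaticity plus a spatial weight map, which are fused into the final per-pixel illumination map. A minimal NumPy sketch of that fusion step, using hypothetical shapes and random stand-ins for the network's predictions (the paper's actual slot-attention architecture and centroid-matching loss are not reproduced here):

```python
import numpy as np

# Hypothetical setup: K illuminant slots, an H x W image.
K, H, W = 3, 4, 4
rng = np.random.default_rng(0)

# Each slot would predict one RGB chromaticity for its illuminant...
chromaticities = rng.random((K, 3))                    # (K, 3)
# ...and a spatial weight map saying where that illuminant dominates.
logits = rng.random((K, H, W))
weights = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax over slots

# Fuse: per-pixel illumination is the weight-blended mix of slot colors.
illum_map = np.einsum('khw,kc->hwc', weights, chromaticities)  # (H, W, 3)

# White balance: divide the raw image by the predicted illumination.
raw = rng.random((H, W, 3))
wb = raw / np.maximum(illum_map, 1e-6)
```

Because the per-slot chromaticities are explicit, the same decomposition also yields the number and color of light sources, which is what makes the illumination editing mentioned above possible.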
Related papers
- DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- Colorful Diffuse Intrinsic Image Decomposition in the Wild [0.0]
Intrinsic image decomposition aims to separate surface reflectance from the effects of illumination in a single photograph.
In this work, we separate an input image into its diffuse albedo, colorful diffuse shading, and specular residual components.
Our extended intrinsic model enables illumination-aware analysis of photographs and can be used for image editing applications.
arXiv Detail & Related papers (2024-09-20T17:59:40Z)
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- URHand: Universal Relightable Hands [64.25893653236912]
We present URHand, the first universal relightable hand model that generalizes across viewpoints, poses, illuminations, and identities.
Our model allows few-shot personalization using images captured with a mobile phone, and is ready to be photorealistically rendered under novel illuminations.
arXiv Detail & Related papers (2024-01-10T18:59:51Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Modeling the Lighting in Scenes as Style for Auto White-Balance Correction [3.441021278275805]
We introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor.
Our AWB method does not require any illumination estimation step, yet includes a network that learns to generate weighting maps for the images.
Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results.
arXiv Detail & Related papers (2022-10-17T13:35:17Z)
- Template matching with white balance adjustment under multiple illuminants [17.134566958534634]
We propose a novel template matching method with a white-balance adjustment, called N-white balancing, designed for multi-illuminant scenes.
Experiments demonstrate the effectiveness of the proposed method in object detection tasks under various illumination conditions.
arXiv Detail & Related papers (2022-08-03T12:57:18Z)
- Auto White-Balance Correction for Mixed-Illuminant Scenes [52.641704254001844]
Auto white balance (AWB) is applied by camera hardware to remove color cast caused by scene illumination.
This paper presents an effective AWB method to deal with such mixed-illuminant scenes.
Our method does not require illuminant estimation, as is the case in traditional camera AWB modules.
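One common way to correct mixed-illuminant scenes without an explicit illuminant estimate, consistent with this summary, is to blend several candidate renderings of the scene under fixed WB presets using predicted per-pixel weight maps. A hedged NumPy sketch with made-up shapes and random stand-ins for both the renderings and the network outputs:

```python
import numpy as np

# Hypothetical setup: P candidate renderings under fixed WB presets
# (e.g. daylight, tungsten, shade) of an H x W image.
P, H, W = 3, 4, 4
rng = np.random.default_rng(1)
renders = rng.random((P, H, W, 3))   # P candidate WB renderings

# A network would predict a per-pixel weight map for each preset;
# random logits stand in here.
logits = rng.random((P, H, W))
weights = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax over presets

# Final image: per-pixel convex combination of the candidates, so no
# explicit illuminant color is ever estimated.
result = np.einsum('phw,phwc->hwc', weights, renders)
```

Because the blend is convex, every output pixel stays within the range spanned by the candidate renderings at that pixel.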
arXiv Detail & Related papers (2021-09-17T20:13:31Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.