Modeling the Lighting in Scenes as Style for Auto White-Balance
Correction
- URL: http://arxiv.org/abs/2210.09090v1
- Date: Mon, 17 Oct 2022 13:35:17 GMT
- Title: Modeling the Lighting in Scenes as Style for Auto White-Balance
Correction
- Authors: Furkan Kınlı, Doğa Yılmaz, Barış Özcan, Furkan Kıraç
- Abstract summary: We introduce an enhanced auto white-balance (AWB) method that models the lighting in single- and mixed-illuminant scenes as the style factor.
Our AWB method does not require any illumination estimation step, yet contains a network learning to generate the weighting maps of the images.
Experiments on single- and mixed-illuminant datasets demonstrate that our proposed method achieves promising correction results.
- Score: 3.441021278275805
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Style may refer to different concepts (e.g. painting style, hairstyle,
texture, color, filter, etc.) depending on how the feature space is formed. In
this work, we propose a novel idea of interpreting the lighting in the single-
and multi-illuminant scenes as the concept of style. To verify this idea, we
introduce an enhanced auto white-balance (AWB) method that models the lighting
in single- and mixed-illuminant scenes as the style factor. Our AWB method does
not require any illumination estimation step, yet contains a network learning
to generate the weighting maps of the images with different WB settings.
The proposed network utilizes the style information extracted from the scene by
a multi-head style extraction module. AWB correction is completed after blending
these weighting maps and the scene. Experiments on single- and mixed-illuminant
datasets demonstrate that our proposed method achieves promising correction
results compared to recent works. This shows that the lighting in
scenes with multiple illuminants can be modeled by the concept of style.
Source code and trained models are available on
https://github.com/birdortyedi/lighting-as-style-awb-correction.
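The blending step described in the abstract can be sketched in a few lines: a network predicts one weighting map per WB rendering, the maps are normalized to sum to one at every pixel, and the corrected image is their per-pixel weighted sum. A minimal NumPy sketch, assuming softmax normalization and the function name (both illustrative, not the paper's implementation):

```python
import numpy as np

def blend_wb_renderings(renderings, weight_logits):
    """Blend renderings of the same scene captured with different fixed
    WB settings, using per-pixel weighting maps (softmax over settings)."""
    # renderings: (N, H, W, 3) float images, one per WB preset
    # weight_logits: (N, H, W) raw per-pixel scores from the network
    logits = weight_logits - weight_logits.max(axis=0, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)          # sums to 1 per pixel
    return (weights[..., None] * renderings).sum(axis=0)   # (H, W, 3)

# toy example: two renderings; the network strongly favors the first
imgs = np.stack([np.full((2, 2, 3), 0.2), np.full((2, 2, 3), 0.8)])
logits = np.stack([np.full((2, 2), 5.0), np.zeros((2, 2))])
out = blend_wb_renderings(imgs, logits)  # close to the 0.2 rendering
```

Because the weights are normalized per pixel, the output stays inside the range spanned by the input renderings, and spatially varying weights let different image regions follow different WB settings in mixed-illuminant scenes.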
Related papers
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- Attentive Illumination Decomposition Model for Multi-Illuminant White Balancing [27.950125640986805]
White balance (WB) algorithms in many commercial cameras assume single and uniform illumination.
We present a deep white balancing model that leverages slot attention, where each slot is in charge of representing an individual illuminant.
This design enables the model to generate chromaticities and weight maps for individual illuminants, which are then fused to compose the final illumination map.
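That fusion step can be sketched directly: each slot contributes one illuminant chromaticity and one weight map, and a per-pixel weighted sum of the slot colors yields the final illumination map. A hedged NumPy sketch (the function name, softmax assignment, and toy illuminants are assumptions, not the paper's code):

```python
import numpy as np

def compose_illumination_map(chromaticities, weight_logits):
    """Fuse per-slot chromaticities and weight maps into a per-pixel
    illumination map via a soft, weighted sum over the slots."""
    # chromaticities: (K, 3) one RGB illuminant color per slot
    # weight_logits: (K, H, W) per-pixel slot scores
    logits = weight_logits - weight_logits.max(axis=0, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)         # soft slot assignment per pixel
    return np.einsum("khw,kc->hwc", w, chromaticities)  # (H, W, 3)

# two slots: a warm and a cool illuminant, split left/right
chroma = np.array([[1.0, 0.8, 0.6], [0.6, 0.8, 1.0]])
logits = np.zeros((2, 4, 4))
logits[0, :, :2] = 10.0   # slot 0 dominates the left half
logits[1, :, 2:] = 10.0   # slot 1 dominates the right half
illum = compose_illumination_map(chroma, logits)
```

In the toy example, pixels in the left half take on the warm illuminant's chromaticity and pixels in the right half the cool one's, illustrating how soft slot assignments compose a spatially varying illumination map.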
arXiv Detail & Related papers (2024-02-28T12:15:29Z)
- Relightful Harmonization: Lighting-aware Portrait Background Replacement [23.19641174787912]
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects for the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image background.
Second, we introduce an alignment network that aligns lighting features learned from image background with lighting features learned from panorama environment maps.
arXiv Detail & Related papers (2023-12-11T23:20:31Z)
- StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions [1.933681537640272]
We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data.
arXiv Detail & Related papers (2022-05-20T17:59:40Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z)
- Auto White-Balance Correction for Mixed-Illuminant Scenes [52.641704254001844]
Auto white balance (AWB) is applied by camera hardware to remove color cast caused by scene illumination.
This paper presents an effective AWB method to deal with such mixed-illuminant scenes.
Our method does not require illuminant estimation, as is the case in traditional camera AWB modules.
arXiv Detail & Related papers (2021-09-17T20:13:31Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Neural Light Transport for Relighting and View Synthesis [70.39907425114302]
Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
arXiv Detail & Related papers (2020-08-09T20:13:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.