Time-Aware Auto White Balance in Mobile Photography
- URL: http://arxiv.org/abs/2504.05623v1
- Date: Tue, 08 Apr 2025 02:45:37 GMT
- Title: Time-Aware Auto White Balance in Mobile Photography
- Authors: Mahmoud Afifi, Luxi Zhao, Abhijith Punnappurath, Mohammed A. Abdelsalam, Ran Zhang, Michael S. Brown
- Abstract summary: We introduce a dataset of 3,224 smartphone images with contextual metadata collected at various times of day and under diverse lighting conditions. The dataset includes ground-truth illuminant colors, determined using a color chart, and user-preferred illuminants validated through a user study.
- Score: 32.74472735591053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cameras rely on auto white balance (AWB) to correct undesirable color casts caused by scene illumination and the camera's spectral sensitivity. This is typically achieved using an illuminant estimator that determines the global color cast solely from the color information in the camera's raw sensor image. Mobile devices provide valuable additional metadata, such as capture timestamp and geolocation, that offers strong contextual clues to help narrow down the possible illumination solutions. This paper proposes a lightweight illuminant estimation method that incorporates such contextual metadata, along with additional capture information and image colors, into a compact model (~5K parameters), achieving promising results that match or surpass those of much larger models. To validate our method, we introduce a dataset of 3,224 smartphone images with contextual metadata collected at various times of day and under diverse lighting conditions. The dataset includes ground-truth illuminant colors, determined using a color chart, and user-preferred illuminants validated through a user study, providing a comprehensive benchmark for AWB evaluation.
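Once an illuminant is estimated, AWB removes the cast with a per-channel gain. A minimal sketch of this diagonal (von Kries) correction is below; the function name and example illuminant are illustrative, not from the paper.

```python
import numpy as np

def apply_white_balance(raw: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
    """Divide each channel by the estimated illuminant color, normalized
    so the green-channel gain is 1, removing the global color cast."""
    ill = np.asarray(illuminant, dtype=np.float64)
    gains = ill[1] / ill                 # (g/r, 1, g/b)
    return raw * gains                   # broadcasts over an H x W x 3 image

# A neutral gray patch under a warm (reddish) illuminant:
illuminant = np.array([0.6, 0.5, 0.3])
patch = np.full((2, 2, 3), 0.5) * illuminant   # color-cast raw values
corrected = apply_white_balance(patch, illuminant)
# All three channels are now equal: the cast is removed.
```

The illuminant estimator's job is precisely to supply the `illuminant` vector; the paper's contribution is predicting it from a compact model fed with image colors plus capture metadata.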
Related papers
- SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events [53.79905461386883]
Event cameras, with a high dynamic range exceeding $120dB$, significantly outperform traditional embedded cameras. We propose a novel research question: how to employ events to enhance and adaptively adjust the brightness of images captured under broad lighting conditions. Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation.
arXiv Detail & Related papers (2025-02-28T14:55:37Z)
- Attentive Illumination Decomposition Model for Multi-Illuminant White Balancing [27.950125640986805]
White balance (WB) algorithms in many commercial cameras assume single and uniform illumination.
We present a deep white balancing model that leverages the slot attention, where each slot is in charge of representing individual illuminants.
This design enables the model to generate chromaticities and weight maps for individual illuminants, which are then fused to compose the final illumination map.
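The fusion step described above can be sketched as a weighted blend of per-slot chromaticities by their spatial weight maps. The names, shapes, and values here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def fuse_illuminants(chromas: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """chromas: (K, 3) illuminant color per slot.
    weights: (K, H, W) non-negative spatial weights.
    Returns an (H, W, 3) pixel-wise illumination map."""
    w = weights / weights.sum(axis=0, keepdims=True)   # normalize over slots
    return np.einsum('khw,kc->hwc', w, chromas)

# Two illuminants, each dominating one half of a 2x4 scene:
chromas = np.array([[0.8, 0.5, 0.2],    # warm source
                    [0.2, 0.5, 0.8]])   # cool source
weights = np.zeros((2, 2, 4))
weights[0, :, :2] = 1.0                 # warm light on the left half
weights[1, :, 2:] = 1.0                 # cool light on the right half
illum_map = fuse_illuminants(chromas, weights)
```

In the paper the weight maps come from slot attention; here they are hard-coded to make the blending behavior easy to inspect.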
arXiv Detail & Related papers (2024-02-28T12:15:29Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve that by introducing a convolutional mixture density network that generates distorted colors of the scene based on the illumination differences.
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance and Color Prediction [0.7456526005219319]
Laval Photometric Indoor HDR dataset is the first large-scale photometrically calibrated dataset of high dynamic range 360deg panoramas.
We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device.
The resulting dataset is a rich representation of indoor scenes which displays a wide range of illuminance and color, and varied types of light sources.
arXiv Detail & Related papers (2023-04-24T18:10:25Z)
- Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose cross-sensor self-supervised training to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z)
- Transform your Smartphone into a DSLR Camera: Learning the ISP in the Wild [159.71025525493354]
We propose a trainable Image Signal Processing framework that produces DSLR quality images given RAW images captured by a smartphone.
To address the color misalignments between training image pairs, we employ a color-conditional ISP network and optimize a novel parametric color mapping between each input RAW and reference DSLR image.
arXiv Detail & Related papers (2022-03-20T20:13:59Z)
- Auto White-Balance Correction for Mixed-Illuminant Scenes [52.641704254001844]
Auto white balance (AWB) is applied by camera hardware to remove color cast caused by scene illumination.
This paper presents an effective AWB method to deal with such mixed-illuminant scenes.
Our method does not require illuminant estimation, as is the case in traditional camera AWB modules.
arXiv Detail & Related papers (2021-09-17T20:13:31Z)
- Semi-Supervised Raw-to-Raw Mapping [19.783856963405754]
The raw-RGB colors of a camera sensor vary due to the spectral sensitivity differences across different sensor makes and models.
We present a semi-supervised raw-to-raw mapping method trained on a small set of paired images alongside an unpaired set of images captured by each camera device.
arXiv Detail & Related papers (2021-06-25T21:01:45Z)
- Illumination Estimation Challenge: experience of past two years [57.13714732760851]
The 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The challenge had several tracks: general, indoor, and two-illuminant with each of them focusing on different parameters of the scenes.
Its other main features are a new large dataset of about 5,000 images taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes captured in numerous countries under a huge variety of illuminations extracted using the SpyderCube calibration object, and a contest-like markup for the images from the Cube+ dataset used in IEC#1.
arXiv Detail & Related papers (2020-12-31T17:59:19Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
- A Multi-Hypothesis Approach to Color Constancy [22.35581217222978]
Current approaches frame the color constancy problem as learning camera specific illuminant mappings.
We propose a Bayesian framework that naturally handles color constancy ambiguity via a multi-hypothesis strategy.
Our method provides state-of-the-art accuracy on multiple public datasets while maintaining real-time execution.
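The multi-hypothesis idea above can be illustrated as a posterior-weighted average over candidate illuminants. The candidates and scores below are made up for the sketch; the paper's actual likelihood model is learned, not hand-specified.

```python
import numpy as np

candidates = np.array([[0.60, 0.50, 0.30],   # warm hypothesis
                       [0.40, 0.50, 0.60],   # cool hypothesis
                       [0.50, 0.50, 0.50]])  # neutral hypothesis
log_scores = np.array([2.0, -1.0, 0.5])      # hypothetical log-likelihoods

# Softmax over the hypothesis scores gives a posterior, and the final
# illuminant estimate is the posterior-weighted blend of the candidates.
posterior = np.exp(log_scores - log_scores.max())
posterior /= posterior.sum()
estimate = posterior @ candidates            # (3,) blended illuminant
```

Keeping several hypotheses alive and blending them at the end is what lets a Bayesian treatment express the inherent ambiguity of color constancy, rather than committing to a single point estimate up front.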
arXiv Detail & Related papers (2020-02-28T18:05:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.