Improving the color accuracy of lighting estimation models
- URL: http://arxiv.org/abs/2509.18390v1
- Date: Mon, 22 Sep 2025 20:23:33 GMT
- Title: Improving the color accuracy of lighting estimation models
- Authors: Zitian Zhang, Joshua Urban Davis, Jeanne Phuong Anh Vu, Jiangtao Kuang, Jean-François Lalonde
- Abstract summary: We investigate the color robustness of lighting estimation methods. We find that preprocessing the input image with a pre-trained white balance network improves color robustness.
- Score: 11.218484596935895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in high dynamic range (HDR) lighting estimation from a single image have opened new possibilities for augmented reality (AR) applications. Predicting complex lighting environments from a single input image allows for the realistic rendering and compositing of virtual objects. In this work, we investigate the color robustness of such methods -- an often overlooked yet critical factor for achieving visual realism. While most evaluations conflate color with other lighting attributes (e.g., intensity, direction), we isolate color as the primary variable of interest. Rather than introducing a new lighting estimation algorithm, we explore whether simple adaptation techniques can enhance the color accuracy of existing models. Using a novel HDR dataset featuring diverse lighting colors, we systematically evaluate several adaptation strategies. Our results show that preprocessing the input image with a pre-trained white balance network improves color robustness, outperforming other strategies across all tested scenarios. Notably, this approach requires no retraining of the lighting estimation model. We further validate the generality of this finding by applying the technique to three state-of-the-art lighting estimation methods from recent literature.
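The adaptation strategy the abstract describes can be sketched in a few lines. The sketch below is illustrative only: it substitutes a simple gray-world correction for the paper's pre-trained white balance network, and `lighting_estimator` stands in for any off-the-shelf lighting estimation model; both names are hypothetical.

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each channel so its mean matches the global mean
    (gray-world assumption). A simple stand-in for the paper's
    pre-trained white balance network."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0.0, 1.0)

def color_robust_lighting_estimate(img, lighting_estimator):
    """Hypothetical adapter: white-balance the input first, then run
    any off-the-shelf lighting estimator unchanged (no retraining)."""
    return lighting_estimator(gray_world_white_balance(img))

# Demo on a synthetic image with a warm color cast.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.5, (32, 32, 3)) * np.array([1.0, 0.7, 0.5])
balanced = gray_world_white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now roughly equal
```

The key point from the paper is the wrapper structure, not the white balance algorithm itself: the downstream lighting estimation model is left untouched.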
Related papers
- Unifying Color and Lightness Correction with View-Adaptive Curve Adjustment for Robust 3D Novel View Synthesis [73.27997579020233]
We propose Luminance-GS++, a 3DGS-based framework for robust NVS under diverse illumination conditions. Our method combines a globally view-adaptive lightness adjustment with a local pixel-wise residual refinement for precise color correction.
arXiv Detail & Related papers (2026-02-20T16:20:50Z) - LightQANet: Quantized and Adaptive Feature Learning for Low-Light Image Enhancement [65.06462316546806]
Low-light image enhancement aims to improve illumination while preserving high-quality color and texture. Existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions. We propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement.
arXiv Detail & Related papers (2025-10-16T14:54:42Z) - LuxDiT: Lighting Estimation with Video Diffusion Transformer [66.60450792095901]
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input.
arXiv Detail & Related papers (2025-09-03T19:59:20Z) - After the Party: Navigating the Mapping From Color to Ambient Lighting [48.01497878412971]
We introduce CL3AN, the first large-scale, high-resolution dataset of its kind. We find that leading approaches often produce artifacts, such as illumination inconsistencies, texture leakage, and color distortion. We achieve such a desired decomposition through a novel learning framework.
arXiv Detail & Related papers (2025-08-04T08:07:03Z) - Illuminant and light direction estimation using Wasserstein distance method [0.0]
This study introduces a novel method utilizing the Wasserstein distance to estimate illuminant and light direction in images. Experiments on diverse images demonstrate the method's efficacy in detecting dominant light sources and estimating their directions. The approach shows promise for applications in light source localization, image quality assessment, and object detection enhancement.
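To make the core tool concrete: for two equal-size samples with uniform weights, the 1-D Wasserstein-1 distance reduces to the mean absolute difference of the sorted values. The toy example below (not from the paper) uses that identity to show how a warm color cast shifts a channel's intensity distribution away from a neutral reference.

```python
import numpy as np

def wasserstein_1d(u, v):
    """1-D Wasserstein-1 distance between two equal-size samples with
    uniform weights: mean absolute difference of the sorted values."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# Toy illuminant check: a warm cast boosts the red channel, moving its
# intensity distribution away from the neutral one.
rng = np.random.default_rng(0)
neutral_red = rng.uniform(0.3, 0.6, 10_000)
warm_red = np.clip(neutral_red * 1.3, 0.0, 1.0)

d_warm = wasserstein_1d(neutral_red, warm_red)
d_same = wasserstein_1d(neutral_red, neutral_red)
print(d_warm, d_same)  # d_warm > 0, d_same == 0
```

The distance grows with the strength of the cast, which is what makes it usable as a comparison score between candidate illuminants.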
arXiv Detail & Related papers (2025-03-03T19:20:09Z) - Bright-NeRF:Brightening Neural Radiance Field with Color Restoration from Low-light Raw Images [8.679462472714942]
We propose a novel approach, Bright-NeRF, which learns enhanced and high-quality radiance fields from low-light raw images in an unsupervised manner. Our method simultaneously achieves color restoration, denoising, and enhanced novel view synthesis.
arXiv Detail & Related papers (2024-12-19T05:55:18Z) - Zero-Shot Low Light Image Enhancement with Diffusion Prior [2.102429358229889]
We present a "free lunch" solution for low-light image enhancement (LLIE): we leverage a pre-trained text-to-image diffusion prior, learned from a large collection of natural images, and the features present in the model itself to guide the inference.
arXiv Detail & Related papers (2024-12-18T00:31:18Z) - SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation [65.99344783327054]
We present a novel approach for digitizing real-world objects by estimating their geometry, material properties, and lighting.
Our method incorporates into Neural Radiance Field (NeRF) pipelines the split sum approximation used with image-based lighting for real-time physically based rendering.
Our method is capable of attaining state-of-the-art relighting quality after only $\sim 1$ hour of training on a single NVIDIA A100 GPU.
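The split sum approximation mentioned above factors the rendering integral $\int L(\omega)\,f(\omega)\,d\omega$ into two separately precomputable terms, roughly $E[L \cdot f] \approx E[L]\,\cdot\,E[f]$. The small Monte Carlo check below (a generic illustration, not the paper's implementation) shows the factorization is exact when the incoming radiance is constant over the sampled directions; for smoothly varying lighting it is a controlled approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
brdf = rng.uniform(0.0, 1.0, n)   # per-sample BRDF weights (toy values)
L_const = np.full(n, 2.0)         # constant environment radiance

exact = np.mean(L_const * brdf)           # joint Monte Carlo estimate
split = np.mean(L_const) * np.mean(brdf)  # split sum approximation
print(exact - split)  # 0 for constant lighting
```

In practice the two factors are precomputed (prefiltered environment maps and a BRDF lookup table), which is what makes the approximation attractive for real-time rendering.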
arXiv Detail & Related papers (2023-11-28T10:36:36Z) - High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z) - Sparse Needlets for Lighting Estimation with Spherical Transport Loss [89.52531416604774]
NeedleLight is a new lighting estimation model that represents illumination with needlets and allows lighting estimation jointly in the frequency and spatial domains.
Extensive experiments show that NeedleLight achieves superior lighting estimation consistently across multiple evaluation metrics as compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-06-24T15:19:42Z) - Light Direction and Color Estimation from Single Image with Deep Regression [25.45529007045549]
We present a method to estimate the direction and color of the scene light source from a single image.
Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source.
arXiv Detail & Related papers (2020-09-18T17:33:49Z) - A Multi-Hypothesis Approach to Color Constancy [22.35581217222978]
Current approaches frame the color constancy problem as learning camera-specific illuminant mappings.
We propose a Bayesian framework that naturally handles color constancy ambiguity via a multi-hypothesis strategy.
Our method provides state-of-the-art accuracy on multiple public datasets while maintaining real-time execution.
arXiv Detail & Related papers (2020-02-28T18:05:16Z)
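The multi-hypothesis idea from the last entry can be illustrated with a minimal sketch (not the paper's exact model): score a fixed set of candidate illuminants against the image, then return a posterior-weighted combination instead of a single hard choice. The gray-world likelihood and the temperature constant are assumptions made for the demo.

```python
import numpy as np

def multi_hypothesis_illuminant(img, candidates, prior=None):
    """Sketch of a Bayesian multi-hypothesis estimate: weight each
    candidate illuminant by how gray the corrected channel means
    become, then combine candidates by their posterior weights."""
    candidates = np.asarray(candidates, dtype=float)
    n = len(candidates)
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior)
    means = img.reshape(-1, 3).mean(axis=0)
    lik = np.empty(n)
    for i, c in enumerate(candidates):
        corrected = means / c
        # Gray-world likelihood; temperature 50 chosen for illustration.
        lik[i] = np.exp(-50.0 * np.var(corrected / corrected.mean()))
    post = prior * lik
    post /= post.sum()
    return post @ candidates, post

# Demo: a gray scene rendered under a known warm illuminant.
rng = np.random.default_rng(0)
gray = rng.uniform(0.2, 0.8, (16, 16, 1)) * np.ones(3)
true_illum = np.array([1.0, 0.8, 0.6])
img = gray * true_illum
cands = [[1.0, 1.0, 1.0], [1.0, 0.8, 0.6], [0.6, 0.8, 1.0]]
est, post = multi_hypothesis_illuminant(img, cands)
print(post)  # highest weight on the true illuminant (index 1)
```

Keeping the full posterior rather than an argmax is the point of the multi-hypothesis strategy: ambiguous scenes yield a spread-out posterior instead of a confidently wrong single estimate.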
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.