Unifying Color and Lightness Correction with View-Adaptive Curve Adjustment for Robust 3D Novel View Synthesis
- URL: http://arxiv.org/abs/2602.18322v1
- Date: Fri, 20 Feb 2026 16:20:50 GMT
- Title: Unifying Color and Lightness Correction with View-Adaptive Curve Adjustment for Robust 3D Novel View Synthesis
- Authors: Ziteng Cui, Shuhong Liu, Xiaoyu Dong, Xuangeng Chu, Lin Gu, Ming-Hsuan Yang, Tatsuya Harada
- Abstract summary: We propose Luminance-GS++, a 3DGS-based framework for robust NVS under diverse illumination conditions. Our method combines a globally view-adaptive lightness adjustment with a local pixel-wise residual refinement for precise color correction.
- Score: 73.27997579020233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-quality image acquisition in real-world environments remains challenging due to complex illumination variations and inherent limitations of camera imaging pipelines. These issues are exacerbated in multi-view capture, where differences in lighting, sensor responses, and image signal processor (ISP) configurations introduce photometric and chromatic inconsistencies that violate the assumptions of photometric consistency underlying modern 3D novel view synthesis (NVS) methods, including Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), leading to degraded reconstruction and rendering quality. We propose Luminance-GS++, a 3DGS-based framework for robust NVS under diverse illumination conditions. Our method combines a globally view-adaptive lightness adjustment with a local pixel-wise residual refinement for precise color correction. We further design unsupervised objectives that jointly enforce lightness correction and multi-view geometric and photometric consistency. Extensive experiments demonstrate state-of-the-art performance across challenging scenarios, including low-light, overexposure, and complex luminance and chromatic variations. Unlike prior approaches that modify the underlying representation, our method preserves the explicit 3DGS formulation, improving reconstruction fidelity while maintaining real-time rendering efficiency.
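The two-stage correction described in the abstract (a global, view-adaptive lightness curve followed by a local pixel-wise residual refinement) can be pictured with a minimal NumPy sketch. The gamma-style curve, the function name, and all parameters below are hypothetical stand-ins for the learned components, not the paper's actual formulation.

```python
import numpy as np

def view_adaptive_correct(img, gamma, residual):
    """Illustrative two-stage correction: a global per-view lightness
    curve (here a simple gamma curve standing in for the learned
    view-adaptive curve), then a local pixel-wise residual refinement
    for color. img: HxWx3 float array in [0, 1]."""
    adjusted = np.clip(img, 0.0, 1.0) ** gamma        # global curve adjustment
    return np.clip(adjusted + residual, 0.0, 1.0)     # local residual refinement

# toy usage: gamma < 1 brightens an underexposed view
view = np.full((2, 2, 3), 0.04)
out = view_adaptive_correct(view, gamma=0.5, residual=np.zeros((2, 2, 3)))
print(round(float(out[0, 0, 0]), 6))  # 0.2
```

In the actual method, one curve would be optimized per training view while the residual is predicted per pixel; this toy passes both in by hand.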
Related papers
- Beyond Darkness: Thermal-Supervised 3D Gaussian Splatting for Low-Light Novel View Synthesis [15.288134634021139]
Under extremely low-light conditions, novel view synthesis (NVS) faces severe degradation in terms of geometry, color consistency, and radiometric stability. We present DTGS, a unified framework that tightly couples Retinex-inspired illumination decomposition with thermal-guided 3D Gaussian Splatting.
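A Retinex-style decomposition of the kind DTGS builds on factors an image I into reflectance R and illumination L with I = R * L. The crude per-channel illumination estimate below is a hypothetical toy that omits both the learned decomposition and the thermal guidance.

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Toy Retinex factorization I = R * L: estimate illumination L
    as the per-channel spatial maximum and recover reflectance as
    R = I / (L + eps). Real methods estimate a smooth spatial L."""
    L = img.max(axis=(0, 1), keepdims=True)   # 1x1x3 illumination estimate
    R = img / (L + eps)                       # reflectance, roughly in [0, 1]
    return R, L

# a uniformly dark image: reflectance normalizes to ~1, L captures the darkness
dark = np.full((4, 4, 3), 0.05)
R, L = retinex_decompose(dark)
print(np.allclose(R, 1.0, atol=1e-3))  # True
```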
arXiv Detail & Related papers (2025-11-17T06:12:53Z)
- Moving Light Adaptive Colonoscopy Reconstruction via Illumination-Attenuation-Aware 3D Gaussian Splatting [35.37461816543526]
3D Gaussian Splatting (3DGS) has emerged as a pivotal technique for real-time view synthesis in colonoscopy. However, the vanilla 3DGS assumes static illumination and that observed appearance depends solely on viewing angle. This mismatch forces most 3DGS methods to introduce structure-violating vaporous Gaussian blobs between the camera and tissues. We propose ColIAGS, an improved 3DGS framework tailored for colonoscopy.
arXiv Detail & Related papers (2025-10-21T15:44:23Z)
- LightQANet: Quantized and Adaptive Feature Learning for Low-Light Image Enhancement [65.06462316546806]
Low-light image enhancement aims to improve illumination while preserving high-quality color and texture. Existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions. We propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement.
arXiv Detail & Related papers (2025-10-16T14:54:42Z)
- Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark [51.68144172958247]
We propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints. Our method achieves better results than existing methods, conquering radiance field reconstruction under challenging low-light conditions.
arXiv Detail & Related papers (2025-07-16T05:54:33Z)
- Generalizable and Relightable Gaussian Splatting for Human Novel View Synthesis [49.67420486373202]
GRGS is a generalizable and relightable 3D Gaussian framework for high-fidelity human novel view synthesis under diverse lighting conditions. We introduce a Lighting-aware Geometry Refinement (LGR) module trained on synthetically relit data to predict accurate depth and surface normals.
arXiv Detail & Related papers (2025-05-27T17:59:47Z)
- Luminance-GS: Adapting 3D Gaussian Splatting to Challenging Lighting Conditions with View-Adaptive Curve Adjustment [46.60106452798745]
We introduce Luminance-GS, a novel approach to achieving high-quality novel view synthesis results under challenging lighting conditions using 3DGS. By adopting per-view color matrix mapping and view-adaptive curve adjustments, Luminance-GS achieves state-of-the-art (SOTA) results across various lighting conditions. Compared to previous NeRF- and 3DGS-based baselines, Luminance-GS provides real-time rendering speed with improved reconstruction quality.
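Per-view color matrix mapping, as described for Luminance-GS, can be pictured as each training view owning a 3x3 matrix that maps rendered RGB into that view's observed color space. The sketch below is a hypothetical illustration with a hand-supplied matrix, not the paper's learned mapping.

```python
import numpy as np

def apply_color_matrix(img, M):
    """Map an HxWx3 RGB image through a per-view 3x3 color matrix M.
    In a Luminance-GS-style pipeline, one such M would be optimized
    per training view; here M is supplied by hand."""
    h, w, _ = img.shape
    return (img.reshape(-1, 3) @ M.T).reshape(h, w, 3)

# a matrix that boosts the red channel by 10% and leaves the rest alone
M = np.diag([1.1, 1.0, 1.0])
img = np.full((2, 2, 3), 0.5)
out = apply_color_matrix(img, M)
print(float(out[0, 0, 0]))  # 0.55
```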
arXiv Detail & Related papers (2025-04-02T08:54:57Z)
- GS-I$^{3}$: Gaussian Splatting for Surface Reconstruction from Illumination-Inconsistent Images [6.055104738156625]
3D Gaussian Splatting (3DGS) has gained significant attention in the field of surface reconstruction. We propose a method called GS-3I to address the challenge of robust surface reconstruction under inconsistent illumination. We show that GS-3I can achieve robust and accurate surface reconstruction across complex illumination scenarios.
arXiv Detail & Related papers (2025-03-16T03:08:54Z)
- D3DR: Lighting-Aware Object Insertion in Gaussian Splatting [48.80431740983095]
We propose a method, dubbed D3DR, for inserting a 3DGS-parametrized object into 3DGS scenes. We leverage advances in diffusion models, which, trained on real-world data, implicitly understand correct scene lighting. We demonstrate the method's effectiveness by comparing it to existing approaches.
arXiv Detail & Related papers (2025-03-09T19:48:00Z)
- PEP-GS: Perceptually-Enhanced Precise Structured 3D Gaussians for View-Adaptive Rendering [3.1006820631993515]
3D Gaussian Splatting (3D-GS) has achieved significant success in real-time, high-quality 3D scene rendering. We introduce PEP-GS, a perceptually-enhanced framework that dynamically predicts Gaussian attributes, including opacity, color, and covariance. We show that PEP-GS outperforms state-of-the-art methods, particularly in challenging scenarios involving view-dependent effects and fine-scale details.
arXiv Detail & Related papers (2024-11-08T17:42:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.