Luminance-GS: Adapting 3D Gaussian Splatting to Challenging Lighting Conditions with View-Adaptive Curve Adjustment
- URL: http://arxiv.org/abs/2504.01503v2
- Date: Wed, 23 Apr 2025 15:06:14 GMT
- Title: Luminance-GS: Adapting 3D Gaussian Splatting to Challenging Lighting Conditions with View-Adaptive Curve Adjustment
- Authors: Ziteng Cui, Xuangeng Chu, Tatsuya Harada
- Abstract summary: We introduce Luminance-GS, a novel approach to achieving high-quality novel view synthesis results under challenging lighting conditions using 3DGS. By adopting per-view color matrix mapping and view-adaptive curve adjustments, Luminance-GS achieves state-of-the-art (SOTA) results across various lighting conditions. Compared to previous NeRF- and 3DGS-based baselines, Luminance-GS provides real-time rendering speed with improved reconstruction quality.
- Score: 46.60106452798745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Capturing high-quality photographs under diverse real-world lighting conditions is challenging, as both natural lighting (e.g., low-light) and camera exposure settings (e.g., exposure time) significantly impact image quality. This challenge becomes more pronounced in multi-view scenarios, where variations in lighting and image signal processor (ISP) settings across viewpoints introduce photometric inconsistencies. Such lighting degradations and view-dependent variations pose substantial challenges to novel view synthesis (NVS) frameworks based on Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS). To address this, we introduce Luminance-GS, a novel approach to achieving high-quality novel view synthesis results under diverse challenging lighting conditions using 3DGS. By adopting per-view color matrix mapping and view-adaptive curve adjustments, Luminance-GS achieves state-of-the-art (SOTA) results across various lighting conditions -- including low-light, overexposure, and varying exposure -- while not altering the original 3DGS explicit representation. Compared to previous NeRF- and 3DGS-based baselines, Luminance-GS provides real-time rendering speed with improved reconstruction quality.
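As a rough, non-authoritative sketch of the two ingredients named in the abstract, the snippet below applies a learned per-view 3x3 color matrix followed by a learned monotonic tone curve to a rendered frame. The function name, tensor layout, and piecewise-linear curve parametrization are assumptions for illustration, not the paper's actual implementation.

```python
import torch

def apply_view_adjustment(rendered, color_matrix, curve_logits):
    """Illustrative per-view photometric correction (assumed design, not the
    paper's exact model).

    rendered:     (H, W, 3) image rendered from the shared 3DGS scene, in [0, 1]
    color_matrix: (3, 3) learned per-view color mapping
    curve_logits: (K,) learned per-view logits defining a monotonic tone curve
    """
    # Per-view color matrix mapping: mix the RGB channels with a 3x3 matrix.
    mixed = (rendered.reshape(-1, 3) @ color_matrix.T).clamp(0.0, 1.0)

    # View-adaptive curve adjustment: a monotonic piecewise-linear curve.
    # Softmax segments are positive and sum to 1, so the knot values rise
    # from 0 to 1 and the curve is guaranteed to be monotonic.
    seg = torch.softmax(curve_logits, dim=0)                       # (K,)
    knots = torch.cat([torch.zeros(1), torch.cumsum(seg, dim=0)])  # (K+1,)
    K = seg.shape[0]
    x = mixed * K                        # position in segment space
    idx = x.long().clamp(max=K - 1)      # segment index per value
    frac = x - idx.float()               # offset inside the segment
    adjusted = knots[idx] * (1 - frac) + knots[idx + 1] * frac
    return adjusted.reshape(rendered.shape)


# Usage: one (color_matrix, curve_logits) pair would be optimized per training
# view alongside the Gaussians, leaving the 3DGS representation itself untouched.
frame = torch.rand(480, 640, 3)
out = apply_view_adjustment(frame, torch.eye(3), torch.zeros(8))
```

Because the correction is applied after rasterization, per-view lighting and ISP variation can be absorbed outside the scene model, consistent with the abstract's claim that the original 3DGS explicit representation is not altered.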
Related papers
- EBAD-Gaussian: Event-driven Bundle Adjusted Deblur Gaussian Splatting [21.46091843175779]
Event-driven Bundle Adjusted Deblur Gaussian Splatting (EBAD-Gaussian) reconstructs sharp 3D Gaussians from event streams and severely blurred images.
Experiments on synthetic and real-world datasets show that EBAD-Gaussian can achieve high-quality 3D scene reconstruction.
arXiv Detail & Related papers (2025-04-14T09:17:00Z)
- LITA-GS: Illumination-Agnostic Novel View Synthesis via Reference-Free 3D Gaussian Splatting and Physical Priors [14.160637565120231]
LITA-GS is an illumination-agnostic novel view synthesis method using reference-free 3DGS and physical priors. We develop a lighting-agnostic structure rendering strategy, which facilitates the optimization of the scene structure and object appearance. We train LITA-GS with an unsupervised strategy, and extensive experiments demonstrate that it surpasses the state-of-the-art (SOTA) NeRF-based method.
arXiv Detail & Related papers (2025-03-31T20:44:39Z)
- GS-I$^{3}$: Gaussian Splatting for Surface Reconstruction from Illumination-Inconsistent Images [6.055104738156626]
3D Gaussian Splatting (3DGS) has gained significant attention in the field of surface reconstruction. We propose a method called GS-3I to address the challenge of robust surface reconstruction under inconsistent illumination. We show that GS-3I can achieve robust and accurate surface reconstruction across complex illumination scenarios.
arXiv Detail & Related papers (2025-03-16T03:08:54Z)
- D3DR: Lighting-Aware Object Insertion in Gaussian Splatting [48.80431740983095]
We propose a method, dubbed D3DR, for inserting a 3DGS-parametrized object into 3DGS scenes.
We leverage advances in diffusion models, which, trained on real-world data, implicitly understand correct scene lighting.
We demonstrate the method's effectiveness by comparing it to existing approaches.
arXiv Detail & Related papers (2025-03-09T19:48:00Z)
- SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events [53.79905461386883]
Event cameras, with a high dynamic range exceeding 120 dB, significantly outperform traditional embedded cameras. We pose a novel research question: how can events be employed to enhance and adaptively adjust the brightness of images captured under broad lighting conditions? Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation.
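A minimal sketch of the "events as a brightness dictionary" idea, assuming image features act as queries and event features as keys/values in standard cross-attention; the dimensions and module layout are illustrative assumptions, not the SEE architecture.

```python
import torch
import torch.nn as nn

class EventBrightnessAttention(nn.Module):
    """Cross-attention sketch: image tokens query an event-derived brightness
    dictionary (illustrative layout; not the actual SEE architecture)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, event_tokens):
        # img_tokens:   (B, N_img, dim) features of the frame being adjusted
        # event_tokens: (B, N_evt, dim) features from the high-dynamic-range events
        out, _ = self.attn(query=img_tokens, key=event_tokens, value=event_tokens)
        # Residual update pulls image features toward event-measured brightness.
        return self.norm(img_tokens + out)


# Usage: a decoder would map the fused tokens back to a brightness-adjusted image.
fused = EventBrightnessAttention()(torch.randn(1, 1024, 128), torch.randn(1, 256, 128))
```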
arXiv Detail & Related papers (2025-02-28T14:55:37Z)
- Decoupling Appearance Variations with 3D Consistent Features in Gaussian Splatting [50.98884579463359]
We propose DAVIGS, a method that decouples appearance variations in a plug-and-play manner.
By transforming the rendering results at the image level instead of the Gaussian level, our approach can model appearance variations with minimal optimization time and memory overhead.
We validate our method on several appearance-variant scenes, and demonstrate that it achieves state-of-the-art rendering quality with minimal training time and memory usage.
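To make "image level instead of Gaussian level" concrete, a lightweight per-view module could transform the already-rendered frame, e.g. by predicting a per-pixel affine correction. The embedding size, CNN layout, and affine form below are assumptions for illustration, not the actual DAVIGS transformation.

```python
import torch
import torch.nn as nn

class ImageLevelAppearance(nn.Module):
    """Sketch of image-level appearance decoupling: the shared 3DGS render is
    corrected per view by a small CNN (assumed design; DAVIGS may differ)."""

    def __init__(self, num_views=100, embed_dim=16):
        super().__init__()
        self.view_embed = nn.Embedding(num_views, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + embed_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 6, 3, padding=1),  # 3 per-pixel gains + 3 offsets
        )

    def forward(self, render, view_id):
        # render: (B, 3, H, W) image rendered from the shared, appearance-free scene
        B, _, H, W = render.shape
        emb = self.view_embed(view_id).view(B, -1, 1, 1).expand(-1, -1, H, W)
        gain, offset = self.net(torch.cat([render, emb], dim=1)).chunk(2, dim=1)
        # Per-view, per-pixel affine map: cheap because no Gaussian is modified.
        return render * (1 + gain) + offset


adjusted = ImageLevelAppearance()(torch.rand(1, 3, 64, 64), torch.tensor([5]))
```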
arXiv Detail & Related papers (2025-01-18T14:55:58Z)
- PEP-GS: Perceptually-Enhanced Precise Structured 3D Gaussians for View-Adaptive Rendering [7.1029808965488686]
3D Gaussian Splatting (3D-GS) has achieved significant success in real-time, high-quality 3D scene rendering. However, it faces several challenges, including Gaussian redundancy, limited ability to capture view-dependent effects, and difficulties in handling complex lighting and specular reflections. We introduce PEP-GS, a perceptually-enhanced framework that dynamically predicts Gaussian attributes, including opacity, color, and covariance.
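One way to read "dynamically predicts Gaussian attributes" is a small MLP mapping a per-Gaussian latent plus the viewing direction to opacity, color, and covariance parameters; this decoder layout is an assumption sketched for illustration, not the PEP-GS design.

```python
import torch
import torch.nn as nn

class GaussianAttributeDecoder(nn.Module):
    """Sketch: predict view-dependent Gaussian attributes from a per-Gaussian
    latent and the viewing direction (assumed layout, not the PEP-GS design)."""

    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        # Outputs: 1 opacity + 3 RGB + 3 log-scales + 4 rotation quaternion = 11
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 11),
        )

    def forward(self, latent, view_dir):
        # latent:   (N, latent_dim) learned feature per Gaussian
        # view_dir: (N, 3) unit vector from the camera to each Gaussian
        raw = self.mlp(torch.cat([latent, view_dir], dim=-1))
        opacity = torch.sigmoid(raw[:, :1])         # in (0, 1)
        color = torch.sigmoid(raw[:, 1:4])          # RGB in (0, 1)
        scales = torch.exp(raw[:, 4:7])             # positive covariance scales
        quat = nn.functional.normalize(raw[:, 7:])  # unit rotation quaternion
        return opacity, color, scales, quat
```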
arXiv Detail & Related papers (2024-11-08T17:42:02Z)
- DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- GS-IR: 3D Gaussian Splatting for Inverse Rendering [71.14234327414086]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS).
We extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions.
The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering.
arXiv Detail & Related papers (2023-11-26T02:35:09Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)