LL-Gaussian: Low-Light Scene Reconstruction and Enhancement via Gaussian Splatting for Novel View Synthesis
- URL: http://arxiv.org/abs/2504.10331v3
- Date: Sat, 19 Apr 2025 12:09:25 GMT
- Title: LL-Gaussian: Low-Light Scene Reconstruction and Enhancement via Gaussian Splatting for Novel View Synthesis
- Authors: Hao Sun, Fenggen Yu, Huiyao Xu, Tao Zhang, Changqing Zou
- Abstract summary: Novel view synthesis (NVS) in low-light scenes remains a significant challenge due to degraded inputs. We propose LL-Gaussian, a novel framework for 3D reconstruction and enhancement from low-light sRGB images. Compared to state-of-the-art NeRF-based methods, LL-Gaussian achieves up to 2,000 times faster inference and reduces training time to just 2%.
- Score: 17.470869402542533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel view synthesis (NVS) in low-light scenes remains a significant challenge due to degraded inputs characterized by severe noise, low dynamic range (LDR) and unreliable initialization. While recent NeRF-based approaches have shown promising results, most suffer from high computational costs, and some rely on carefully captured or pre-processed data--such as RAW sensor inputs or multi-exposure sequences--which severely limits their practicality. In contrast, 3D Gaussian Splatting (3DGS) enables real-time rendering with competitive visual fidelity; however, existing 3DGS-based methods struggle with low-light sRGB inputs, resulting in unstable Gaussian initialization and ineffective noise suppression. To address these challenges, we propose LL-Gaussian, a novel framework for 3D reconstruction and enhancement from low-light sRGB images, enabling pseudo normal-light novel view synthesis. Our method introduces three key innovations: 1) an end-to-end Low-Light Gaussian Initialization Module (LLGIM) that leverages dense priors from a learning-based MVS approach to generate high-quality initial point clouds; 2) a dual-branch Gaussian decomposition model that disentangles intrinsic scene properties (reflectance and illumination) from transient interference, enabling stable and interpretable optimization; 3) an unsupervised optimization strategy guided by both physical constraints and a diffusion prior to jointly steer decomposition and enhancement. Additionally, we contribute a challenging dataset collected in extreme low-light environments and demonstrate the effectiveness of LL-Gaussian. Compared to state-of-the-art NeRF-based methods, LL-Gaussian achieves up to 2,000 times faster inference and reduces training time to just 2%, while delivering superior reconstruction and rendering quality.
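To make the dual-branch decomposition idea concrete, the sketch below shows one plausible Retinex-style reading of it: each Gaussian's observed low-light color is factored into reflectance times illumination plus a transient residual, and optimization is regularized by simple physical constraints (smooth illumination, sparse transients). All class, function, and weight names here are illustrative assumptions, not the paper's implementation, and the diffusion prior is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch (not the paper's code) of a Retinex-style dual-branch color model
# for 3D Gaussians: observed color ~= reflectance * illumination + transient.
# Names and weights are assumptions for illustration only.

class DualBranchGaussianColor(nn.Module):
    def __init__(self, num_gaussians: int):
        super().__init__()
        self.reflectance = nn.Parameter(torch.rand(num_gaussians, 3))            # intrinsic albedo (pre-sigmoid)
        self.illumination = nn.Parameter(torch.full((num_gaussians, 1), -1.0))   # scalar light gain (pre-softplus)
        self.transient = nn.Parameter(torch.zeros(num_gaussians, 3))             # residual absorbing noise/interference

    def forward(self):
        R = torch.sigmoid(self.reflectance)    # reflectance in [0, 1]
        L = F.softplus(self.illumination)      # positive illumination gain
        low_light = R * L + self.transient     # per-Gaussian colors used to fit the dark inputs
        enhanced = R                           # pseudo normal-light colors for rendering
        return low_light, enhanced


def physical_constraint_loss(rendered, observed, illum_map, transient,
                             w_tv: float = 0.1, w_sparse: float = 0.01):
    """Toy stand-in for the 'physical constraints' mentioned in the abstract."""
    recon = (rendered - observed).abs().mean()                     # fit the observed low-light frames
    tv = (illum_map[1:, :] - illum_map[:-1, :]).abs().mean() \
       + (illum_map[:, 1:] - illum_map[:, :-1]).abs().mean()       # smooth illumination in image space
    sparse = transient.abs().mean()                                # keep the transient branch sparse
    return recon + w_tv * tv + w_sparse * sparse
```

In a full pipeline, the low-light colors would be fed to the 3DGS rasterizer to match the captured frames, while the reflectance-only colors would be rasterized to produce the enhanced, pseudo normal-light views.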
Related papers
- Diffusion-Guided Gaussian Splatting for Large-Scale Unconstrained 3D Reconstruction and Novel View Synthesis [22.767866875051013]
We propose GS-Diff, a novel 3DGS framework guided by a multi-view diffusion model to address limitations of current methods. By generating pseudo-observations conditioned on multi-view inputs, our method transforms under-constrained 3D reconstruction problems into well-posed ones. Experiments on four benchmarks demonstrate that GS-Diff consistently outperforms state-of-the-art baselines by significant margins.
arXiv Detail & Related papers (2025-04-02T17:59:46Z) - LITA-GS: Illumination-Agnostic Novel View Synthesis via Reference-Free 3D Gaussian Splatting and Physical Priors [14.160637565120231]
LITA-GS is a novel illumination-agnostic novel view synthesis method via reference-free 3DGS and physical priors. We develop a lighting-agnostic structure rendering strategy, which facilitates the optimization of the scene structure and object appearance. We adopt an unsupervised strategy for the training of LITA-GS, and extensive experiments demonstrate that LITA-GS surpasses the state-of-the-art (SOTA) NeRF-based method.
arXiv Detail & Related papers (2025-03-31T20:44:39Z) - LLGS: Unsupervised Gaussian Splatting for Image Enhancement and Reconstruction in Pure Dark Environment [18.85235185556243]
We propose an unsupervised multi-view stereoscopic system based on 3D Gaussian Splatting. This system aims to enhance images in low-light environments while reconstructing the scene. Experiments conducted on real-world datasets demonstrate that our system outperforms state-of-the-art methods in both low-light enhancement and 3D Gaussian Splatting.
arXiv Detail & Related papers (2025-03-24T13:05:05Z) - Uncertainty-Aware Normal-Guided Gaussian Splatting for Surface Reconstruction from Sparse Image Sequences [21.120659841877508]
3D Gaussian Splatting (3DGS) has achieved impressive rendering performance in novel view synthesis. We propose Uncertainty-aware Normal-Guided Gaussian Splatting (UNG-GS) to quantify geometric uncertainty within the 3DGS pipeline. UNG-GS significantly outperforms state-of-the-art methods in both sparse and dense sequences.
arXiv Detail & Related papers (2025-03-14T08:18:12Z) - MuDG: Taming Multi-modal Diffusion with Gaussian Splatting for Urban Scene Reconstruction [44.592566642185425]
MuDG is an innovative framework that integrates Multi-modal Diffusion model with Gaussian Splatting (GS) for Urban Scene Reconstruction. We show that MuDG outperforms existing methods in both reconstruction and photorealistic synthesis quality.
arXiv Detail & Related papers (2025-03-13T17:48:41Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis [53.702118455883095]
We propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting.
Our key idea lies in exploring the self-supervisions inherent in the binocular stereo consistency between each pair of binocular images.
Our method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-10-24T15:10:27Z) - MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields [73.49548565633123]
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering.
Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images.
We propose a view synthesis framework based on 3D Gaussian Splatting, named MCGS, enabling scene reconstruction from sparse input views.
arXiv Detail & Related papers (2024-10-15T08:39:05Z) - Uncertainty-guided Optimal Transport in Depth Supervised Sparse-View 3D Gaussian [49.21866794516328]
3D Gaussian splatting has demonstrated impressive performance in real-time novel view synthesis.
Previous approaches have incorporated depth supervision into the training of 3D Gaussians to mitigate overfitting.
We introduce a novel method to supervise the depth distribution of 3D Gaussians, utilizing depth priors with integrated uncertainty estimates.
arXiv Detail & Related papers (2024-05-30T03:18:30Z) - GS-IR: 3D Gaussian Splatting for Inverse Rendering [71.14234327414086]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS).
We extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions.
The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering.
arXiv Detail & Related papers (2023-11-26T02:35:09Z) - Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feed-forwards the features of low-light images from the generator of the enhancement GAN into the generator of the degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z)