DenoiseGS: Gaussian Reconstruction Model for Burst Denoising
- URL: http://arxiv.org/abs/2511.22939v2
- Date: Mon, 01 Dec 2025 14:55:49 GMT
- Title: DenoiseGS: Gaussian Reconstruction Model for Burst Denoising
- Authors: Yongsen Cheng, Yuanhao Cai, Yulun Zhang
- Abstract summary: We propose DenoiseGS, the first framework to leverage the efficiency of 3D Gaussian Splatting for burst denoising. Our approach addresses two key challenges when applying a feedforward Gaussian reconstruction model to noisy inputs: the degradation of Gaussian point clouds and the loss of fine details. Experiments demonstrate that DenoiseGS significantly exceeds state-of-the-art NeRF-based methods on both burst denoising and novel view synthesis.
- Score: 29.507111067242118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Burst denoising methods are crucial for enhancing images captured on handheld devices, but they often struggle with large motion or suffer from prohibitive computational costs. In this paper, we propose DenoiseGS, the first framework to leverage the efficiency of 3D Gaussian Splatting for burst denoising. Our approach addresses two key challenges when applying a feedforward Gaussian reconstruction model to noisy inputs: the degradation of Gaussian point clouds and the loss of fine details. To this end, we propose a Gaussian self-consistency (GSC) loss, which regularizes the geometry predicted from noisy inputs with high-quality Gaussian point clouds. These point clouds are generated from clean inputs by the same model that we are training, thereby alleviating potential bias or domain gaps. Additionally, we introduce a log-weighted frequency (LWF) loss to strengthen supervision within the spectral domain, effectively preserving fine-grained details. The LWF loss adaptively weights frequency discrepancies in a logarithmic manner, emphasizing challenging high-frequency details. Extensive experiments demonstrate that DenoiseGS significantly exceeds the state-of-the-art NeRF-based methods on both burst denoising and novel view synthesis under noisy conditions, while achieving 250$\times$ faster inference speed. Code and models are released at https://github.com/yscheng04/DenoiseGS.
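The two losses described above can be illustrated with a minimal, stdlib-only sketch. This is an assumption-laden toy: the paper's LWF loss presumably operates on 2-D spectra of rendered images and the GSC loss on full Gaussian parameters, whereas here `lwf_loss` uses a naive 1-D DFT with a `log(1 + |error|)` weighting (the exact weighting scheme is not specified in the abstract), and `gsc_loss` is a plain mean squared distance between corresponding Gaussian centers. Function names and signatures are hypothetical, not the released code's API.

```python
import cmath
import math

def dft(signal):
    """Naive O(n^2) 1-D discrete Fourier transform, stdlib only."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def lwf_loss(pred, target):
    """Toy log-weighted frequency (LWF) loss: compare the two spectra and
    weight each frequency bin's discrepancy logarithmically via log1p,
    which grows slowly and keeps large spectral errors from dominating."""
    fp, ft = dft(pred), dft(target)
    return sum(math.log1p(abs(p - t)) for p, t in zip(fp, ft)) / len(pred)

def gsc_loss(noisy_pred_centers, clean_pred_centers):
    """Toy Gaussian self-consistency (GSC) loss: mean squared distance
    between Gaussian centers predicted from noisy inputs and those the
    same model predicts from clean inputs (here given as plain tuples)."""
    total = sum(sum((a - b) ** 2 for a, b in zip(p, q))
                for p, q in zip(noisy_pred_centers, clean_pred_centers))
    return total / len(noisy_pred_centers)
```

Identical inputs yield zero for both losses, and any discrepancy yields a positive penalty, which is the behavior a regularizer of this kind needs.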
Related papers
- GuidNoise: Single-Pair Guided Diffusion for Generalized Noise Synthesis [9.253859022117306]
GuidNoise is a single-pair guided diffusion framework for generalized noise synthesis. It uses a single noisy/clean pair as guidance, which is often easily obtained from within a training set, and combines a guidance-aware affine feature modification (GAFM) with a noise-aware refinement loss to leverage the inherent potential of diffusion models.
arXiv Detail & Related papers (2025-12-04T05:00:00Z)
- Integrating Reweighted Least Squares with Plug-and-Play Diffusion Priors for Noisy Image Restoration [6.402777145722335]
We propose a plug-and-play image restoration framework based on generative diffusion priors for robust removal of general noise types, including impulse noise. Experimental results on benchmark datasets demonstrate that the proposed method effectively removes non-Gaussian impulse noise and achieves superior restoration performance.
arXiv Detail & Related papers (2025-11-10T08:11:20Z)
- U-CAN: Unsupervised Point Cloud Denoising with Consistency-Aware Noise2Noise Matching [87.76453413654922]
We introduce U-CAN, an unsupervised framework for point cloud denoising with consistency-aware Noise2Noise matching. Specifically, we leverage a neural network to infer a multi-step denoising path for each point of a shape or scene with a noise-to-noise matching scheme. We also introduce a novel constraint on denoised geometry consistency for learning consistency-aware denoising patterns.
arXiv Detail & Related papers (2025-10-29T06:20:21Z)
- Mitigating the Noise Shift for Denoising Generative Models via Noise Awareness Guidance [54.88271057438763]
Noise Awareness Guidance (NAG) is a correction method that explicitly steers sampling trajectories to remain consistent with the pre-defined noise schedule. NAG consistently mitigates noise shift and substantially improves the generation quality of mainstream diffusion models.
arXiv Detail & Related papers (2025-10-14T13:31:34Z)
- Noise Augmented Fine Tuning for Mitigating Hallucinations in Large Language Models [1.0579965347526206]
Large language models (LLMs) often produce inaccurate or misleading content (hallucinations). Noise-Augmented Fine-Tuning (NoiseFiT) is a novel framework that leverages adaptive noise injection to enhance model robustness. NoiseFiT selectively perturbs layers identified as either high-SNR (more robust) or low-SNR (potentially under-regularized) using dynamically scaled Gaussian noise.
arXiv Detail & Related papers (2025-04-04T09:27:19Z)
- Dark Noise Diffusion: Noise Synthesis for Low-Light Image Denoising [22.897202020483576]
Low-light photography produces images with low signal-to-noise ratios due to limited photons. Deep-learning methods perform well, but they require large datasets of paired images that are impractical to acquire. In this paper, we investigate the ability of diffusion models to capture the complex distribution of low-light noise.
arXiv Detail & Related papers (2025-03-14T10:16:54Z)
- A Generative Model for Digital Camera Noise Synthesis [12.236112464800403]
We propose an effective generative model which utilizes clean features as guidance followed by noise injections into the network.
Specifically, our generator follows a UNet-like structure with skip connections but without downsampling and upsampling layers.
We show that our proposed approach outperforms existing methods for synthesizing camera noise.
arXiv Detail & Related papers (2023-03-16T10:17:33Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [80.20339155618612]
DiffusionAD is a novel anomaly detection pipeline comprising a reconstruction sub-network and a segmentation sub-network. A rapid one-step denoising paradigm achieves hundreds of times acceleration while preserving comparable reconstruction quality. Considering the diversity in the manifestation of anomalies, we propose a norm-guided paradigm to integrate the benefits of multiple noise scales.
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
- A Data-driven Loss Weighting Scheme across Heterogeneous Tasks for Image Denoising [67.02529586335473]
In variational denoising models, the weight in the data fidelity term plays the role of enhancing noise-removal capability. In this work, we propose a data-driven loss weighting (DLW) scheme to address these issues. Numerical results verify the remarkable performance of DLW in improving the ability of various variational denoising models to handle different complex noise.
arXiv Detail & Related papers (2022-12-09T03:28:07Z)
- High-Order Qubit Dephasing at Sweet Spots by Non-Gaussian Fluctuators: Symmetry Breaking and Floquet Protection [55.41644538483948]
We study the qubit dephasing caused by the non-Gaussian fluctuators.
We predict a symmetry-breaking effect that is unique to the non-Gaussian noise.
arXiv Detail & Related papers (2022-06-06T18:02:38Z)
- Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.