DropoutGS: Dropping Out Gaussians for Better Sparse-view Rendering
- URL: http://arxiv.org/abs/2504.09491v1
- Date: Sun, 13 Apr 2025 09:17:21 GMT
- Title: DropoutGS: Dropping Out Gaussians for Better Sparse-view Rendering
- Authors: Yexing Xu, Longguang Wang, Minglin Chen, Sheng Ao, Li Li, Yulan Guo
- Abstract summary: 3D Gaussian Splatting (3DGS) has demonstrated promising results in novel view synthesis. As the number of training views decreases, the novel view synthesis task degrades to a highly under-determined problem. We propose Random Dropout Regularization (RDR) to exploit the advantages of low-complexity models to alleviate overfitting. In addition, to remedy the lack of high-frequency details in these models, an Edge-guided Splitting Strategy (ESS) is developed.
- Score: 45.785618745095164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although 3D Gaussian Splatting (3DGS) has demonstrated promising results in novel view synthesis, its performance degrades dramatically with sparse inputs, producing undesirable artifacts. As the number of training views decreases, the novel view synthesis task degrades to a highly under-determined problem, and existing methods suffer from the notorious overfitting issue. Interestingly, we observe that models with fewer Gaussian primitives exhibit less overfitting under sparse inputs. Inspired by this observation, we propose Random Dropout Regularization (RDR) to exploit the advantages of low-complexity models to alleviate overfitting. In addition, to remedy the lack of high-frequency details in these models, an Edge-guided Splitting Strategy (ESS) is developed. With these two techniques, our method (termed DropoutGS) provides a simple yet effective plug-in approach to improve the generalization performance of existing 3DGS methods. Extensive experiments show that DropoutGS achieves state-of-the-art performance under sparse views on benchmark datasets including Blender, LLFF, and DTU. The project page is at: https://xuyx55.github.io/DropoutGS/.
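The dropout idea at the heart of RDR is straightforward to sketch. Below is a minimal, hypothetical rendition assuming a PyTorch-style 3DGS pipeline; render_fn, drop_rate, and the gaussians.select helper are our own names, not the authors' API.

    import torch

    def rdr_render(gaussians, camera, render_fn, drop_rate=0.1, training=True):
        # Hypothetical sketch of Random Dropout Regularization: render with a
        # random subset of the Gaussian primitives at each training step, so
        # the effective model has lower complexity and overfits sparse views
        # less; at test time all Gaussians are used.
        if not training or drop_rate == 0.0:
            return render_fn(gaussians, camera)
        n = gaussians.num_points                            # assumed attribute
        keep = torch.rand(n, device=gaussians.device) >= drop_rate
        return render_fn(gaussians.select(keep), camera)    # assumed helper

ESS would then counteract the resulting loss of high-frequency detail by preferentially splitting Gaussians near image edges, per the abstract.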
Related papers
- DropGaussian: Structural Regularization for Sparse-view Gaussian Splatting [5.216151302783165]
This paper introduces a prior-free method, called DropGaussian, with simple changes to 3D Gaussian Splatting. Specifically, we randomly remove Gaussians during training in a manner similar to dropout, which allows the non-excluded Gaussians to receive larger gradients. This simple operation effectively alleviates the overfitting problem and enhances the quality of novel view synthesis (a minimal sketch follows).
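As a rough illustration, dropout with inverted-dropout compensation could look like the following; applying it to per-Gaussian opacity, and the 1/(1 - p) rescaling itself, are assumptions on our part rather than details confirmed by the summary.

    import torch

    def drop_gaussian(opacity, drop_rate=0.1, training=True):
        # Hypothetical sketch: zero out a random subset of Gaussians and
        # rescale the survivors by 1 / (1 - p), as in inverted dropout; the
        # upscaling is one way the non-excluded Gaussians come to receive
        # larger gradients.
        if not training or drop_rate == 0.0:
            return opacity
        keep = (torch.rand_like(opacity) >= drop_rate).float()
        return opacity * keep / (1.0 - drop_rate)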
arXiv Detail & Related papers (2025-04-01T13:23:34Z)
- ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required. Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency. We propose ProtoGS to learn Gaussian prototypes that represent groups of Gaussian primitives, significantly reducing the total number of Gaussians without sacrificing visual quality (see the clustering sketch below).
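The summary does not say how prototypes are obtained; one plausible reading is a k-means-style clustering over per-Gaussian parameter vectors. A minimal sketch under that assumption:

    import torch

    def build_prototypes(features, k, iters=10):
        # Hypothetical k-means sketch: cluster per-Gaussian parameter vectors
        # (position, scale, color, ...) into k prototypes, shrinking the
        # primitive count from n down to roughly k.
        n = features.shape[0]
        centers = features[torch.randperm(n)[:k]].clone()
        for _ in range(iters):
            assign = torch.cdist(features, centers).argmin(dim=1)
            for j in range(k):
                members = features[assign == j]
                if members.numel() > 0:
                    centers[j] = members.mean(dim=0)
        return centers, assign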
arXiv Detail & Related papers (2025-03-21T18:55:14Z)
- Taming Video Diffusion Prior with Scene-Grounding Guidance for 3D Gaussian Splatting from Sparse Inputs [28.381287866505637]
We propose a reconstruction-by-generation pipeline that leverages learned priors from video diffusion models to provide plausible interpretations for regions outside the field of view or occluded. We introduce a novel scene-grounding guidance based on sequences rendered from an optimized 3DGS, which tames the diffusion model to generate consistent sequences. Our method significantly improves upon the baseline and achieves state-of-the-art performance on challenging benchmarks.
arXiv Detail & Related papers (2025-03-07T01:59:05Z)
- See In Detail: Enhancing Sparse-view 3D Gaussian Splatting with Local Depth and Semantic Regularization [14.239772421978373]
3D Gaussian Splatting (3DGS) has shown remarkable performance in novel view synthesis. However, its rendering quality deteriorates with sparse input views, leading to distorted content and reduced details. We propose a sparse-view 3DGS method that incorporates local depth and semantic prior information as regularization. Our method outperforms state-of-the-art novel view synthesis approaches, achieving up to a 0.4 dB improvement in PSNR on the LLFF dataset (one form of the depth term is sketched below).
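The summary only names local depth regularization; in sparse-view work such a term is often a patch-wise, scale-invariant correlation between rendered depth and a monocular depth prior. A sketch under that assumption (the patch size and loss form are our choices, not the paper's):

    import torch

    def patch_depth_loss(rendered_depth, prior_depth, patch=32):
        # Hypothetical sketch: penalize low Pearson correlation between the
        # rendered depth and a monocular prior inside each local patch.
        # Correlation is invariant to the prior's unknown scale and shift.
        d = rendered_depth.unfold(0, patch, patch).unfold(1, patch, patch)
        p = prior_depth.unfold(0, patch, patch).unfold(1, patch, patch)
        d = d.reshape(-1, patch * patch)
        p = p.reshape(-1, patch * patch)
        d = d - d.mean(dim=1, keepdim=True)
        p = p - p.mean(dim=1, keepdim=True)
        corr = (d * p).sum(1) / (d.norm(dim=1) * p.norm(dim=1) + 1e-8)
        return (1.0 - corr).mean()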
arXiv Detail & Related papers (2025-01-20T14:30:38Z)
- Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis [55.561961365113554]
3D Gaussian Splatting (3DGS) has demonstrated remarkable effectiveness in novel view synthesis (NVS).
In this paper, we introduce Self-Ensembling Gaussian Splatting (SE-GS).
We achieve self-ensembling by incorporating an uncertainty-aware perturbation strategy during training.
Experimental results on the LLFF, Mip-NeRF360, DTU, and MVImgNet datasets demonstrate that our approach enhances NVS quality under few-shot training conditions (a perturbation sketch follows).
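What "uncertainty-aware perturbation" means concretely is not spelled out here; a minimal sketch of one plausible version, where per-Gaussian positional noise is scaled by an uncertainty estimate, follows (all names and the noise model are ours):

    import torch

    def perturb_positions(positions, uncertainty, sigma=0.01):
        # Hypothetical sketch: jitter Gaussian centers with noise scaled by a
        # per-primitive uncertainty estimate, so uncertain regions are varied
        # more strongly; models trained under such perturbations can then be
        # aggregated into a self-ensemble.
        noise = torch.randn_like(positions) * sigma * uncertainty.unsqueeze(-1)
        return positions + noise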
arXiv Detail & Related papers (2024-10-31T18:43:48Z)
- DOGS: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus [56.45194233357833]
We propose DoGaussian, a method that trains 3DGS in a distributed manner.
Our method accelerates the training of 3DGS by more than 6 times when evaluated on large-scale scenes.
arXiv Detail & Related papers (2024-05-22T19:17:58Z)
- Bootstrap-GS: Self-Supervised Augmentation for High-Fidelity Gaussian Splatting [9.817215106596146]
3D-GS faces limitations when generating novel views that significantly deviate from those encountered during training. We introduce a bootstrapping framework to address this problem. Our approach synthesizes pseudo-ground truth from novel views that align with the limited training set (see the sketch below).
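A minimal sketch of the bootstrapping loop as we read it; render_fn and enhance_fn (e.g., a generative refinement model) are placeholder names, not the paper's API.

    def bootstrap_pseudo_gt(gaussians, novel_cams, render_fn, enhance_fn):
        # Hypothetical sketch: render held-out poses with the current model,
        # refine the artifact-prone renders with a learned prior, and reuse
        # the refined images as pseudo-ground-truth training targets.
        pairs = []
        for cam in novel_cams:
            raw = render_fn(gaussians, cam)
            pairs.append((cam, enhance_fn(raw)))
        return pairs  # appended to the training set for further optimization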
arXiv Detail & Related papers (2024-04-29T12:57:05Z)
- AbsGS: Recovering Fine Details for 3D Gaussian Splatting [10.458776364195796]
The 3D Gaussian Splatting (3D-GS) technique couples 3D Gaussian primitives with differentiable rasterization to achieve high-quality novel view synthesis results.
However, 3D-GS frequently suffers from the over-reconstruction issue in intricate scenes containing high-frequency details, leading to blurry rendered images.
We present a comprehensive analysis of the cause of the aforementioned artifacts, namely gradient collision.
Our strategy efficiently identifies large Gaussians in over-reconstructed regions and recovers fine details by splitting them (see the sketch below).
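As we read the summary, the fix for gradient collision is to accumulate gradient magnitudes so that opposing per-view contributions cannot cancel; the exact quantity AbsGS accumulates may differ from this sketch.

    import torch

    def update_split_score(score, viewspace_grad):
        # Hypothetical sketch: accumulate the magnitude of each Gaussian's
        # view-space positional gradient rather than the signed sum, so that
        # opposing gradients in over-reconstructed regions no longer cancel
        # and oversized Gaussians cross the splitting threshold.
        return score + viewspace_grad.abs().sum(dim=-1)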
arXiv Detail & Related papers (2024-04-16T11:44:12Z)
- Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3DGS to handle intricate scenarios with specular and anisotropic surfaces (an ASG lobe is sketched below).
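For reference, an anisotropic spherical Gaussian lobe in the form of Xu et al. 2013, which we assume is the building block meant here; Spec-Gaussian's exact parameterization may differ.

    import torch

    def asg(v, x_axis, y_axis, z_axis, lam, mu, amp):
        # One ASG lobe: amp * max(v.z, 0) * exp(-lam*(v.x)^2 - mu*(v.y)^2),
        # with (x_axis, y_axis, z_axis) an orthonormal frame. Unequal
        # bandwidths lam != mu stretch the lobe anisotropically, which
        # low-order spherical harmonics cannot represent.
        smooth = torch.clamp((v * z_axis).sum(-1), min=0.0)
        fx = (v * x_axis).sum(-1) ** 2
        fy = (v * y_axis).sum(-1) ** 2
        return amp * smooth * torch.exp(-lam * fx - mu * fy)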
arXiv Detail & Related papers (2024-02-24T17:22:15Z)
- GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS, exhibiting an improvement of 1.15 dB in terms of PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks (the GEF kernel is sketched below).
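The 1-D GEF is simple to state; GES applies it along each axis of a 3D primitive, and the amplitude handling below is our assumption.

    import torch

    def gef(x, mu, alpha, beta, amp=1.0):
        # Generalized Exponential Function: beta = 2 recovers a Gaussian,
        # beta < 2 yields heavier tails, and beta > 2 yields flatter kernels
        # with sharper falloff, fitting hard edges with fewer primitives.
        return amp * torch.exp(-(((x - mu).abs() / alpha) ** beta))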
arXiv Detail & Related papers (2024-02-15T17:32:50Z)