Robust 3D Gaussian Splatting for Novel View Synthesis in Presence of Distractors
- URL: http://arxiv.org/abs/2408.11697v1
- Date: Wed, 21 Aug 2024 15:21:27 GMT
- Title: Robust 3D Gaussian Splatting for Novel View Synthesis in Presence of Distractors
- Authors: Paul Ungermann, Armin Ettenhofer, Matthias Nießner, Barbara Roessle
- Abstract summary: 3D Gaussian Splatting has shown impressive novel view synthesis results.
It is vulnerable to dynamic objects polluting the input data of an otherwise static scene, so-called distractors.
We show that our approach is robust to various distractors and strongly improves rendering quality on distractor-polluted scenes.
- Score: 44.55317154371679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting has shown impressive novel view synthesis results; nonetheless, it is vulnerable to dynamic objects polluting the input data of an otherwise static scene, so-called distractors. Distractors have a severe impact on rendering quality, as they get represented as view-dependent effects or result in floating artifacts. Our goal is to identify and ignore such distractors during the 3D Gaussian optimization to obtain a clean reconstruction. To this end, we take a self-supervised approach that looks at the image residuals during the optimization to determine areas that have likely been falsified by a distractor. In addition, we leverage a pretrained segmentation network to provide object awareness, enabling more accurate exclusion of distractors. This way, we obtain segmentation masks of distractors and effectively ignore them in the loss formulation. We demonstrate that our approach is robust to various distractors and strongly improves rendering quality on distractor-polluted scenes, improving PSNR by 1.86 dB compared to 3D Gaussian Splatting.
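To make the masked-loss idea concrete, below is a minimal PyTorch sketch of residual-based distractor masking. It assumes `rendered` and `target` are (3, H, W) image tensors produced by an external 3DGS pipeline and that `seg_masks` are boolean masks from a pretrained segmentation network; the quantile threshold, the majority-vote rule for dropping whole segments, and all function names are illustrative assumptions, not the authors' implementation.

```python
import torch

def inlier_mask(rendered: torch.Tensor, target: torch.Tensor,
                quantile: float = 0.9) -> torch.Tensor:
    """Flag pixels whose residual is unusually large as likely distractors."""
    residual = (rendered - target).abs().mean(dim=0)        # (H, W) per-pixel error
    threshold = torch.quantile(residual.flatten(), quantile)
    return residual <= threshold                            # True = keep in the loss

def masked_l1_loss(rendered: torch.Tensor, target: torch.Tensor,
                   seg_masks=None) -> torch.Tensor:
    """L1 loss that ignores pixels flagged as distractors."""
    keep = inlier_mask(rendered.detach(), target)
    if seg_masks is not None:
        # Object awareness: drop a whole segment when most of its pixels
        # look falsified, instead of leaving a ragged per-pixel mask.
        for seg in seg_masks:                               # each seg: (H, W) bool
            flagged = (~keep & seg).sum().float()
            if flagged > 0.5 * seg.sum().float():
                keep &= ~seg
    per_pixel = (rendered - target).abs().mean(dim=0)
    return (per_pixel * keep).sum() / keep.sum().clamp(min=1)
```

In this sketch, per-pixel outliers seed the mask and segmentation provides object awareness: if most of a segment's pixels look falsified, the entire segment is excluded, yielding cleaner masks than per-pixel thresholding alone.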
Related papers
- RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS [79.15416002879239]
3D Gaussian Splatting has gained significant attention for its real-time, photo-realistic rendering in novel-view synthesis and 3D modeling.
Existing methods struggle with accurately modeling scenes affected by transient objects, leading to artifacts in the rendered images.
We propose RobustSplat, a robust solution based on two critical designs.
arXiv Detail & Related papers (2025-06-03T11:13:48Z)
- AAA-Gaussians: Anti-Aliased and Artifact-Free 3D Gaussian Rendering [8.972911362220803]
We introduce an adaptive 3D smoothing filter to mitigate aliasing and present a stable view-space bounding method.
Our evaluations further demonstrate the effective removal of aliasing, distortions, and popping artifacts, ensuring real-time, artifact-free rendering.
arXiv Detail & Related papers (2025-04-17T10:16:47Z)
- GaussianFocus: Constrained Attention Focus for 3D Gaussian Splatting [5.759434800012218]
The 3D Gaussian Splatting technique delivers top-tier rendering quality and efficiency.
However, the method tends to generate excessive redundant noisy Gaussians overfitted to every training view.
We introduce GaussianFocus, an innovative approach that incorporates a patch attention algorithm to refine rendering quality.
arXiv Detail & Related papers (2025-03-22T15:18:23Z)
- MVGSR: Multi-View Consistency Gaussian Splatting for Robust Surface Reconstruction [46.081262181141504]
3D Gaussian Splatting (3DGS) has gained significant attention for its high-quality rendering capabilities, ultra-fast training, and inference speeds.
We propose Multi-View Consistency Gaussian Splatting for robust surface reconstruction (MVGSR).
MVGSR achieves competitive geometric accuracy and rendering fidelity compared to state-of-the-art surface reconstruction algorithms.
arXiv Detail & Related papers (2025-03-11T06:53:27Z)
- Distractor-free Generalizable 3D Gaussian Splatting [26.762275313390194]
We present DGGS, a novel framework that addresses a previously unexplored challenge: Distractor-free Generalizable 3D Gaussian Splatting (3DGS).
It mitigates the 3D inconsistency and training instability caused by distractor data in the cross-scene generalizable training setting.
Our generalizable mask prediction even achieves accuracy superior to existing scene-specific training methods.
arXiv Detail & Related papers (2024-11-26T17:17:41Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- SpotlessSplats: Ignoring Distractors in 3D Gaussian Splatting [44.42317312908314]
3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds.
Current methods require highly controlled environments to meet the inter-view consistency assumption of 3DGS.
We present SpotLessSplats, an approach that leverages pre-trained and general-purpose features coupled with robust optimization to effectively ignore transient distractors.
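As a rough illustration of how pretrained features can support robust optimization, the sketch below groups pixels with plain k-means over a feature map and zeroes the loss weight of groups whose reconstruction error is anomalously high. The clustering scheme, the 3x-median threshold, and all names are assumptions for illustration, not SpotlessSplats' actual design.

```python
import torch

def transient_weights(features: torch.Tensor, residual: torch.Tensor,
                      n_clusters: int = 16, iters: int = 10) -> torch.Tensor:
    """features: (C, H, W) pretrained feature map; residual: (H, W) per-pixel error.
    Returns (H, W) loss weights with likely-transient regions set to zero."""
    C, H, W = features.shape
    x = features.permute(1, 2, 0).reshape(-1, C)            # (H*W, C)
    centers = x[torch.randperm(x.shape[0])[:n_clusters]].clone()
    for _ in range(iters):                                  # plain k-means
        assign = torch.cdist(x, centers).argmin(dim=1)
        for k in range(n_clusters):
            members = x[assign == k]
            if members.numel() > 0:
                centers[k] = members.mean(dim=0)
    err = residual.flatten()
    weights = torch.ones_like(err)
    for k in range(n_clusters):
        idx = assign == k
        # A semantically coherent group with far-above-median error is
        # treated as a transient object and excluded from the loss.
        if idx.any() and err[idx].mean() > 3 * err.median():
            weights[idx] = 0.0
    return weights.reshape(H, W)
```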
arXiv Detail & Related papers (2024-06-28T17:07:11Z)
- PruNeRF: Segment-Centric Dataset Pruning via 3D Spatial Consistency [33.68948881727943]
PruNeRF is a segment-centric dataset pruning framework based on 3D spatial consistency.
Our experiments on benchmark datasets demonstrate that PruNeRF consistently outperforms state-of-the-art methods in robustness against distractors.
arXiv Detail & Related papers (2024-06-02T16:49:05Z)
- A Refined 3D Gaussian Representation for High-Quality Dynamic Scene Reconstruction [2.022451212187598]
In recent years, Neural Radiance Fields (NeRF) has revolutionized three-dimensional (3D) reconstruction with its implicit representation.
3D Gaussian Splatting (3D-GS) has departed from the implicit representation of neural networks and instead directly represents scenes as point clouds with Gaussian-shaped distributions.
This paper proposes a refined 3D Gaussian representation for high-quality dynamic scene reconstruction.
Experimental results demonstrate that our method surpasses existing approaches in rendering quality and speed, while significantly reducing the memory usage associated with 3D-GS.
arXiv Detail & Related papers (2024-05-28T07:12:22Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
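A minimal sketch of the voxel-constrained sampling step, assuming rays are given as origins and unit directions and a voxel as an axis-aligned box; the interface is illustrative, not CVT-xRF's actual code.

```python
import torch

def sample_in_voxel(origins: torch.Tensor, dirs: torch.Tensor,
                    vmin: torch.Tensor, vmax: torch.Tensor, n_samples: int = 8):
    """origins, dirs: (N, 3) rays; vmin, vmax: (3,) voxel corners.
    Returns (M, n_samples, 3) points inside the voxel and the (N,) hit mask."""
    t0 = (vmin - origins) / dirs                            # (N, 3) per-axis slab entry
    t1 = (vmax - origins) / dirs
    t_near = torch.minimum(t0, t1).max(dim=1).values        # latest entry plane
    t_far = torch.maximum(t0, t1).min(dim=1).values         # earliest exit plane
    hit = (t_far > t_near) & (t_far > 0)                    # rays crossing the voxel
    t_near = t_near.clamp(min=0.0)                          # start in front of camera
    frac = torch.linspace(0.0, 1.0, n_samples, device=origins.device)
    ts = t_near[hit, None] + frac * (t_far - t_near)[hit, None]   # (M, n_samples)
    points = origins[hit, None, :] + ts[..., None] * dirs[hit, None, :]
    return points, hit
```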
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
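The anchor idea can be sketched as follows: each anchor carries a learned feature and spawns k local Gaussian centers from predicted offsets, conditioned on the viewing direction. The module layout, the single linear head, and all names are illustrative assumptions, not Scaffold-GS's actual architecture.

```python
import torch
import torch.nn as nn

class AnchorGaussians(nn.Module):
    """Each anchor point spawns k local Gaussian centers from learned offsets."""
    def __init__(self, n_anchors: int, k: int = 10, feat_dim: int = 32):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(n_anchors, 3))       # anchor positions
        self.features = nn.Parameter(torch.randn(n_anchors, feat_dim))
        self.offset_head = nn.Linear(feat_dim + 3, k * 3)            # k offsets per anchor
        self.k = k

    def forward(self, view_dir: torch.Tensor) -> torch.Tensor:
        """view_dir: (3,) viewing direction -> (n_anchors * k, 3) Gaussian centers."""
        n = self.anchors.shape[0]
        cond = torch.cat([self.features, view_dir.expand(n, 3)], dim=1)
        offsets = self.offset_head(cond).view(n, self.k, 3)
        # Local Gaussians stay tied to their anchor, which keeps the
        # representation structured and curbs redundant free-floating splats.
        return (self.anchors[:, None, :] + offsets).reshape(-1, 3)
```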
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields [12.92658687936068]
We take advantage of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs.
We learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction.
Rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints.
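A hedged sketch of that feedback loop: a small PatchGAN-style critic scores rendered versus real patches, and its adversarial term is added to the photometric loss that trains the radiance field. The architecture, loss form, and weighting below are illustrative assumptions, not GANeRF's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

patch_critic = nn.Sequential(                               # tiny PatchGAN-style critic
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, padding=1),                         # per-patch real/fake logits
)

def critic_loss(rendered: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """Train the critic to separate real photos from renderings (B, 3, H, W)."""
    logits_real = patch_critic(real)
    logits_fake = patch_critic(rendered.detach())            # don't backprop into NeRF
    return (F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
            + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake)))

def reconstruction_loss(rendered: torch.Tensor, real: torch.Tensor,
                        adv_weight: float = 0.01) -> torch.Tensor:
    """Photometric loss plus adversarial feedback pushing renders toward realism."""
    logits_fake = patch_critic(rendered)
    adv = F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
    return F.l1_loss(rendered, real) + adv_weight * adv
```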
arXiv Detail & Related papers (2023-06-09T17:12:35Z)
- Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
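One way to see that link: replacing a hard, non-differentiable renderer with its Gaussian-smoothed expectation admits a score-function gradient estimator, sketched below with a simple baseline for variance reduction. This is a generic randomized-smoothing estimator, illustrative of the idea rather than the paper's specific construction.

```python
import torch

def smoothed_grad(render_fn, params: torch.Tensor, target: torch.Tensor,
                  sigma: float = 0.01, n_samples: int = 8) -> torch.Tensor:
    """Score-function gradient of E_eps[ loss(render_fn(params + sigma*eps)) ]
    for a black-box, possibly non-differentiable render_fn."""
    def loss(p):
        return (render_fn(p) - target).abs().mean().item()
    baseline = loss(params)                                  # variance reduction
    grad = torch.zeros_like(params)
    for _ in range(n_samples):
        eps = torch.randn_like(params)
        # For Gaussian smoothing: grad = E[(f(p + sigma*eps) - f(p)) * eps / sigma].
        grad += (loss(params + sigma * eps) - baseline) * eps / sigma
    return grad / n_samples
```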
arXiv Detail & Related papers (2021-10-18T08:56:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.