Evaluating Alternatives to SFM Point Cloud Initialization for Gaussian Splatting
- URL: http://arxiv.org/abs/2404.12547v3
- Date: Thu, 23 May 2024 18:15:38 GMT
- Title: Evaluating Alternatives to SFM Point Cloud Initialization for Gaussian Splatting
- Authors: Yalda Foroutan, Daniel Rebain, Kwang Moo Yi, Andrea Tagliasacchi
- Abstract summary: 3D Gaussian Splatting has been embraced as a versatile and effective method for scene reconstruction and novel view synthesis.
Its reliance on high-quality point cloud initialization by Structure-from-Motion (SFM) algorithms is a significant limitation to be overcome.
We show how NeRF reconstructions can be utilized to bypass the dependency on SFM data.
- Score: 31.724777502129918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting has recently been embraced as a versatile and effective method for scene reconstruction and novel view synthesis, owing to its high-quality results and compatibility with hardware rasterization. Despite its advantages, Gaussian Splatting's reliance on high-quality point cloud initialization by Structure-from-Motion (SFM) algorithms is a significant limitation to be overcome. To this end, we investigate various initialization strategies for Gaussian Splatting and delve into how volumetric reconstructions from Neural Radiance Fields (NeRF) can be utilized to bypass the dependency on SFM data. Our findings demonstrate that random initialization can perform much better if carefully designed and that by employing a combination of improved initialization strategies and structure distillation from low-cost NeRF models, it is possible to achieve equivalent results, or at times even superior, to those obtained from SFM initialization. Source code is available at https://theialab.github.io/nerf-3dgs .
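As a concrete illustration of the structure-distillation idea in the abstract, the sketch below shows one way a coarse initialization point cloud could be sampled from a trained NeRF's density field instead of from SFM. It is not the authors' released code; the `query_density` callable, the bounding box, the threshold, and the point budget are all hypothetical placeholders.

```python
# Minimal sketch (not the paper's released implementation): distill a coarse
# point cloud from a trained NeRF density field to replace SFM initialization.
# `query_density` is a hypothetical callable returning density sigma for 3D points.
import numpy as np

def nerf_to_init_points(query_density, bbox_min, bbox_max,
                        n_candidates=200_000, n_keep=100_000,
                        density_threshold=5.0, rng=None):
    """Rejection-sample candidate points and keep those with high NeRF density."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bbox_min, float), np.asarray(bbox_max, float)
    pts = rng.uniform(lo, hi, size=(n_candidates, 3))      # uniform candidates in the box
    sigma = query_density(pts)                              # NeRF density at each point
    keep = pts[sigma > density_threshold]                   # discard empty space
    if len(keep) > n_keep:                                   # subsample to a fixed budget
        keep = keep[rng.choice(len(keep), n_keep, replace=False)]
    return keep                                              # (M, 3) initialization points

# Toy usage with a fake density field (a solid sphere of radius 1):
if __name__ == "__main__":
    fake_density = lambda p: 10.0 * (np.linalg.norm(p, axis=-1) < 1.0)
    print(nerf_to_init_points(fake_density, [-2, -2, -2], [2, 2, 2]).shape)
```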
Related papers
- DenseSplat: Densifying Gaussian Splatting SLAM with Neural Radiance Prior [3.2544733184776304]
We introduce DenseSplat, the first SLAM system that effectively combines the advantages of NeRF and 3DGS.
DenseSplat utilizes sparse keyframes and NeRF priors to initialize primitives that densely populate maps and seamlessly fill gaps.
It also implements geometry-aware primitive sampling and pruning strategies to manage and enhance rendering efficiency.
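A hedged sketch of the gap-filling idea mentioned above (not DenseSplat's actual implementation): NeRF-prior samples are kept only where the sparse keyframe points leave holes, so the merged set densely populates the map. The distance threshold is an assumed parameter.

```python
# Simplified gap filling: augment sparse points with NeRF-prior samples that
# land in regions the sparse set does not already cover.
import numpy as np
from scipy.spatial import cKDTree

def fill_gaps(sparse_pts, nerf_prior_pts, min_gap=0.05):
    """Keep only prior samples farther than `min_gap` from any sparse point."""
    dist, _ = cKDTree(sparse_pts).query(nerf_prior_pts, k=1)
    return np.vstack([sparse_pts, nerf_prior_pts[dist > min_gap]])
```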
arXiv Detail & Related papers (2025-02-13T09:41:08Z)
- GP-GS: Gaussian Processes for Enhanced Gaussian Splatting [10.45038376276218]
This paper proposes a novel 3D reconstruction framework that achieves adaptive and uncertainty-guided densification of sparse SfM point clouds.
The pipeline utilizes uncertainty estimates to guide the pruning of high-variance predictions.
Experiments conducted on synthetic and real-world datasets validate the effectiveness and practicality of the proposed framework.
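The sketch below is a hypothetical, simplified version of the uncertainty-guided densification and pruning described above, fitting a scikit-learn Gaussian Process on sparse pixel-to-depth pairs; the actual GP-GS pipeline predicts more attributes and unprojects results into 3D Gaussians, which is omitted here.

```python
# Uncertainty-guided densification sketch: predict dense depth from sparse SfM
# observations with a GP, then prune predictions whose variance is high.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def densify_depth(sparse_uv, sparse_depth, query_uv, max_std=0.1):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(sparse_uv, sparse_depth)                     # sparse_uv: (N, 2) pixel coords
    mean, std = gp.predict(query_uv, return_std=True)   # dense predictions + uncertainty
    keep = std < max_std                                # prune high-variance predictions
    return query_uv[keep], mean[keep]
```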
arXiv Detail & Related papers (2025-02-04T12:50:16Z)
- EasySplat: View-Adaptive Learning makes 3D Gaussian Splatting Easy [34.27245715540978]
We introduce EasySplat, a novel framework for achieving high-quality 3DGS modeling.
We propose an efficient grouping strategy based on view similarity, and use robust pointmap priors to obtain high-quality point clouds.
After obtaining a reliable scene structure, we propose a novel densification approach that adaptively splits Gaussian primitives based on the average shape of neighboring Gaussian ellipsoids.
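A minimal sketch of the neighbour-relative split test described above, not the official EasySplat code; the size proxy, neighbour count, and factor are all assumptions.

```python
# Flag a Gaussian for splitting when its size exceeds the average size of its
# K nearest neighbours by a fixed factor.
import numpy as np
from scipy.spatial import cKDTree

def split_mask(centers, scales, k=8, factor=1.5):
    """centers: (N, 3) Gaussian means; scales: (N, 3) per-axis Gaussian scales."""
    size = scales.max(axis=1)                        # crude proxy for ellipsoid size
    _, idx = cKDTree(centers).query(centers, k=k + 1)
    neighbor_mean = size[idx[:, 1:]].mean(axis=1)    # skip self (column 0)
    return size > factor * neighbor_mean             # True -> split this Gaussian
```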
arXiv Detail & Related papers (2025-01-02T01:56:58Z)
- MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields [73.49548565633123]
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering.
Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images.
We propose a view synthesis framework based on 3D Gaussian Splatting, named MCGS, enabling scene reconstruction from sparse input views.
arXiv Detail & Related papers (2024-10-15T08:39:05Z)
- R$^2$-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction [53.19869886963333]
3D Gaussian splatting (3DGS) has shown promising results in image rendering and surface reconstruction.
This paper introduces R$^2$-Gaussian, the first 3DGS-based framework for sparse-view tomographic reconstruction.
arXiv Detail & Related papers (2024-05-31T08:39:02Z)
- Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting [29.58220473268378]
We propose a novel optimization strategy dubbed RAIN-GS (Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting).
RAIN-GS successfully trains 3D Gaussians from sub-optimal point clouds (e.g., randomly initialized point clouds).
We demonstrate the efficacy of our strategy through quantitative and qualitative comparisons on multiple datasets, where RAIN-GS trained with random point cloud achieves performance on-par with or even better than 3DGS trained with accurate SfM point cloud.
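For illustration only, a minimal sketch of the kind of random point-cloud initialization discussed above: points are sampled uniformly inside a bounding region derived from the camera centers. RAIN-GS's actual strategy differs; the point count and expansion factor here are assumptions.

```python
# Random initialization inside an enlarged bounding region of the camera rig.
import numpy as np

def random_init(camera_centers, n_points=50_000, expand=3.0, rng=None):
    rng = np.random.default_rng(rng)
    center = camera_centers.mean(axis=0)
    radius = np.linalg.norm(camera_centers - center, axis=1).max() * expand
    return center + rng.uniform(-radius, radius, size=(n_points, 3))
```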
arXiv Detail & Related papers (2024-03-14T14:04:21Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach: that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
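A hedged sketch of what such a differentiable proposal module might look like (the paper compares several architectures; this is not the one it adopts): an MLP maps the coarse pass's per-bin weights to sorted fine-sample depths, so the whole chain stays trainable end-to-end.

```python
# Differentiable sample proposer: coarse per-bin weights -> proposed fine depths.
import torch
import torch.nn as nn

class SampleProposer(nn.Module):
    def __init__(self, n_coarse=64, n_fine=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_coarse, hidden), nn.ReLU(),
            nn.Linear(hidden, n_fine),
        )

    def forward(self, coarse_weights, t_near, t_far):
        # coarse_weights: (B, n_coarse) rendering weights from the coarse network
        frac = torch.sigmoid(self.net(coarse_weights))   # (B, n_fine) in (0, 1)
        frac, _ = torch.sort(frac, dim=-1)               # keep depths ordered along the ray
        return t_near + (t_far - t_near) * frac          # proposed fine-sample depths

# Example: proposer = SampleProposer(); t = proposer(torch.rand(4, 64), 2.0, 6.0)
```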
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
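For context, the sketch below implements the classical MMSE denoiser for a Gaussian-mixture prior under additive Gaussian noise, which is the kind of denoising function AMP-style algorithms apply at each iteration; in L-GM-AMP the mixture parameters are learned rather than fixed as they are here.

```python
# MMSE denoiser for x ~ sum_k w_k N(mu_k, s_k^2) observed as r = x + N(0, tau).
import numpy as np

def gm_denoise(r, tau, weights, means, variances):
    """Posterior mean of x given the noisy observation r."""
    r = np.atleast_1d(r)[:, None]                     # (N, 1)
    w, mu, s2 = map(np.asarray, (weights, means, variances))
    total_var = s2 + tau                              # (K,) marginal variance of r per component
    # responsibilities of each mixture component given the noisy observation
    log_lik = -0.5 * ((r - mu) ** 2 / total_var + np.log(2 * np.pi * total_var))
    resp = w * np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    post_mean_k = (s2 * r + tau * mu) / total_var     # per-component posterior means
    return (resp * post_mean_k).sum(axis=1)           # MMSE estimate per entry
```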
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning susceptible to initialization issues.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
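As a simplified illustration of the fusion idea (the paper's MSE-optimal procedure also accounts for the nonlinearity between layers, which is omitted here), two adjacent linear layers collapse exactly into one:

```python
# Exact fusion of two stacked affine layers into a single affine layer.
import numpy as np

def fuse_linear(W1, b1, W2, b2):
    """Return (W, b) such that W @ x + b == W2 @ (W1 @ x + b1) + b2 for all x."""
    return W2 @ W1, W2 @ b1 + b2

# Quick consistency check on random weights:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
    W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
    x = rng.normal(size=4)
    W, b = fuse_linear(W1, b1, W2, b2)
    assert np.allclose(W @ x + b, W2 @ (W1 @ x + b1) + b2)
```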
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.