VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction
- URL: http://arxiv.org/abs/2402.17427v1
- Date: Tue, 27 Feb 2024 11:40:50 GMT
- Title: VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction
- Authors: Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue
Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, Wenming Yang
- Abstract summary: We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
- Score: 59.40711222096875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing NeRF-based methods for large scene reconstruction often have
limitations in visual quality and rendering speed. While the recent 3D Gaussian
Splatting works well on small-scale and object-centric scenes, scaling it up to
large scenes poses challenges due to limited video memory, long optimization
time, and noticeable appearance variations. To address these challenges, we
present VastGaussian, the first method for high-quality reconstruction and
real-time rendering on large scenes based on 3D Gaussian Splatting. We propose
a progressive partitioning strategy to divide a large scene into multiple
cells, where the training cameras and point cloud are properly distributed with
an airspace-aware visibility criterion. These cells are merged into a complete
scene after parallel optimization. We also introduce decoupled appearance
modeling into the optimization process to reduce appearance variations in the
rendered images. Our approach outperforms existing NeRF-based methods and
achieves state-of-the-art results on multiple large scene datasets, enabling
fast optimization and high-fidelity real-time rendering.
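The partitioning strategy in the abstract can be illustrated with a minimal sketch. The actual method uses an airspace-aware visibility criterion to distribute training cameras and points among cells; here that is simplified to assigning each camera to every cell whose slightly expanded ground-plane bounds contain it, and all function and variable names are hypothetical:

```python
import numpy as np

def assign_cameras_to_cells(cam_positions, cell_bounds, expand=0.2):
    """Hypothetical sketch of camera-to-cell assignment for parallel
    per-cell optimization. cam_positions: (N, 2) xy camera positions;
    cell_bounds: list of (xmin, xmax, ymin, ymax) ground-plane cells.
    Each cell is expanded by a margin so that boundary regions receive
    supervision from cameras in neighboring cells."""
    assignments = []
    for (xmin, xmax, ymin, ymax) in cell_bounds:
        dx, dy = expand * (xmax - xmin), expand * (ymax - ymin)
        inside = ((cam_positions[:, 0] >= xmin - dx) &
                  (cam_positions[:, 0] <= xmax + dx) &
                  (cam_positions[:, 1] >= ymin - dy) &
                  (cam_positions[:, 1] <= ymax + dy))
        assignments.append(np.flatnonzero(inside))
    return assignments
```

After per-cell optimization, the cells would be merged back into one scene; the expansion margin is what lets adjacent cells overlap enough to blend seamlessly.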
Related papers
- Decoupling Appearance Variations with 3D Consistent Features in Gaussian Splatting [50.98884579463359]
We propose DAVIGS, a method that decouples appearance variations in a plug-and-play manner.
By transforming the rendering results at the image level instead of the Gaussian level, our approach can model appearance variations with minimal optimization time and memory overhead.
We validate our method on several appearance-variant scenes, and demonstrate that it achieves state-of-the-art rendering quality with minimal training time and memory usage.
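The image-level decoupling described above can be pictured as a learnable per-image color transform applied to the rendered output rather than to individual Gaussians. The sketch below uses a per-channel affine transform as a stand-in; this is a hedged illustration with hypothetical names, not DAVIGS's actual formulation:

```python
import numpy as np

def apply_appearance_transform(rendered, gain, bias):
    """Hypothetical image-level appearance model: a per-image, per-channel
    affine transform (gain, bias) applied to the rendered image, so the
    underlying Gaussians stay free of per-image appearance variation.
    rendered: (H, W, 3) in [0, 1]; gain, bias: (3,) learnable parameters."""
    return np.clip(rendered * gain + bias, 0.0, 1.0)
```

Because the transform acts on the final image, only a handful of parameters per training view are optimized, which is consistent with the minimal time and memory overhead claimed above.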
arXiv Detail & Related papers (2025-01-18T14:55:58Z)
- Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitives [60.217580865237835]
3D Gaussian Splatting (3D-GS) is a recent 3D scene reconstruction technique that enables real-time rendering of novel views by modeling scenes as parametric point clouds of differentiable 3D Gaussians.
We identify and address two key inefficiencies in 3D-GS, achieving substantial improvements in rendering speed, model size, and training time.
Our Speedy-Splat approach combines these techniques to accelerate average rendering speed by a drastic $6.71\times$ across scenes from the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets, with $10.6\times$ fewer primitives than 3D-GS.
arXiv Detail & Related papers (2024-11-30T20:25:56Z)
- SCube: Instant Large-Scale Scene Reconstruction using VoxSplats [55.383993296042526]
We present SCube, a novel method for reconstructing large-scale 3D scenes (geometry, appearance, and semantics) from a sparse set of posed images.
Our method encodes reconstructed scenes using a novel representation VoxSplat, which is a set of 3D Gaussians supported on a high-resolution sparse-voxel scaffold.
arXiv Detail & Related papers (2024-10-26T00:52:46Z)
- GaRField++: Reinforced Gaussian Radiance Fields for Large-Scale 3D Scene Reconstruction [1.7624442706463355]
This paper proposes a novel framework for large-scale scene reconstruction based on 3D Gaussian splatting (3DGS).
To tackle the scalability issue, we split the large scene into multiple cells, each associated with its candidate point cloud and camera views.
We show that our method consistently produces higher-fidelity renderings than state-of-the-art methods for large-scale scene reconstruction.
arXiv Detail & Related papers (2024-09-19T13:43:31Z)
- SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction [24.33543853742041]
3D Gaussian Splatting (3DGS) has emerged as a practical and scalable reconstruction method.
We propose an optimization strategy that effectively regularizes splat features by modeling them as the outputs of a corresponding implicit neural field.
Our approach effectively handles static and dynamic cases, as demonstrated by extensive testing across different setups and scene complexities.
arXiv Detail & Related papers (2024-09-17T14:04:20Z)
- Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields [13.729716867839509]
We propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance.
In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
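The learnable mask strategy mentioned above can be sketched as a per-Gaussian logit whose sigmoid is binarized at a threshold, with low-scoring Gaussians dropped. During training, a straight-through estimator would let gradients reach the logits; the names below are illustrative, not the paper's API:

```python
import numpy as np

def prune_gaussians(gaussian_params, mask_logits, threshold=0.5):
    """Hypothetical sketch of a learnable pruning mask: each Gaussian
    carries a logit; its sigmoid probability is compared to a threshold
    and Gaussians below it are removed from the model.
    gaussian_params: (N, D) array of per-Gaussian parameters."""
    probs = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid over logits
    keep = probs > threshold
    return gaussian_params[keep], int(keep.sum())
```

The appeal of this formulation is that pruning becomes part of the optimization objective rather than a fixed heuristic, which is how a large reduction in Gaussian count can coexist with preserved quality.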
arXiv Detail & Related papers (2024-08-07T14:56:34Z)
- MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo [54.00987996368157]
We present MVSGaussian, a new generalizable 3D Gaussian representation approach derived from Multi-View Stereo (MVS).
MVSGaussian achieves real-time rendering with better synthesis quality for each scene.
arXiv Detail & Related papers (2024-05-20T17:59:30Z)
- DN-Splatter: Depth and Normal Priors for Gaussian Splatting and Meshing [19.437747560051566]
We propose an adaptive depth loss based on the gradient of color images, improving depth estimation and novel view synthesis results over various baselines.
Our simple yet effective regularization technique enables direct mesh extraction from the Gaussian representation, yielding more physically accurate reconstructions of indoor scenes.
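One plausible reading of the gradient-adaptive depth loss above is to down-weight depth residuals where the color-image gradient is large (at edges, where depth priors are least reliable). The sketch below encodes that idea with an exponential weighting; all names are hypothetical and this is not DN-Splatter's exact loss:

```python
import numpy as np

def adaptive_depth_loss(pred_depth, gt_depth, gray):
    """Hypothetical sketch of a gradient-adaptive depth loss: per-pixel
    depth residuals are weighted by exp(-|grad I|), so flat image regions
    (weight near 1) dominate and high-gradient edges are trusted less.
    pred_depth, gt_depth, gray: (H, W) arrays; gray is the grayscale image."""
    gx = np.abs(np.gradient(gray, axis=1))
    gy = np.abs(np.gradient(gray, axis=0))
    weight = np.exp(-(gx + gy))  # ~1 in flat regions, small at edges
    return float(np.mean(weight * np.abs(pred_depth - gt_depth)))
```

In a perfectly flat image the weight is 1 everywhere and this reduces to a plain L1 depth loss; the adaptivity only changes behavior near texture edges.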
arXiv Detail & Related papers (2024-03-26T16:00:31Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.