Faster and Better 3D Splatting via Group Training
- URL: http://arxiv.org/abs/2412.07608v1
- Date: Tue, 10 Dec 2024 15:47:17 GMT
- Title: Faster and Better 3D Splatting via Group Training
- Authors: Chengbo Wang, Guozheng Ma, Yifei Xue, Yizhen Lao
- Abstract summary: Group Training is a strategy that organizes Gaussian primitives into manageable groups, optimizing training efficiency and improving rendering quality. This approach shows universal compatibility with existing 3DGS frameworks, including vanilla 3DGS and Mip-Splatting. Experiments reveal that our straightforward Group Training strategy achieves up to 30% faster convergence and improved rendering quality across diverse scenarios.
- Score: 4.7913404251054335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has emerged as a powerful technique for novel view synthesis, demonstrating remarkable capability in high-fidelity scene reconstruction through its Gaussian primitive representations. However, the computational overhead induced by the massive number of primitives poses a significant bottleneck to training efficiency. To overcome this challenge, we propose Group Training, a simple yet effective strategy that organizes Gaussian primitives into manageable groups, optimizing training efficiency and improving rendering quality. This approach shows universal compatibility with existing 3DGS frameworks, including vanilla 3DGS and Mip-Splatting, consistently achieving accelerated training while maintaining superior synthesis quality. Extensive experiments reveal that our straightforward Group Training strategy achieves up to 30% faster convergence and improved rendering quality across diverse scenarios.
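The abstract describes Group Training only at a high level. As a rough illustration of the general idea, the sketch below cycles gradient updates through disjoint groups of primitives; the random partitioning, group count, and placeholder `render_loss` are illustrative assumptions, not the paper's actual algorithm.

```python
import torch

# Minimal standalone sketch: N Gaussian primitives with a generic
# parameter tensor; each iteration updates only one group of primitives.
N, D, num_groups = 10_000, 59, 4   # 59 ~ typical per-Gaussian parameter count
params = torch.randn(N, D, requires_grad=True)
groups = torch.randperm(N).chunk(num_groups)  # random disjoint groups
opt = torch.optim.Adam([params], lr=1e-3)

def render_loss(p):
    # Placeholder for differentiable rasterization + photometric loss.
    return p.square().mean()

for it in range(1_000):
    active = groups[it % num_groups]          # cycle through the groups
    loss = render_loss(params)
    opt.zero_grad()
    loss.backward()
    keep = torch.zeros(N, dtype=torch.bool)
    keep[active] = True
    params.grad[~keep] = 0                    # freeze inactive primitives
    opt.step()
```

In the actual method, the grouping criterion and schedule are what the paper studies; a production version would also likely use per-group optimizer state so that Adam's momentum does not leak updates into frozen groups.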
Related papers
- DashGaussian: Optimizing 3D Gaussian Splatting in 200 Seconds [71.37326848614133]
We propose DashGaussian, a scheduling scheme over the optimization complexity of 3DGS.
We show that our method accelerates the optimization of various 3DGS backbones by 45.7% on average.
arXiv Detail & Related papers (2025-03-24T07:17:27Z)
- ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes [81.48624894781257]
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required.
Recent methods address this issue by compressing the storage size of densified Gaussians, yet fail to preserve rendering quality and efficiency.
We propose ProtoGS to learn Gaussian prototypes to represent Gaussian primitives, significantly reducing the total Gaussian amount without sacrificing visual quality.
arXiv Detail & Related papers (2025-03-21T18:55:14Z)
- Gaussian On-the-Fly Splatting: A Progressive Framework for Robust Near Real-Time 3DGS Optimization [8.422116335889163]
This work introduces On-the-Fly GS, a framework enabling near real-time 3DGS optimization during image capture.
As each image arrives, its pose and sparse points are updated via on-the-fly SfM, and newly optimized Gaussians are immediately integrated into the 3DGS field.
Experiments on multiple benchmark datasets show that our On-the-Fly GS reduces training time significantly, optimizing each new image in seconds with minimal rendering loss.
arXiv Detail & Related papers (2025-03-17T11:47:58Z)
- StructGS: Adaptive Spherical Harmonics and Rendering Enhancements for Superior 3D Gaussian Splatting [5.759434800012218]
StructGS is a framework that enhances 3D Gaussian Splatting (3DGS) for improved novel-view synthesis in 3D reconstruction.
Our framework significantly reduces computational redundancy, enhances detail capture and supports high-resolution rendering from low-resolution inputs.
arXiv Detail & Related papers (2025-03-09T05:39:44Z)
- HiCoM: Hierarchical Coherent Motion for Streamable Dynamic Scene with 3D Gaussian Splatting [7.507657419706855]
This paper proposes an efficient framework, dubbed HiCoM, with three key components.
First, we construct a compact and robust initial 3DGS representation using a perturbation smoothing strategy.
Next, we introduce a Hierarchical Coherent Motion mechanism that leverages the inherent non-uniform distribution and local consistency of 3D Gaussians.
Experiments conducted on two widely used datasets show that our framework improves the learning efficiency of state-of-the-art methods by about 20%.
arXiv Detail & Related papers (2024-11-12T04:40:27Z)
- MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields [73.49548565633123]
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering.
Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images.
We propose a view synthesis framework based on 3D Gaussian Splatting, named MCGS, enabling scene reconstruction from sparse input views.
arXiv Detail & Related papers (2024-10-15T08:39:05Z)
- SuperGS: Super-Resolution 3D Gaussian Splatting via Latent Feature Field and Gradient-guided Splitting [3.5757604402398697]
Super-Resolution 3DGS (SuperGS) is an expansion of 3DGS designed with a two-stage coarse-to-fine training framework.
SuperGS surpasses state-of-the-art high-resolution novel view synthesis (HRNVS) methods on challenging real-world datasets using only low-resolution inputs.
arXiv Detail & Related papers (2024-10-03T15:18:28Z)
- MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis [22.80370814838661]
Recent works in volume rendering, e.g., NeRF and 3D Gaussian Splatting (3DGS), significantly advance the rendering quality and efficiency.
We propose a new 3DGS optimization method embodying four key novel contributions.
arXiv Detail & Related papers (2024-10-02T23:48:31Z)
- Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z)
- LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS (LP-3DGS), where a trainable binary mask applied to the importance score finds the optimal pruning ratio automatically.
Experiments have shown that LP-3DGS consistently achieves a good balance between efficiency and rendering quality.
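As a hedged illustration of the mask mechanism summarized above, the sketch below trains a hard 0/1 mask with a straight-through estimator; the parameterization and toy objective are assumptions, since the summary does not spell out LP-3DGS's exact formulation.

```python
import torch

# Hypothetical sketch of a trainable binary pruning mask via a
# straight-through estimator; LP-3DGS's exact formulation may differ.
N = 10_000
importance = torch.rand(N)                  # per-Gaussian importance score
mask_logits = torch.zeros(N, requires_grad=True)
opt = torch.optim.Adam([mask_logits], lr=1e-2)

for _ in range(500):
    soft = torch.sigmoid(mask_logits)
    hard = (soft > 0.5).float()
    # Forward pass uses the hard 0/1 mask; the backward pass flows
    # through the soft sigmoid (straight-through trick).
    mask = hard + soft - soft.detach()
    # Toy objective: reward keeping high-importance Gaussians while
    # penalizing the number retained (sparsity pressure).
    loss = -(mask * importance).sum() + 0.5 * mask.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

keep = torch.sigmoid(mask_logits) > 0.5     # final pruning decision
```

The point is that the pruning ratio falls out of optimization rather than being hand-tuned: Gaussians whose importance outweighs the sparsity penalty end up with a mask of 1.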
arXiv Detail & Related papers (2024-05-29T05:58:34Z)
- Bootstrap-GS: Self-Supervised Augmentation for High-Fidelity Gaussian Splatting [9.817215106596146]
3D-GS faces limitations when generating novel views that significantly deviate from those encountered during training.
We introduce a bootstrapping framework to address this problem.
Our approach synthesizes pseudo-ground truth from novel views that align with the limited training set.
arXiv Detail & Related papers (2024-04-29T12:57:05Z)
- GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at ≥ 5 FPS and real-time rendering at ≥ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z)
- GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS, exhibiting a PSNR improvement of 1.15 dB.
arXiv Detail & Related papers (2024-02-22T16:00:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.