FreeSplat++: Generalizable 3D Gaussian Splatting for Efficient Indoor Scene Reconstruction
- URL: http://arxiv.org/abs/2503.22986v1
- Date: Sat, 29 Mar 2025 06:22:08 GMT
- Title: FreeSplat++: Generalizable 3D Gaussian Splatting for Efficient Indoor Scene Reconstruction
- Authors: Yunsong Wang, Tianxin Huang, Hanlin Chen, Gim Hee Lee
- Abstract summary: FreeSplat++ is an alternative approach to large-scale indoor whole-scene reconstruction. Our method with depth-regularized per-scene fine-tuning demonstrates substantial improvements in reconstruction accuracy and a notable reduction in training time.
- Score: 50.534213038479926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the integration of the efficient feed-forward scheme into 3D Gaussian Splatting (3DGS) has been actively explored. However, most existing methods focus on sparse-view reconstruction of small regions and cannot produce satisfactory whole-scene reconstruction results in terms of either quality or efficiency. In this paper, we propose FreeSplat++, which extends generalizable 3DGS into an alternative approach to large-scale indoor whole-scene reconstruction, with the potential to significantly accelerate reconstruction and improve geometric accuracy. To facilitate whole-scene reconstruction, we first propose the Low-cost Cross-View Aggregation framework to efficiently process extremely long input sequences. Subsequently, we introduce a carefully designed pixel-wise triplet fusion method to incrementally aggregate overlapping 3D Gaussian primitives from multiple views, adaptively reducing their redundancy. Furthermore, we propose a weighted floater removal strategy that effectively reduces floaters and serves as an explicit depth fusion approach, which is crucial in whole-scene reconstruction. After the feed-forward reconstruction of 3DGS primitives, we investigate a depth-regularized per-scene fine-tuning process: leveraging the dense, multi-view consistent depth maps obtained during the feed-forward prediction phase as an extra constraint, we refine the entire scene's 3DGS primitives to enhance rendering quality while preserving geometric accuracy. Extensive experiments confirm that FreeSplat++ significantly outperforms existing generalizable 3DGS methods, especially in whole-scene reconstruction. Compared to conventional per-scene optimized 3DGS approaches, our method with depth-regularized per-scene fine-tuning demonstrates substantial improvements in reconstruction accuracy and a notable reduction in training time.
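For intuition, the depth-regularized per-scene fine-tuning stage can be read as a standard photometric objective plus a penalty tying rendered depth to the feed-forward depth maps. A minimal PyTorch-style sketch is given below; the renderer `render_color_and_depth`, the plain L1 photometric term, and the weight `lambda_depth` are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def finetune_step(gaussians, camera, gt_image, feedforward_depth,
                  render_color_and_depth, optimizer, lambda_depth=0.1):
    """One hypothetical fine-tuning step: photometric loss plus a depth
    regularizer against the multi-view consistent feed-forward depth map."""
    optimizer.zero_grad()
    # Differentiable rasterization of the current 3DGS primitives;
    # render_color_and_depth is a stand-in for a 3DGS renderer that also
    # outputs per-pixel expected depth.
    pred_image, pred_depth = render_color_and_depth(gaussians, camera)

    # Photometric reconstruction term (L1 here for simplicity; per-scene
    # 3DGS pipelines usually combine L1 with a D-SSIM term).
    loss_photo = (pred_image - gt_image).abs().mean()

    # Depth regularizer: keep rendered depth close to the dense,
    # multi-view consistent depth predicted in the feed-forward stage.
    valid = feedforward_depth > 0  # skip pixels without a depth estimate
    loss_depth = (pred_depth[valid] - feedforward_depth[valid]).abs().mean()

    loss = loss_photo + lambda_depth * loss_depth
    loss.backward()
    optimizer.step()
    return loss.item()
```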
Related papers
- StreamGS: Online Generalizable Gaussian Splatting Reconstruction for Unposed Image Streams [32.91936079359693]
We propose StreamGS, an online generalizable 3DGS reconstruction method for unposed image streams. StreamGS transforms image streams to 3D Gaussian streams by predicting and aggregating per-frame Gaussians. Experiments on diverse datasets have demonstrated that StreamGS achieves quality on par with optimization-based approaches but does so 150 times faster.
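As a rough illustration of predicting and aggregating per-frame Gaussians into an online stream, the sketch below appends each frame's predicted Gaussian centers to a running set while skipping near-duplicates. The proximity-based merge rule and `merge_radius` are illustrative assumptions, not StreamGS's actual aggregation mechanism.

```python
import torch

def aggregate_stream(existing_centers, frame_centers, merge_radius=0.01):
    """Hypothetical online aggregation: keep new per-frame Gaussians only if
    they are farther than merge_radius from every existing primitive, so the
    global set grows without accumulating obvious duplicates."""
    if existing_centers.numel() == 0:
        return frame_centers
    # Distance from each new center to its nearest existing center.
    nearest = torch.cdist(frame_centers, existing_centers).min(dim=1).values
    keep = nearest > merge_radius
    return torch.cat([existing_centers, frame_centers[keep]], dim=0)
```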
arXiv Detail & Related papers (2025-03-08T14:35:39Z)
- PG-SAG: Parallel Gaussian Splatting for Fine-Grained Large-Scale Urban Buildings Reconstruction via Semantic-Aware Grouping [6.160345720038265]
We introduce a parallel Gaussian splatting method, termed PG-SAG, which fully exploits semantic cues for both partitioning and kernel optimization.
Experiments on various urban datasets demonstrate the superior performance of PG-SAG on building surface reconstruction.
arXiv Detail & Related papers (2025-01-03T07:40:16Z)
- ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery [11.706262924395768]
3D-GS often struggles to capture rich details and complete geometry. We introduce a novel densification method, residual split, which adds a downscaled Gaussian as a residual. Our approach is capable of adaptively retrieving details and complementing missing geometry while enabling progressive refinement.
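A minimal sketch of a residual-split style densification step follows: Gaussians selected for densification receive an additional downscaled copy at the same position rather than being replaced. The gradient-based selection criterion, `grad_threshold`, and `scale_factor` are assumptions for illustration, not the paper's exact settings.

```python
import torch

def residual_split(means, log_scales, opacities, grad_norm,
                   grad_threshold=2e-4, scale_factor=0.5):
    """Hypothetical residual densification: for Gaussians with a large
    view-space gradient, append a downscaled residual Gaussian at the same
    position while keeping the original."""
    mask = grad_norm > grad_threshold          # Gaussians to densify
    new_means = means[mask].clone()            # residuals share the position
    new_log_scales = log_scales[mask] + torch.log(torch.tensor(scale_factor))
    new_opacities = opacities[mask].clone()

    # Originals are kept; the residuals add detail on top of them.
    means = torch.cat([means, new_means], dim=0)
    log_scales = torch.cat([log_scales, new_log_scales], dim=0)
    opacities = torch.cat([opacities, new_opacities], dim=0)
    return means, log_scales, opacities
```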
arXiv Detail & Related papers (2024-12-10T13:19:27Z)
- DyGASR: Dynamic Generalized Exponential Splatting with Surface Alignment for Accelerated 3D Mesh Reconstruction [1.2891210250935148]
We propose DyGASR, which utilizes a generalized exponential function instead of the traditional 3D Gaussian to decrease the number of particles.
We also introduce Generalized Surface Regularization (GSR), which reduces the smallest scaling vector of each point cloud to zero.
Our approach surpasses existing 3DGS-based mesh reconstruction methods, demonstrating a 25% increase in speed and a 30% reduction in memory usage.
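The Generalized Surface Regularization mentioned above can be pictured as a penalty that drives the smallest scale of each primitive toward zero, flattening it onto the underlying surface. A minimal sketch, assuming per-primitive scales are stored as an (N, 3) tensor; the exact loss used by DyGASR may differ.

```python
import torch

def surface_regularization(scales):
    """Hypothetical surface-alignment penalty: push the smallest scaling
    component of each primitive toward zero so it collapses into a flat,
    surface-aligned disc. scales has shape (N, 3)."""
    min_scale = scales.min(dim=1).values   # smallest axis of each primitive
    return min_scale.abs().mean()          # drive it toward zero
```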
arXiv Detail & Related papers (2024-11-14T03:19:57Z)
- TranSplat: Generalizable 3D Gaussian Splatting from Sparse Multi-View Images with Transformers [14.708092244093665]
We develop a strategy that utilizes a predicted depth confidence map to guide accurate local feature matching.
We present a novel G-3DGS method named TranSplat, which obtains the best performance on both the RealEstate10K and ACID benchmarks.
arXiv Detail & Related papers (2024-08-25T08:37:57Z)
- LP-3DGS: Learning to Prune 3D Gaussian Splatting [71.97762528812187]
We propose learning-to-prune 3DGS (LP-3DGS), where a trainable binary mask applied to the importance score finds the optimal pruning ratio automatically.
Experiments have shown that LP-3DGS consistently produces a good balance that is both efficient and high quality.
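The trainable binary mask over importance scores can be sketched with a straight-through estimator, a common trick for learning discrete masks; the exact masking and relaxation used by LP-3DGS may differ.

```python
import torch

def learned_pruning_mask(mask_logits, importance):
    """Hypothetical learnable pruning: binarize a soft mask in the forward
    pass, keep gradients via a straight-through estimator, and apply it to
    each Gaussian's importance score."""
    soft = torch.sigmoid(mask_logits)      # soft mask in (0, 1)
    hard = (soft > 0.5).float()            # binary keep/prune decision
    mask = hard + soft - soft.detach()     # straight-through gradient
    return mask * importance               # masked importance scores
```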
arXiv Detail & Related papers (2024-05-29T05:58:34Z)
- SAGS: Structure-Aware 3D Gaussian Splatting [53.6730827668389]
We propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene.
SAGS achieves state-of-the-art rendering performance and reduced storage requirements on benchmark novel-view synthesis datasets.
arXiv Detail & Related papers (2024-04-29T23:26:30Z)
- InstantSplat: Sparse-view Gaussian Splatting in Seconds [91.77050739918037]
We introduce InstantSplat, a novel approach for addressing sparse-view 3D scene reconstruction at lightning-fast speed.
InstantSplat employs a self-supervised framework that optimizes the 3D scene representation and camera poses.
It achieves an acceleration of over 30x in reconstruction and improves visual quality (SSIM) from 0.3755 to 0.7624 compared to traditional SfM with 3D-GS.
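A hypothetical sketch of jointly optimizing the Gaussian parameters and camera poses with a photometric loss is shown below; the parameter layout, the stand-in renderer `render_fn`, and the optimizer settings are illustrative assumptions rather than InstantSplat's actual pipeline.

```python
import torch

def joint_refinement(gaussian_params, camera_poses, images, render_fn,
                     iters=200, lr=1e-3):
    """Hypothetical self-supervised refinement: jointly optimize scene
    (Gaussian) parameters and per-view camera poses by minimizing a
    photometric loss. All inputs are assumed to be trainable tensors."""
    optimizer = torch.optim.Adam(list(gaussian_params) + list(camera_poses), lr=lr)
    for _ in range(iters):
        optimizer.zero_grad()
        loss = 0.0
        for pose, image in zip(camera_poses, images):
            pred = render_fn(gaussian_params, pose)        # differentiable render
            loss = loss + (pred - image).abs().mean()      # photometric L1
        loss.backward()
        optimizer.step()
    return gaussian_params, camera_poses
```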
arXiv Detail & Related papers (2024-03-29T17:29:58Z)
- GaussianPro: 3D Gaussian Splatting with Progressive Propagation [49.918797726059545]
3DGS relies heavily on the point cloud produced by Structure-from-Motion (SfM) techniques.
We propose a novel method that applies a progressive propagation strategy to guide the densification of the 3D Gaussians.
Our method significantly surpasses 3DGS on the dataset, exhibiting an improvement of 1.15dB in terms of PSNR.
arXiv Detail & Related papers (2024-02-22T16:00:20Z)
- GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, which is the first to utilize a 3D Gaussian representation in a Simultaneous Localization and Mapping (SLAM) system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z)