S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction
- URL: http://arxiv.org/abs/2503.08217v1
- Date: Tue, 11 Mar 2025 09:37:13 GMT
- Title: S3R-GS: Streamlining the Pipeline for Large-Scale Street Scene Reconstruction
- Authors: Guangting Zheng, Jiajun Deng, Xiaomeng Chu, Yu Yuan, Houqiang Li, Yanyong Zhang
- Abstract summary: 3D Gaussian Splatting (3DGS) has reshaped the field of 3D reconstruction, achieving impressive rendering quality and speed. Existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases. We propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction.
- Score: 58.37746062258149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, 3D Gaussian Splatting (3DGS) has reshaped the field of photorealistic 3D reconstruction, achieving impressive rendering quality and speed. However, when applied to large-scale street scenes, existing methods suffer from rapidly escalating per-viewpoint reconstruction costs as scene size increases, leading to significant computational overhead. After revisiting the conventional pipeline, we identify three key factors accounting for this issue: unnecessary local-to-global transformations, excessive 3D-to-2D projections, and inefficient rendering of distant content. To address these challenges, we propose S3R-GS, a 3DGS framework that Streamlines the pipeline for large-scale Street Scene Reconstruction, effectively mitigating these limitations. Moreover, most existing street 3DGS methods rely on ground-truth 3D bounding boxes to separate dynamic and static components, but 3D bounding boxes are difficult to obtain, limiting real-world applicability. To address this, we propose an alternative solution with 2D boxes, which are easier to annotate or can be predicted by off-the-shelf vision foundation models. Such designs together make S3R-GS readily adapt to large, in-the-wild scenarios. Extensive experiments demonstrate that S3R-GS enhances rendering quality and significantly accelerates reconstruction. Remarkably, when applied to videos from the challenging Argoverse2 dataset, it achieves state-of-the-art PSNR and SSIM, reducing reconstruction time to below 50%--and even 20%--of competing methods.
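As a rough illustration of the bottleneck the abstract points to (excessive 3D-to-2D projections and inefficient rendering of distant content), the sketch below shows a generic per-viewpoint culling pass that skips Gaussians outside a loose frustum or beyond a depth cutoff before projection. This is a minimal NumPy sketch under assumed conventions (camera looking down +z, a hypothetical `cull_gaussians` helper with made-up `max_depth`/`fov_margin` parameters); it is not the authors' implementation, only a way to see why per-viewpoint cost need not grow with total scene size.

```python
# Hypothetical illustration (not the S3R-GS code): cull Gaussians per viewpoint
# before the 3D-to-2D projection step, so distant or out-of-frustum content in a
# large street scene does not inflate the per-viewpoint reconstruction cost.
import numpy as np

def cull_gaussians(means_world: np.ndarray,
                   world_to_cam: np.ndarray,
                   max_depth: float = 200.0,
                   fov_margin: float = 1.2) -> np.ndarray:
    """Return a boolean mask of Gaussians worth projecting for this viewpoint.

    means_world : (N, 3) Gaussian centers in world coordinates.
    world_to_cam: (4, 4) rigid transform from world to camera frame.
    max_depth   : Gaussians farther than this along the view axis are skipped.
    fov_margin  : loose frustum bound, |x| and |y| must stay below margin * depth.
    """
    # Transform centers into the camera frame (homogeneous coordinates).
    homo = np.concatenate([means_world, np.ones((means_world.shape[0], 1))], axis=1)
    cam = (world_to_cam @ homo.T).T[:, :3]

    depth = cam[:, 2]
    in_front = depth > 0.1                  # drop Gaussians behind the camera
    near_enough = depth < max_depth         # distant content: skip or render coarsely
    in_frustum = (np.abs(cam[:, 0]) < fov_margin * depth) & \
                 (np.abs(cam[:, 1]) < fov_margin * depth)
    return in_front & near_enough & in_frustum

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = rng.uniform(-500, 500, size=(100_000, 3))   # toy street-scale scene
    pose = np.eye(4)                                     # camera at the world origin
    mask = cull_gaussians(means, pose)
    print(f"kept {mask.sum()} / {len(mask)} Gaussians for this viewpoint")
```

Only the surviving subset would then be projected and rasterized; the point of the sketch is that the per-viewpoint workload scales with what the camera can actually see rather than with the full scene.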
Related papers
- FreeSplat++: Generalizable 3D Gaussian Splatting for Efficient Indoor Scene Reconstruction [50.534213038479926]
FreeSplat++ is an alternative approach to large-scale indoor whole-scene reconstruction.
Our method with depth-regularized per-scene fine-tuning demonstrates substantial improvements in reconstruction accuracy and a notable reduction in training time.
arXiv Detail & Related papers (2025-03-29T06:22:08Z) - CoSurfGS: Collaborative 3D Surface Gaussian Splatting with Distributed Learning for Large Scene Reconstruction [68.81212850946318]
We propose a multi-agent collaborative fast 3DGS surface reconstruction framework based on distributed learning for large-scale surface reconstruction. Specifically, we develop local model compression (LMC) and model aggregation schemes (MAS) to achieve high-quality surface representation of large scenes. Our proposed method can achieve fast and scalable high-fidelity surface reconstruction and photorealistic rendering.
arXiv Detail & Related papers (2024-12-23T14:31:15Z) - T-3DGS: Removing Transient Objects for 3D Scene Reconstruction [83.05271859398779]
Transient objects in video sequences can significantly degrade the quality of 3D scene reconstructions. We propose T-3DGS, a novel framework that robustly filters out transient distractors during 3D reconstruction using Gaussian Splatting.
arXiv Detail & Related papers (2024-11-29T07:45:24Z) - Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results.
3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z) - 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors [13.191199172286508]
Novel-view synthesis aims to generate novel views of a scene from multiple input images or videos.
3DGS-Enhancer is a novel pipeline for enhancing the representation quality of 3DGS.
arXiv Detail & Related papers (2024-10-21T17:59:09Z) - SpikeGS: Reconstruct 3D scene via fast-moving bio-inspired sensors [25.51366779254847]
Spike Gaussian Splatting (SpikeGS) is a framework that integrates spike streams into the 3DGS pipeline to reconstruct 3D scenes via a fast-moving bio-inspired camera. SpikeGS extracts detailed geometry and texture from a high-temporal-resolution but texture-lacking spike stream, reconstructing 3D scenes captured within 1 second.
arXiv Detail & Related papers (2024-07-04T09:32:12Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z)