StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting
- URL: http://arxiv.org/abs/2410.04354v2
- Date: Sat, 19 Oct 2024 09:45:46 GMT
- Title: StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting
- Authors: Xiao Cui, Weicai Ye, Yifan Wang, Guofeng Zhang, Wengang Zhou, Houqiang Li
- Abstract summary: StreetSurfGS is the first method to employ Gaussian Splatting specifically tailored for scalable urban street scene surface reconstruction.
StreetSurfGS utilizes a planar-based octree representation and segmented training to reduce memory costs, accommodate unique camera characteristics, and ensure scalability.
To address sparse views and multi-scale challenges, we use a dual-step matching strategy that leverages adjacent and long-term information.
- Score: 85.67616000086232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing urban street scenes is crucial for applications such as autonomous driving and urban planning. These scenes are characterized by long and narrow camera trajectories, occlusion, complex object relationships, and data sparsity across multiple scales. Despite recent advancements, existing surface reconstruction methods, which are primarily designed for object-centric scenarios, struggle to adapt effectively to the unique characteristics of street scenes. To address this challenge, we introduce StreetSurfGS, the first method to employ Gaussian Splatting specifically tailored for scalable urban street scene surface reconstruction. StreetSurfGS utilizes a planar-based octree representation and segmented training to reduce memory costs, accommodate unique camera characteristics, and ensure scalability. Additionally, to mitigate depth inaccuracies caused by object overlap, we propose a guided smoothing strategy within regularization to eliminate inaccurate boundary points and outliers. Furthermore, to address sparse views and multi-scale challenges, we use a dual-step matching strategy that leverages adjacent and long-term information. Extensive experiments validate the efficacy of StreetSurfGS in both novel view synthesis and surface reconstruction.
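To make the two core ideas in the abstract more concrete, the minimal sketch below illustrates, in plain Python/NumPy, how a long, narrow camera trajectory might be split into overlapping segments for segmented training, and how a 3D Gaussian can be biased toward a planar primitive by shrinking its smallest scale axis. This is an illustrative sketch under simplified assumptions, not the authors' implementation; the function names and parameters (split_trajectory, flatten_gaussian, segment_len, overlap, flatten_ratio) are hypothetical.

```python
# Hypothetical sketch (not StreetSurfGS code) of two ideas from the abstract:
# (1) splitting a long street trajectory into overlapping segments so each
#     segment can be trained with bounded memory, and
# (2) "flattening" a 3D Gaussian toward a plane by shrinking its smallest
#     scale axis, as planar-based Gaussian representations commonly do.
import numpy as np

def split_trajectory(cam_positions, segment_len=150, overlap=30):
    """Split an ordered camera trajectory into overlapping (start, end) index ranges.

    Overlapping frames allow adjacent segments to be fused without visible seams.
    """
    segments, start, n = [], 0, len(cam_positions)
    while start < n:
        end = min(start + segment_len, n)
        segments.append((start, end))
        if end == n:
            break
        start = end - overlap
    return segments

def flatten_gaussian(scale, flatten_ratio=0.1):
    """Push one Gaussian toward a disk-like (planar) shape.

    `scale` holds the three axis lengths of a Gaussian; the smallest axis is
    capped relative to the other two so the primitive approximates a surface
    element rather than a volumetric blob.
    """
    scale = np.asarray(scale, dtype=np.float64).copy()
    smallest = int(np.argmin(scale))
    others = np.delete(scale, smallest)
    scale[smallest] = min(scale[smallest], flatten_ratio * others.mean())
    return scale

if __name__ == "__main__":
    # Fake a 1 km straight street trajectory sampled every 2 m.
    traj = np.stack([np.linspace(0, 1000, 501), np.zeros(501), np.zeros(501)], axis=1)
    print(split_trajectory(traj)[:3])          # first few overlapping segments
    print(flatten_gaussian([0.4, 0.5, 0.3]))   # smallest axis shrunk toward a plane
```

The paper's planar-based octree representation, guided smoothing, and dual-step matching operate on far richer structures than this, but the segment overlap and scale flattening above convey the intuition of bounding per-segment memory while biasing primitives toward surfaces.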
Related papers
- CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes [53.107474952492396]
CityGaussianV2 is a novel approach for large-scale scene reconstruction.
We implement a decomposed-gradient-based densification and depth regression technique to eliminate blurry artifacts and accelerate convergence.
Our method strikes a promising balance among visual quality, geometric accuracy, and storage and training costs.
arXiv Detail & Related papers (2024-11-01T17:59:31Z) - GigaGS: Scaling up Planar-Based 3D Gaussians for Large Scene Surface Reconstruction [71.08607897266045]
3D Gaussian Splatting (3DGS) has shown promising performance in novel view synthesis.
We make the first attempt to tackle the challenging task of large-scale scene surface reconstruction.
We propose GigaGS, the first work to achieve high-quality surface reconstruction for large-scale scenes using 3DGS.
arXiv Detail & Related papers (2024-09-10T17:51:39Z) - Simultaneous Map and Object Reconstruction [66.66729715211642]
We present a method for dynamic surface reconstruction of large-scale urban scenes from LiDAR.
We take inspiration from recent novel view synthesis methods and pose the reconstruction problem as a global optimization.
By careful modeling of continuous-time motion, our reconstructions can compensate for the rolling shutter effects of rotating LiDAR sensors.
arXiv Detail & Related papers (2024-06-19T23:53:31Z) - RoGS: Large Scale Road Surface Reconstruction based on 2D Gaussian Splatting [11.471631481453715]
Road surface reconstruction plays a crucial role in autonomous driving.
We propose a novel large-scale road surface reconstruction approach based on 2D Gaussian Splatting (2DGS), named RoGS.
We achieve excellent results in reconstructing road surfaces across a variety of challenging real-world scenes.
arXiv Detail & Related papers (2024-05-23T09:11:47Z) - EMIE-MAP: Large-Scale Road Surface Reconstruction Based on Explicit Mesh and Implicit Encoding [21.117919848535422]
EMIE-MAP is a novel method for large-scale road surface reconstruction based on explicit mesh and implicit encoding.
Our method achieves remarkable road surface reconstruction performance in a variety of real-world challenging scenarios.
arXiv Detail & Related papers (2024-03-18T13:46:52Z) - GeoGaussian: Geometry-aware Gaussian Splatting for Scene Rendering [83.19049705653072]
During the Gaussian Splatting optimization process, the scene's geometry can gradually deteriorate if its structure is not deliberately preserved.
We propose a novel approach called GeoGaussian to mitigate this issue.
Our proposed pipeline achieves state-of-the-art performance in novel view synthesis and geometric reconstruction.
arXiv Detail & Related papers (2024-03-17T20:06:41Z) - SCILLA: SurfaCe Implicit Learning for Large Urban Area, a volumetric hybrid solution [4.216707699421813]
SCILLA is a new hybrid implicit surface learning method to reconstruct large driving scenes from 2D images.
We show that SCILLA can learn an accurate and detailed 3D surface scene representation in various urban scenarios.
arXiv Detail & Related papers (2024-03-15T14:31:17Z) - Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z) - StreetSurf: Extending Multi-view Implicit Surface Reconstruction to Street Views [6.35910814268525]
We present a novel multi-view implicit surface reconstruction technique, termed StreetSurf.
It is readily applicable to street view images in widely-used autonomous driving datasets, without necessarily requiring LiDAR data.
We achieve state-of-the-art reconstruction quality in both geometry and appearance within only one to two hours of training time.
arXiv Detail & Related papers (2023-06-08T07:19:27Z) - Automated Urban Planning aware Spatial Hierarchies and Human Instructions [33.06221365923015]
We propose a novel, deep, human-instructed urban planner based on generative adversarial networks (GANs).
GANs build urban functional zones based on information from human instructions and surrounding contexts.
We conduct extensive experiments to validate the efficacy of our work.
arXiv Detail & Related papers (2022-09-26T20:37:02Z)