D$^2$GS: Dense Depth Regularization for LiDAR-free Urban Scene Reconstruction
- URL: http://arxiv.org/abs/2510.25173v2
- Date: Sun, 02 Nov 2025 05:42:18 GMT
- Title: D$^2$GS: Dense Depth Regularization for LiDAR-free Urban Scene Reconstruction
- Authors: Kejing Xia, Jidong Jia, Ke Jin, Yucai Bai, Li Sun, Dacheng Tao, Youjian Zhang
- Abstract summary: We propose D$^2$GS, a LiDAR-free urban scene reconstruction framework. We obtain geometry priors that are as effective as LiDAR while being denser and more accurate. Our method consistently outperforms state-of-the-art methods.
- Score: 42.71951611524765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Gaussian Splatting (GS) has shown great potential for urban scene reconstruction in the field of autonomous driving. However, current urban scene reconstruction methods often depend on multimodal sensors as inputs, \textit{i.e.} LiDAR and images. Though the geometry prior provided by LiDAR point clouds can largely mitigate ill-posedness in reconstruction, acquiring such accurate LiDAR data is still challenging in practice: i) precise spatiotemporal calibration between LiDAR and other sensors is required, as they may not capture data simultaneously; ii) reprojection errors arise from spatial misalignment when LiDAR and cameras are mounted at different locations. To avoid the difficulty of acquiring accurate LiDAR depth, we propose D$^2$GS, a LiDAR-free urban scene reconstruction framework. In this work, we obtain geometry priors that are as effective as LiDAR while being denser and more accurate. $\textbf{First}$, we initialize a dense point cloud by back-projecting multi-view metric depth predictions. This point cloud is then optimized by a Progressive Pruning strategy to improve the global consistency. $\textbf{Second}$, we jointly refine Gaussian geometry and predicted dense metric depth via a Depth Enhancer. Specifically, we leverage diffusion priors from a depth foundation model to enhance the depth maps rendered by Gaussians. In turn, the enhanced depths provide stronger geometric constraints during Gaussian training. $\textbf{Finally}$, we improve the accuracy of ground geometry by constraining the shape and normal attributes of Gaussians within road regions. Extensive experiments on the Waymo dataset demonstrate that our method consistently outperforms state-of-the-art methods, producing more accurate geometry even when compared with those using ground-truth LiDAR data.
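The first step of the pipeline, back-projecting multi-view metric depth predictions into a dense point cloud, follows the standard pinhole unprojection. The sketch below is not the authors' implementation; it is a minimal NumPy illustration assuming a known intrinsic matrix `K` and camera-to-world pose `c2w`, with the function name `backproject_depth` chosen here for illustration:

```python
import numpy as np

def backproject_depth(depth, K, c2w):
    """Back-project a metric depth map into world-space 3D points.

    depth: (H, W) metric depth in meters
    K:     (3, 3) pinhole camera intrinsics
    c2w:   (4, 4) camera-to-world extrinsic matrix
    """
    H, W = depth.shape
    # Pixel grid in (u, v) image coordinates
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    # Unproject pixels to camera-space rays, then scale by metric depth
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    # Lift to homogeneous coordinates and transform into world space
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    return (c2w @ cam_h.T).T[:, :3]

# Toy example: a 48x64 depth map at a constant 2 m, identity pose
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
pts = backproject_depth(np.full((48, 64), 2.0), K, np.eye(4))
```

Running this per view and concatenating the resulting points yields the dense initialization that the Progressive Pruning stage then filters for global consistency.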
Related papers
- LiDAR-GS++: Improving LiDAR Gaussian Reconstruction via Diffusion Priors [51.724649822336346]
We present LiDAR-GS++, a reconstruction method enhanced by diffusion priors for real-time and high-fidelity re-simulation. Specifically, we introduce a controllable LiDAR generation model conditioned on coarsely extrapolated rendering to produce extra geometry-consistent scans. By extending reconstruction to under-fitted regions, our approach ensures global geometric consistency for extrapolative novel views.
arXiv Detail & Related papers (2025-11-15T17:33:12Z) - GauSSmart: Enhanced 3D Reconstruction through 2D Foundation Models and Geometric Filtering [50.675710727721786]
We propose GauSSmart, a hybrid method that bridges 2D foundational models and 3D Gaussian Splatting reconstruction. Our approach integrates established 2D computer vision techniques, including convex filtering and semantic feature supervision. We validate our approach across three datasets, where GauSSmart consistently outperforms existing Gaussian Splatting.
arXiv Detail & Related papers (2025-10-16T03:38:26Z) - Splat-LOAM: Gaussian Splatting LiDAR Odometry and Mapping [13.068061145084707]
We build on recent advancements in Gaussian Splatting methods to develop a novel LiDAR odometry and mapping pipeline. Our approach matches the current registration performance, while achieving SOTA results for mapping tasks with minimal GPU requirements.
arXiv Detail & Related papers (2025-03-21T19:00:30Z) - GS-SDF: LiDAR-Augmented Gaussian Splatting and Neural SDF for Geometrically Consistent Rendering and Reconstruction [12.293953058837653]
Digital twins are fundamental to the development of autonomous driving and embodied artificial intelligence. We propose a unified LiDAR-visual system that synergizes Gaussian splatting with a neural signed distance field. Experiments demonstrate superior reconstruction accuracy and rendering quality across diverse trajectories.
arXiv Detail & Related papers (2025-03-13T08:53:38Z) - Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction [11.623790902144165]
2D Gaussian Splatting (2DGS) has demonstrated superior geometry reconstruction quality than the popular 3DGS. However, it falls short when dealing with glossy surfaces, resulting in visible holes in these areas. We find that reflection discontinuity causes this issue: to fit the jump from diffuse to specular reflection at different viewing angles, a depth bias is introduced into the optimized Gaussian primitives.
arXiv Detail & Related papers (2025-03-09T12:38:01Z) - CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes [53.107474952492396]
CityGaussianV2 is a novel approach for large-scale scene reconstruction. We implement a decomposed-gradient-based densification and depth regression technique to eliminate blurry artifacts and accelerate convergence. Our method strikes a promising balance between visual quality, geometric accuracy, and storage and training costs.
arXiv Detail & Related papers (2024-11-01T17:59:31Z) - LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [53.58528891081709]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z) - UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation [51.443788294845845]
We present UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR generation, and LiDAR manipulation.
We show that by aligning the representation of a sparse point cloud to that of a dense point cloud, we can densify the sparse point clouds.
By learning a prior over the discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving.
arXiv Detail & Related papers (2023-11-02T17:57:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.