3DGS-LSR: Large-Scale Relocation for Autonomous Driving Based on 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2507.05661v1
- Date: Tue, 08 Jul 2025 04:43:46 GMT
- Title: 3DGS-LSR: Large-Scale Relocation for Autonomous Driving Based on 3D Gaussian Splatting
- Authors: Haitao Lu, Haijier Chen, Haoze Liu, Shoujian Zhang, Bo Xu, Ziao Liu,
- Abstract summary: 3DGS-LSR: a large-scale relocalization framework leveraging 3DGS, enabling centimeter-level positioning using only a single monocular RGB image on the client side. We combine multi-sensor data to construct high-accuracy 3DGS maps in large outdoor scenes, while the robot-side localization requires just a standard camera input. Our core innovation is an iterative optimization strategy that refines localization results through step-by-step rendering, making it suitable for real-time autonomous navigation.
- Score: 3.2768514034762863
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In autonomous robotic systems, precise localization is a prerequisite for safe navigation. However, in complex urban environments, GNSS positioning often suffers from signal occlusion and multipath effects, leading to unreliable absolute positioning. Traditional mapping approaches are constrained by storage requirements and computational inefficiency, limiting their applicability to resource-constrained robotic platforms. To address these challenges, we propose 3DGS-LSR: a large-scale relocalization framework leveraging 3D Gaussian Splatting (3DGS), enabling centimeter-level positioning using only a single monocular RGB image on the client side. We combine multi-sensor data to construct high-accuracy 3DGS maps in large outdoor scenes, while the robot-side localization requires just a standard camera input. Using SuperPoint and SuperGlue for feature extraction and matching, our core innovation is an iterative optimization strategy that refines localization results through step-by-step rendering, making it suitable for real-time autonomous navigation. Experimental validation on the KITTI dataset demonstrates our 3DGS-LSR achieves average positioning accuracies of 0.026m, 0.029m, and 0.081m in town roads, boulevard roads, and traffic-dense highways respectively, significantly outperforming other representative methods while requiring only monocular RGB input. This approach provides autonomous robots with reliable localization capabilities even in challenging urban environments where GNSS fails.
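The render-match-update loop described in the abstract can be illustrated with a toy sketch. This is not the paper's pipeline: the 3DGS renderer and the SuperPoint/SuperGlue matcher are replaced here by a stand-in circular-shift "renderer" and a phase-correlation matcher, and the 6-DoF camera pose is reduced to a 2D integer translation. All function names are hypothetical.

```python
import numpy as np

def render(map_img, pose):
    """Toy stand-in for 3DGS rendering: 'render' the map as seen from an
    integer 2D translation pose (tx, ty) by circularly shifting the image."""
    tx, ty = pose
    return np.roll(np.roll(map_img, ty, axis=0), tx, axis=1)

def match_offset(rendered, query):
    """Toy stand-in for SuperPoint/SuperGlue matching: estimate the residual
    shift between the rendered view and the query from the cross-correlation
    peak (valid only for this circular-shift renderer)."""
    f = np.fft.fft2(rendered) * np.conj(np.fft.fft2(query))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:  # map peak indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)

def relocalize(map_img, query, init_pose=(0, 0), iters=10):
    """Iterative refinement: render at the current pose estimate, match
    against the query image, and correct the pose until the residual is zero."""
    tx, ty = init_pose
    for _ in range(iters):
        rendered = render(map_img, (tx, ty))
        dx, dy = match_offset(rendered, query)
        if dx == 0 and dy == 0:
            break
        tx -= dx
        ty -= dy
    return tx, ty
```

The structure mirrors the abstract's description: each iteration re-renders the map at the current pose estimate and uses feature matching against the query image to produce a correction, so the estimate is refined step by step rather than regressed in one shot.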
Related papers
- IGL-Nav: Incremental 3D Gaussian Localization for Image-goal Navigation [78.00035681410348]
IGL-Nav is an incremental 3D Gaussian framework for efficient and 3D-aware image-goal navigation. It can handle the more challenging free-view image-goal setting and be deployed on real-world robotic platforms.
arXiv Detail & Related papers (2025-08-01T17:59:56Z)
- GSFusion: Globally Optimized LiDAR-Inertial-Visual Mapping for Gaussian Splatting [20.665306154754564]
We propose GSFusion, an online LiDAR-Inertial-Visual mapping system that ensures high-precision map consistency through a surfel-to-surfel constraint in the global pose-graph optimization. Experiments on public datasets and our own demonstrate that our system outperforms existing 3DGS SLAM systems in terms of rendering quality and map-building efficiency.
arXiv Detail & Related papers (2025-07-31T06:15:51Z)
- APR-Transformer: Initial Pose Estimation for Localization in Complex Environments through Absolute Pose Regression [3.2584852202495806]
In this paper, we introduce APR-Transformer, a model architecture inspired by state-of-the-art methods. We demonstrate that our proposed method achieves state-of-the-art performance on established benchmark datasets.
arXiv Detail & Related papers (2025-05-14T13:06:42Z)
- GS-LTS: 3D Gaussian Splatting-Based Adaptive Modeling for Long-Term Service Robots [33.19663755125912]
3D Gaussian Splatting (3DGS) has garnered significant attention in robotics for its explicit, high-fidelity dense scene representation. We propose GS-LTS (Gaussian Splatting for Long-Term Service), a 3DGS-based system enabling indoor robots to manage diverse tasks in dynamic environments over time.
arXiv Detail & Related papers (2025-03-22T11:26:47Z)
- GSplatLoc: Ultra-Precise Camera Localization via 3D Gaussian Splatting [0.0]
We present GSplatLoc, a camera localization method that leverages the differentiable rendering capabilities of 3D Gaussian splatting for ultra-precise pose estimation. GSplatLoc sets a new benchmark for localization in dense mapping, with important implications for applications requiring accurate real-time localization, such as robotics and augmented reality.
arXiv Detail & Related papers (2024-12-28T07:14:14Z)
- GSPR: Multimodal Place Recognition Using 3D Gaussian Splatting for Autonomous Driving [9.023864430027333]
We propose a 3D Gaussian Splatting-based multimodal place recognition network dubbed GSPR. It explicitly combines multi-view RGB images and LiDAR point clouds into a spatio-temporally unified scene representation with the Multimodal Gaussian Splatting. Our method can effectively leverage the complementary strengths of both multi-view cameras and LiDAR, achieving SOTA place recognition performance while maintaining solid generalization ability.
arXiv Detail & Related papers (2024-10-01T00:43:45Z)
- SplatLoc: 3D Gaussian Splatting-based Visual Localization for Augmented Reality [50.179377002092416]
We propose an efficient visual localization method capable of high-quality rendering with fewer parameters.
Our method achieves superior or comparable rendering and localization performance to state-of-the-art implicit-based visual localization approaches.
arXiv Detail & Related papers (2024-09-21T08:46:16Z)
- GOI: Find 3D Gaussians of Interest with an Optimizable Open-vocabulary Semantic-space Hyperplane [53.388937705785025]
3D open-vocabulary scene understanding is crucial for advancing augmented reality and robotic applications.
We introduce GOI, a framework that integrates semantic features from 2D vision-language foundation models into 3D Gaussian Splatting (3DGS)
Our method treats the feature selection process as a hyperplane division within the feature space, retaining only features that are highly relevant to the query.
arXiv Detail & Related papers (2024-05-27T18:57:18Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Automated classification of pre-defined movement patterns: A comparison between GNSS and UWB technology [55.41644538483948]
Real-time location systems (RTLS) allow for collecting data from human movement patterns.
The current study aims to design and evaluate an automated framework to classify human movement patterns in small areas.
arXiv Detail & Related papers (2023-03-10T14:46:42Z)
- Progressive Coordinate Transforms for Monocular 3D Object Detection [52.00071336733109]
We propose a novel and lightweight approach, dubbed Progressive Coordinate Transforms (PCT), to facilitate learning coordinate representations.
arXiv Detail & Related papers (2021-08-12T15:22:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.