OpenNavMap: Structure-Free Topometric Mapping via Large-Scale Collaborative Localization
- URL: http://arxiv.org/abs/2601.12291v1
- Date: Sun, 18 Jan 2026 07:24:46 GMT
- Title: OpenNavMap: Structure-Free Topometric Mapping via Large-Scale Collaborative Localization
- Authors: Jianhao Jiao, Changkun Liu, Jingwen Yu, Boyi Liu, Qianyi Zhang, Yue Wang, Dimitrios Kanoulas,
- Abstract summary: OpenNavMap is a lightweight, structure-free topometric system leveraging 3D geometric foundation models for on-demand reconstruction. Our method unifies dynamic programming-based sequence matching, geometric verification, and confidence-calibrated optimization to achieve robust, coarse-to-fine submap alignment. Evaluations on the Map-Free benchmark demonstrate superior accuracy over structure-from-motion and regression baselines, achieving an average translation error of 0.62m.
- Score: 12.686154192361913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scalable and maintainable map representations are fundamental to enabling large-scale visual navigation and facilitating the deployment of robots in real-world environments. While collaborative localization across multi-session mapping enhances efficiency, traditional structure-based methods struggle with high maintenance costs and fail in feature-less environments or under significant viewpoint changes typical of crowd-sourced data. To address this, we propose OpenNavMap, a lightweight, structure-free topometric system leveraging 3D geometric foundation models for on-demand reconstruction. Our method unifies dynamic programming-based sequence matching, geometric verification, and confidence-calibrated optimization to achieve robust, coarse-to-fine submap alignment without requiring pre-built 3D models. Evaluations on the Map-Free benchmark demonstrate superior accuracy over structure-from-motion and regression baselines, achieving an average translation error of 0.62m. Furthermore, the system maintains global consistency across 15km of multi-session data with an absolute trajectory error below 3m for map merging. Finally, we validate practical utility through 12 successful autonomous image-goal navigation tasks on simulated and physical robots. Code and datasets will be publicly available at https://rpl-cs-ucl.github.io/OpenNavMap_page.
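The abstract's "dynamic programming-based sequence matching" can be illustrated with a minimal DTW-style sketch: score a short query sequence of global image descriptors against a database sequence, then find the lowest-cost alignment path. The function name, the cosine-distance score, and the three-move transition model below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sequence_match(query_desc, db_desc):
    """Align a query descriptor sequence (nq x d) against a database
    sequence (nd x d) via dynamic programming; returns, for each query
    frame, the index of the matched database frame."""
    # Cosine distance matrix: rows = query frames, cols = database frames.
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    d = db_desc / np.linalg.norm(db_desc, axis=1, keepdims=True)
    cost = 1.0 - q @ d.T

    nq, nd = cost.shape
    acc = np.full((nq, nd), np.inf)
    acc[0] = cost[0]
    for i in range(1, nq):
        for j in range(1, nd):
            # Allowed moves: stay on the same db frame, advance by one,
            # or skip one frame (tolerates small speed differences).
            prev = min(acc[i - 1, j], acc[i - 1, j - 1],
                       acc[i - 1, j - 2] if j >= 2 else np.inf)
            acc[i, j] = cost[i, j] + prev

    # Backtrack from the cheapest terminal database frame.
    path = [int(np.argmin(acc[-1]))]
    for i in range(nq - 1, 0, -1):
        j = path[-1]
        choices = [(acc[i - 1, j], j)]
        if j >= 1:
            choices.append((acc[i - 1, j - 1], j - 1))
        if j >= 2:
            choices.append((acc[i - 1, j - 2], j - 2))
        path.append(min(choices)[1])
    return path[::-1]
```

In a full pipeline, such sequence-level matches would then be filtered by geometric verification before any pose optimization; enforcing temporal order in this way is what makes sequence matching more robust than single-image retrieval under viewpoint change.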
Related papers
- Interacted Planes Reveal 3D Line Mapping [73.60851338875962]
LiP-Map is a line-plane joint optimization framework for 3D line mapping. On more than 100 scenes from ScanNetV2, ScanNet++, Hypersim, 7Scenes, and Tanks & Temples, LiP-Map improves both accuracy and completeness over state-of-the-art methods.
arXiv Detail & Related papers (2026-02-01T15:52:55Z) - CSMapping: Scalable Crowdsourced Semantic Mapping and Topology Inference for Autonomous Driving [23.921417146230738]
CSMapping produces accurate semantic maps and topological road centerlines. Experiments on nuScenes, Argoverse 2, and a large proprietary dataset achieve state-of-the-art semantic and topological mapping performance.
arXiv Detail & Related papers (2025-12-03T07:06:18Z) - TALO: Pushing 3D Vision Foundation Models Towards Globally Consistent Online Reconstruction [57.46712611558817]
3D vision foundation models have shown strong generalization in reconstructing key 3D attributes from uncalibrated images through a single feed-forward pass. Recent strategies align consecutive predictions by solving for a global transformation, yet our analysis reveals their fundamental limitations in assumption validity, local alignment scope, and robustness under noisy geometry. We propose a higher-DOF, long-term alignment framework based on Thin Plate Splines, leveraging globally propagated control points to correct spatially varying inconsistencies.
arXiv Detail & Related papers (2025-12-02T02:22:20Z) - FOM-Nav: Frontier-Object Maps for Object Goal Navigation [65.76906445210112]
FOM-Nav is a framework that enhances exploration efficiency through Frontier-Object Maps and vision-language models. To train FOM-Nav, we automatically construct large-scale navigation datasets from real-world scanned environments. FOM-Nav achieves state-of-the-art performance on the MP3D and HM3D benchmarks, particularly on the navigation-efficiency metric SPL.
arXiv Detail & Related papers (2025-11-30T18:16:09Z) - Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
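The sparse-grid-plus-confidence idea above can be sketched with a toy fusion loop: store values only for observed voxels and fuse noisy observations by a confidence-weighted running average, in the spirit of TSDF fusion. The paper uses learned feature grids and a neural SDF; this dict-based class and its names are a simplified, hypothetical stand-in.

```python
import numpy as np

class SparseSDFGrid:
    """Minimal sparse voxel grid fusing noisy signed-distance observations
    with a per-voxel confidence (weight). Only observed voxels are stored,
    so memory grows with the observed surface, not the bounding volume."""

    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.sdf = {}     # voxel index -> fused signed distance
        self.weight = {}  # voxel index -> accumulated confidence

    def _key(self, point):
        # Discretize a 3D point into an integer voxel index.
        return tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))

    def integrate(self, point, sdf_value, confidence=1.0):
        """Confidence-weighted running average of the signed distance."""
        k = self._key(point)
        w = self.weight.get(k, 0.0)
        d = self.sdf.get(k, 0.0)
        self.weight[k] = w + confidence
        self.sdf[k] = (w * d + confidence * sdf_value) / (w + confidence)

    def query(self, point):
        # Returns None for voxels that were never observed.
        return self.sdf.get(self._key(point))
```

Down-weighting low-confidence observations is what lets such a map absorb incomplete, noisy submaps from many vehicles without corrupting well-observed regions.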
arXiv Detail & Related papers (2024-10-10T10:10:03Z) - GenMapping: Unleashing the Potential of Inverse Perspective Mapping for Robust Online HD Map Construction [20.1127163541618]
We have designed a universal map generation framework, GenMapping.
The framework is established with a triadic synergy architecture, including principal and dual auxiliary branches.
Extensive experimental results show that the proposed model surpasses current state-of-the-art methods in both semantic mapping and vectorized mapping, while maintaining a rapid inference speed.
arXiv Detail & Related papers (2024-09-13T10:15:28Z) - Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z) - Constructing Metric-Semantic Maps using Floor Plan Priors for Long-Term
Indoor Localization [29.404446814219202]
In this paper, we address the task of constructing a metric-semantic map for the purpose of long-term object-based localization.
We exploit 3D object detections from monocular RGB frames both for object-based map construction and for global localization in the constructed map.
We evaluate our map construction in an office building, and test our long-term localization approach on challenging sequences recorded in the same environment over nine months.
arXiv Detail & Related papers (2023-03-20T09:33:05Z) - SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit
Neural Representations [37.733802382489515]
This paper addresses the problems of achieving large-scale 3D reconstructions with implicit representations using 3D LiDAR measurements.
We learn and store implicit features through an octree-based hierarchical structure, which is sparse.
Our experiments show that our 3D reconstructions are more accurate, complete, and memory-efficient than current state-of-the-art 3D mapping methods.
arXiv Detail & Related papers (2022-10-05T14:38:49Z) - Lightweight Object-level Topological Semantic Mapping and Long-term
Global Localization based on Graph Matching [19.706907816202946]
We present a novel lightweight object-level mapping and localization method with high accuracy and robustness.
We use object-level features with both semantic and geometric information to model landmarks in the environment.
Based on the proposed map, the robust localization is achieved by constructing a novel local semantic scene graph descriptor.
arXiv Detail & Related papers (2022-01-16T05:47:07Z) - TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view
Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.