Keyframe-based Dense Mapping with the Graph of View-Dependent Local Maps
- URL: http://arxiv.org/abs/2601.08520v1
- Date: Tue, 13 Jan 2026 13:02:22 GMT
- Authors: Krzysztof Zielinski, Dominik Belter
- Abstract summary: The proposed method updates local Normal Distributions Transform (NDT) maps using data from an RGB-D sensor. The cells of the NDT are stored in 2D view-dependent structures to better utilize the properties and uncertainty model of RGB-D cameras.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we propose a new keyframe-based mapping system. The proposed method updates local Normal Distributions Transform (NDT) maps using data from an RGB-D sensor. The cells of the NDT are stored in 2D view-dependent structures to better utilize the properties and uncertainty model of RGB-D cameras. This method naturally represents an object closer to the camera origin with higher precision. The local maps are stored in a pose graph, which allows correcting the global map after loop closure detection. We also propose a procedure that merges and filters local maps to obtain a global map of the environment. Finally, we compare our method with OctoMap and NDT-OM and provide example applications of the proposed mapping method.
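The per-cell update described in the abstract (folding each RGB-D point into the Gaussian of its NDT cell) can be sketched as below. This is a minimal illustration using a Welford-style running mean/covariance; the class name and update scheme are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

class NDTCell:
    """Incremental Gaussian statistics for one NDT cell (illustrative sketch)."""
    def __init__(self):
        self.n = 0                   # number of points folded in so far
        self.mean = np.zeros(3)      # running mean of the cell's points
        self.m2 = np.zeros((3, 3))   # running sum of outer products of deviations

    def update(self, point):
        """Fold one 3D point into the running mean/covariance (Welford update)."""
        p = np.asarray(point, dtype=float)
        self.n += 1
        delta = p - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, p - self.mean)

    def covariance(self):
        """Sample covariance of the cell; undefined for fewer than 2 points."""
        return self.m2 / (self.n - 1) if self.n > 1 else None
```

In a view-dependent layout such as the one described above, each cell would be keyed by a 2D bin in the image plane, so that cells near the camera origin cover less space and hence represent close objects with higher precision.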
Related papers
- PointMapPolicy: Structured Point Cloud Processing for Multi-Modal Imitation Learning [35.5287060355186]
Current point cloud methods struggle to capture fine-grained detail, especially for complex tasks. We introduce PointMapPolicy, a novel approach that conditions diffusion policies on structured grids of points. Our model efficiently fuses the point maps with RGB data for enhanced multi-modal perception.
arXiv Detail & Related papers (2025-10-23T10:17:01Z) - Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians [87.48403838439391]
3D Gaussian Splatting has emerged as a powerful representation of geometry and appearance for RGB-only dense Simultaneous Localization and Mapping (SLAM).
We propose the first RGB-only SLAM system with a dense 3D Gaussian map representation.
Our experiments on the Replica, TUM-RGBD, and ScanNet datasets indicate the effectiveness of globally optimized 3D Gaussians.
arXiv Detail & Related papers (2024-05-26T12:26:54Z) - 3DGS-ReLoc: 3D Gaussian Splatting for Map Representation and Visual ReLocalization [13.868258945395326]
This paper presents a novel system designed for 3D mapping and visual relocalization using 3D Gaussian Splatting.
Our proposed method uses LiDAR and camera data to create accurate and visually plausible representations of the environment.
arXiv Detail & Related papers (2024-03-17T23:06:12Z) - Loopy-SLAM: Dense Neural SLAM with Loop Closures [53.11936461015725]
We introduce Loopy-SLAM, which globally optimizes poses and the dense 3D model.
We perform frame-to-model tracking with a data-driven point-based submap generation method and trigger loop closures online via global place recognition.
Evaluation on the synthetic Replica and real-world TUM-RGBD and ScanNet datasets demonstrates competitive or superior performance in tracking, mapping, and rendering accuracy compared to existing dense neural RGB-D SLAM methods.
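Loop closure corrections of the kind used here (and in the pose graph of the main paper) can be illustrated with a toy translation-only pose graph. Everything in this sketch (the function name, the 1D simplification, the equally weighted least-squares formulation) is an assumption for illustration, not either paper's actual optimizer:

```python
import numpy as np

def optimize_chain(odometry, loop):
    """Translation-only (1D) pose-graph sketch: unknowns x_0..x_N with
    odometry constraints x_{k+1} - x_k = odometry[k] and one loop-closure
    edge (i, j, d) meaning x_j - x_i = d. Solved as equally weighted
    linear least squares with x_0 softly anchored at 0 (gauge freedom)."""
    n = len(odometry) + 1
    rows, rhs = [], []
    for k, d in enumerate(odometry):            # one row per odometry edge
        r = np.zeros(n); r[k + 1], r[k] = 1.0, -1.0
        rows.append(r); rhs.append(d)
    i, j, d = loop                              # the loop-closure edge
    r = np.zeros(n); r[j], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(d)
    r = np.zeros(n); r[0] = 1.0                 # pin x_0 near 0
    rows.append(r); rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

For example, four odometry steps of 1.0 that should return to the start, `optimize_chain([1, 1, 1, 1], (0, 4, 0.0))`, spread the loop residual evenly over the chain, giving approximately `[0, 0.2, 0.4, 0.6, 0.8]`.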
arXiv Detail & Related papers (2024-02-14T18:18:32Z) - Dense RGB SLAM with Neural Implicit Maps [34.37572307973734]
We present a dense RGB SLAM method with neural implicit map representation.
Our method simultaneously solves the camera motion and the neural implicit map by matching the rendered and input video frames.
Our method achieves more favorable results than previous methods and even surpasses some recent RGB-D SLAM methods.
arXiv Detail & Related papers (2023-01-21T09:54:07Z) - HPointLoc: Point-based Indoor Place Recognition using Synthetic RGB-D Images [58.720142291102135]
We present a novel dataset named HPointLoc, specially designed for exploring the capabilities of visual place recognition in indoor environments.
The dataset is based on the popular Habitat simulator, in which indoor scenes can be generated using both custom sensor data and open datasets.
arXiv Detail & Related papers (2022-12-30T12:20:56Z) - On the Limits of Pseudo Ground Truth in Visual Camera Re-localisation [83.29404673257328]
Re-localisation benchmarks measure how well each method replicates the results of a reference algorithm.
This raises the question of whether the choice of the reference algorithm favours a certain family of re-localisation methods.
This paper analyzes two widely used re-localisation datasets and shows that evaluation outcomes indeed vary with the choice of the reference algorithm.
arXiv Detail & Related papers (2021-09-01T12:01:08Z) - Object-Augmented RGB-D SLAM for Wide-Disparity Relocalisation [3.888848425698769]
We propose a novel object-augmented RGB-D SLAM system that is capable of constructing a consistent object map and performing relocalisation based on centroids of objects in the map.
arXiv Detail & Related papers (2021-08-05T11:02:25Z) - Gaussian Process Gradient Maps for Loop-Closure Detection in Unstructured Planetary Environments [17.276441789710574]
The ability to recognize previously mapped locations is an essential feature for autonomous systems.
Unstructured planetary-like environments pose a major challenge to these systems due to the similarity of the terrain.
This paper presents a method to solve the loop closure problem using only spatial information.
arXiv Detail & Related papers (2020-09-01T04:41:40Z) - Graph-PCNN: Two Stage Human Pose Estimation with Graph Pose Refinement [54.29252286561449]
We propose a two-stage graph-based and model-agnostic framework, called Graph-PCNN.
In the first stage, a heatmap regression network is applied to obtain a rough localization result, and a set of proposal keypoints, called guided points, is sampled.
In the second stage, a different visual feature is extracted for each guided point based on the localization result.
The relationship between guided points is explored by the graph pose refinement module to obtain more accurate localization results.
arXiv Detail & Related papers (2020-07-21T04:59:15Z) - Voxel Map for Visual SLAM [57.07800982410967]
We propose a voxel-map representation to efficiently map points for visual SLAM.
Points in our map are geometrically guaranteed to fall in the camera field of view, and occluded points can be identified and removed to a certain extent.
Experimental results show that our voxel-map representation is as efficient as a map with 5 keyframes and provides significantly higher localization accuracy (average 46% improvement in RMSE) on the EuRoC dataset.
arXiv Detail & Related papers (2020-03-04T18:39:14Z)
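The field-of-view guarantee mentioned for the voxel map above rests on a standard visibility test: transform a map point into the camera frame and check that its pinhole projection lands inside the image. A minimal sketch of such a test follows; the function name, pinhole model, and intrinsics are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def in_fov(point_w, T_cw, K, width, height):
    """Return True if a world-frame 3D point projects inside the image.
    T_cw: 4x4 world-to-camera transform; K: 3x3 pinhole intrinsics."""
    p_c = T_cw @ np.append(point_w, 1.0)   # point in the camera frame
    if p_c[2] <= 0.0:                      # behind the image plane
        return False
    u, v, w = K @ p_c[:3]                  # perspective projection
    u, v = u / w, v / w
    return 0.0 <= u < width and 0.0 <= v < height
```

A point one meter straight ahead of an identity-pose camera with principal point (320, 240) projects to the image center and passes the test, while any point behind the camera is rejected immediately.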