DUFOMap: Efficient Dynamic Awareness Mapping
- URL: http://arxiv.org/abs/2403.01449v2
- Date: Fri, 12 Apr 2024 08:40:55 GMT
- Title: DUFOMap: Efficient Dynamic Awareness Mapping
- Authors: Daniel Duberg, Qingwen Zhang, MingKai Jia, Patric Jensfelt
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The dynamic nature of the real world is one of the main challenges in robotics. The first step in dealing with it is to detect which parts of the world are dynamic. A typical benchmark task is to create a map that contains only the static part of the world to support, for example, localization and planning. Current solutions are often applied in post-processing, where parameter tuning allows the user to adjust the setting for a specific dataset. In this paper, we propose DUFOMap, a novel dynamic awareness mapping framework designed for efficient online processing. Despite having the same parameter settings for all scenarios, it performs better or is on par with state-of-the-art methods. Ray casting is utilized to identify and classify fully observed empty regions. Since these regions have been observed empty, it follows that anything inside them at another time must be dynamic. Evaluation is carried out in various scenarios, including outdoor environments in KITTI and Argoverse 2, open areas on the KTH campus, and with different sensor types. DUFOMap outperforms the state of the art in terms of accuracy and computational efficiency. The source code, benchmarks, and links to the datasets utilized are provided. See https://kth-rpl.github.io/dufomap for more details.
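The abstract's core idea is that ray casting marks voxels as fully observed empty ("voids"), so any point that later falls inside such a voxel must be dynamic. A minimal sketch of that reasoning follows; this is an illustration of the principle only, not DUFOMap's actual implementation, and the voxel size, function names, and the simple sub-voxel ray sampling (a real system would use an exact traversal) are all assumptions:

```python
import numpy as np

VOXEL = 0.5  # assumed voxel size in metres

def voxels_on_ray(origin, endpoint):
    """Voxels crossed by a sensor ray before its endpoint,
    sampled at sub-voxel steps (endpoint voxel excluded)."""
    direction = endpoint - origin
    length = np.linalg.norm(direction)
    n = max(int(length / (VOXEL * 0.5)), 1)
    pts = origin + direction * (np.arange(n)[:, None] / n)
    return {tuple(np.floor(p / VOXEL).astype(int)) for p in pts}

def integrate_scan(void_set, origin, points):
    """Mark every voxel traversed by a ray as observed empty."""
    for p in points:
        void_set |= voxels_on_ray(origin, p)

def dynamic_mask(void_set, points):
    """A point inside a previously-observed-empty voxel is dynamic."""
    keys = [tuple(np.floor(p / VOXEL).astype(int)) for p in points]
    return np.array([k in void_set for k in keys])
```

Anything observed mid-ray at a later time is flagged, while points at ray endpoints (surfaces) are not, which matches the paper's stated "observed empty implies later occupant is dynamic" argument.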
Related papers
- GOTLoc: General Outdoor Text-based Localization Using Scene Graph Retrieval with OpenStreetMap [4.51019574688293]
We propose GOTLoc, a robust localization method capable of operating even in outdoor environments where GPS signals are unavailable.
The method achieves this robust localization by leveraging comparisons between scene graphs generated from text descriptions and maps.
Our results demonstrate that the proposed method achieves accuracy comparable to algorithms relying on point cloud maps.
arXiv Detail & Related papers (2025-01-15T04:51:10Z)
- BeautyMap: Binary-Encoded Adaptable Ground Matrix for Dynamic Points Removal in Global Maps [15.124066060292593]
Existing dynamic removal methods fail to balance computational efficiency and accuracy.
We present BeautyMap to efficiently remove the dynamic points while retaining static features for high-fidelity global maps.
Our approach utilizes a binary-encoded matrix to efficiently extract the environment features.
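The binary-encoding idea, packing a vertical column of height bins into one integer per ground cell so that occupancy comparisons become bitwise operations, could be sketched roughly as below. This is one plausible reading of the abstract, not BeautyMap's actual code; the bin count, height resolution, and names are assumptions:

```python
H_BINS, Z_MIN, Z_RES = 32, -2.0, 0.25  # assumed vertical discretisation

def encode_column(zs):
    """Pack the occupied height bins of one ground cell into a 32-bit int."""
    code = 0
    for z in zs:
        b = int((z - Z_MIN) / Z_RES)
        if 0 <= b < H_BINS:
            code |= 1 << b
    return code

def changed_bits(map_code, scan_code):
    """Bins occupied in the current scan but never in the static map."""
    return scan_code & ~map_code
```

The appeal of this representation is that comparing a scan column against the accumulated map is a single AND/NOT per cell rather than a loop over height bins.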
arXiv Detail & Related papers (2024-05-12T13:48:18Z)
- PRISM-TopoMap: Online Topological Mapping with Place Recognition and Scan Matching [42.74395278382559]
This paper introduces PRISM-TopoMap -- a topological mapping method that maintains a graph of locally aligned locations.
The proposed method pairs an original learnable multimodal place-recognition module with a scan-matching pipeline for localization and loop closure.
We conduct a broad experimental evaluation of the approach in a range of photo-realistic environments and on a real robot, and compare it to the state of the art.
arXiv Detail & Related papers (2024-04-02T06:25:16Z)
- Mapping High-level Semantic Regions in Indoor Environments without Object Recognition [50.624970503498226]
The present work proposes a method for semantic region mapping via embodied navigation in indoor environments.
To enable region identification, the method uses a vision-to-language model to provide scene information for mapping.
By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location.
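A semantic map kept as a per-cell distribution over region labels, updated from egocentric predictions already projected into the global frame, might look like the following hypothetical sketch; the cell size, label set, and additive update rule are assumptions for illustration, not the paper's method:

```python
import numpy as np

LABELS = ["kitchen", "bedroom", "hallway"]  # assumed label set
CELL = 0.5                                  # assumed grid cell size (m)

class RegionMap:
    def __init__(self):
        self.counts = {}  # (i, j) -> accumulated label scores

    def update(self, world_xy, label_probs):
        """Fuse one egocentric prediction at a global-frame location."""
        key = tuple(np.floor(np.asarray(world_xy) / CELL).astype(int))
        acc = self.counts.setdefault(key, np.zeros(len(LABELS)))
        acc += np.asarray(label_probs)

    def distribution(self, world_xy):
        """Normalised label distribution for the cell at this location."""
        key = tuple(np.floor(np.asarray(world_xy) / CELL).astype(int))
        acc = self.counts.get(key)
        if acc is None or acc.sum() == 0:
            return np.full(len(LABELS), 1.0 / len(LABELS))  # uniform prior
        return acc / acc.sum()
```

Repeated observations of the same cell sharpen its distribution, while unvisited cells fall back to a uniform prior.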
arXiv Detail & Related papers (2024-03-11T18:09:50Z)
- Loopy-SLAM: Dense Neural SLAM with Loop Closures [53.11936461015725]
We introduce Loopy-SLAM, which globally optimizes poses and the dense 3D model.
We perform frame-to-model tracking with a data-driven point-based submap generation method and trigger loop closures online by performing global place recognition.
Evaluation on the synthetic Replica and real-world TUM-RGBD and ScanNet datasets demonstrates competitive or superior performance in tracking, mapping, and rendering accuracy compared to existing dense neural RGBD SLAM methods.
arXiv Detail & Related papers (2024-02-14T18:18:32Z)
- Towards Real-World Visual Tracking with Temporal Contexts [64.7981374129495]
We propose a two-level framework (TCTrack) that can exploit temporal contexts efficiently.
Based on it, we propose a stronger version for real-world visual tracking, i.e., TCTrack++.
For feature extraction, we propose an attention-based temporally adaptive convolution to enhance the spatial features.
For similarity map refinement, we introduce an adaptive temporal transformer to encode the temporal knowledge efficiently.
arXiv Detail & Related papers (2023-08-20T17:59:40Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns a memory-efficient, dense 3D geometry, and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- Neural Map Prior for Autonomous Driving [17.198729798817094]
High-definition (HD) semantic maps are crucial in enabling autonomous vehicles to navigate urban environments.
Traditional methods of creating offline HD maps involve labor-intensive manual annotation processes.
Recent studies have proposed an alternative approach that generates local maps using online sensor observations.
In this study, we propose Neural Map Prior (NMP), a neural representation of global maps.
arXiv Detail & Related papers (2023-04-17T17:58:40Z)
- Weakly-Supervised Multi-Granularity Map Learning for Vision-and-Language Navigation [87.52136927091712]
We address a practical yet challenging problem of training robot agents to navigate in an environment following a path described by some language instructions.
To achieve accurate and efficient navigation, it is critical to build a map that accurately represents both spatial location and the semantic information of the environment objects.
We propose a multi-granularity map, which contains both object fine-grained details (e.g., color, texture) and semantic classes, to represent objects more comprehensively.
arXiv Detail & Related papers (2022-10-14T04:23:27Z)
- MAOMaps: A Photo-Realistic Benchmark For vSLAM and Map Merging Quality Assessment [0.0]
We introduce a novel benchmark that is aimed at quantitatively evaluating the quality of vision-based simultaneous localization and mapping (vSLAM) and map merging algorithms.
The dataset is photo-realistic and provides both the localization and the map ground truth data.
To compare the vSLAM-built maps and the ground-truth ones we introduce a novel way to find correspondences between them that takes the SLAM context into account.
arXiv Detail & Related papers (2021-05-31T14:30:36Z)
- Rethinking Localization Map: Towards Accurate Object Perception with Self-Enhancement Maps [78.2581910688094]
This work introduces a novel self-enhancement method to harvest accurate object localization maps and object boundaries with only category labels as supervision.
In particular, the proposed Self-Enhancement Maps achieve the state-of-the-art localization accuracy of 54.88% on ILSVRC.
arXiv Detail & Related papers (2020-06-09T12:35:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.