IR-MCL: Implicit Representation-Based Online Global Localization
- URL: http://arxiv.org/abs/2210.03113v1
- Date: Thu, 6 Oct 2022 17:59:08 GMT
- Title: IR-MCL: Implicit Representation-Based Online Global Localization
- Authors: Haofei Kuang, Xieyuanli Chen, Tiziano Guadagnino, Nicky Zimmerman,
Jens Behley and Cyrill Stachniss
- Abstract summary: In this paper, we address the problem of estimating the robot's pose in an indoor environment using 2D LiDAR data.
We propose a neural occupancy field (NOF) to implicitly represent the scene using a neural network.
We show that we can accurately and efficiently localize a robot using our approach, surpassing the localization performance of state-of-the-art methods.
- Score: 31.77645160411745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Determining the state of a mobile robot is an essential building block of
robot navigation systems. In this paper, we address the problem of estimating
the robot's pose in an indoor environment using 2D LiDAR data and investigate
how modern environment models can improve gold standard Monte-Carlo
localization (MCL) systems. We propose a neural occupancy field (NOF) to
implicitly represent the scene using a neural network. With the pretrained
network, we can synthesize 2D LiDAR scans for an arbitrary robot pose through
volume rendering. Based on the implicit representation, we can obtain the
similarity between a synthesized and actual scan as an observation model and
integrate it into an MCL system to perform accurate localization. We evaluate
our approach on five sequences of a self-recorded dataset and three publicly
available datasets. We show that we can accurately and efficiently localize a
robot using our approach, surpassing the localization performance of
state-of-the-art methods. The experiments suggest that the presented implicit
representation is able to predict more accurate 2D LiDAR scans leading to an
improved observation model for our particle filter-based localization. The code
of our approach is released at: https://github.com/PRBonn/ir-mcl.
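As a rough illustration of the pipeline described in the abstract, the sketch below shows how an expected range can be volume-rendered along a 2D LiDAR ray through an occupancy field, and how the resulting synthesized scan can re-weight particles in an MCL measurement update. This is not the authors' implementation (see the linked repository for that): the trained neural occupancy field is stood in for by a plain `occupancy_fn` callable, and the Gaussian beam model, `sigma`, and sample counts are illustrative assumptions.

```python
import numpy as np

def render_depth(occupancy_fn, origin, angle, n_samples=64, max_range=10.0):
    """Synthesize the expected range along one 2D LiDAR ray via volume rendering.

    occupancy_fn(points) -> occupancy in [0, 1] for an (N, 2) array of 2D points;
    it stands in for the trained neural occupancy field (NOF).
    """
    ts = np.linspace(0.0, max_range, n_samples)          # sample distances along the ray
    direction = np.array([np.cos(angle), np.sin(angle)])
    pts = origin + ts[:, None] * direction               # (N, 2) sample points
    occ = occupancy_fn(pts)                              # per-sample occupancy
    # Termination weights: probability the ray first hits at sample i
    # (transmittance up to i times occupancy at i).
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - occ[:-1]]))
    weights = trans * occ
    weights = weights / (weights.sum() + 1e-9)
    return float((weights * ts).sum())                   # expected depth

def beam_likelihood(synth, real, sigma=0.2):
    """Toy Gaussian beam model: likelihood of the real scan given the synthesized one."""
    return float(np.exp(-0.5 * np.sum((synth - real) ** 2) / sigma**2))

def mcl_update(particles, weights, real_scan, occupancy_fn, angles):
    """One MCL measurement update: re-weight (x, y, theta) particles by scan similarity."""
    new_w = np.empty(len(particles))
    for i, (x, y, theta) in enumerate(particles):
        synth = np.array([render_depth(occupancy_fn, np.array([x, y]), theta + a)
                          for a in angles])
        new_w[i] = weights[i] * beam_likelihood(synth, real_scan)
    return new_w / new_w.sum()
```

With a toy field consisting of a single wall at x = 5, a ray cast from the origin along +x renders an expected depth of roughly 5, and a particle at the true pose receives nearly all of the posterior weight after one update.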
Related papers
- The Oxford Spires Dataset: Benchmarking Large-Scale LiDAR-Visual Localisation, Reconstruction and Radiance Field Methods [10.265865092323041]
This paper introduces a large-scale multi-modal dataset captured in and around well-known landmarks in Oxford.
We also establish benchmarks for tasks involving localisation, reconstruction, and novel-view synthesis.
Our dataset and benchmarks are intended to facilitate better integration of radiance field methods and SLAM systems.
arXiv Detail & Related papers (2024-11-15T19:43:24Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping [57.173793973480656]
We present the first fully distributed multi-robot system for dense metric-semantic SLAM.
Our system, dubbed Kimera-Multi, is implemented by a team of robots equipped with visual-inertial sensors.
Kimera-Multi builds a 3D mesh model of the environment in real-time, where each face of the mesh is annotated with a semantic label.
arXiv Detail & Related papers (2020-11-08T21:38:12Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.