Solving Occlusion in Terrain Mapping with Neural Networks
- URL: http://arxiv.org/abs/2109.07150v1
- Date: Wed, 15 Sep 2021 08:30:16 GMT
- Title: Solving Occlusion in Terrain Mapping with Neural Networks
- Authors: Maximilian Stölzle, Takahiro Miki, Levin Gerdes, Martin Azkarate, and Marco Hutter
- Abstract summary: We introduce a self-supervised learning approach capable of training on real-world data without a need for ground-truth information.
Our neural network is able to run in real-time on both CPU and GPU with suitable sampling rates for autonomous ground robots.
- Score: 7.703348666813963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate and complete terrain maps enhance the awareness of autonomous robots
and enable safe and optimal path planning. Rocks and topography often create
occlusions and lead to missing elevation information in the Digital Elevation
Map (DEM). Currently, autonomous mobile robots mostly rely on traditional
inpainting techniques based on diffusion or patch matching to fill in
incomplete DEMs. These methods cannot leverage the high-level terrain
characteristics and the line-of-sight geometric constraints that we humans use
intuitively to predict occluded areas. We propose to use neural networks to
reconstruct the occluded areas in DEMs. We introduce a self-supervised learning
approach capable of training on real-world data without a need for ground-truth
information. We accomplish this by adding artificial occlusion to the
incomplete elevation maps constructed on a real robot by performing ray
casting. We first evaluate a supervised learning approach on synthetic data for
which we have the full ground-truth available and subsequently move to several
real-world datasets. These real-world datasets were recorded during autonomous
exploration of both structured and unstructured terrain with a legged robot,
and additionally in a planetary scenario on Lunar analogue terrain. We report a
significant improvement over the Telea and Navier-Stokes baseline
methods both on synthetic terrain and on the real-world datasets. Our neural
network is able to run in real-time on both CPU and GPU with suitable sampling
rates for autonomous ground robots.
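
As a rough illustration (not the authors' implementation), the sketch below shows how the two classical baselines named in the abstract, Telea and Navier-Stokes inpainting, can be applied to an incomplete DEM with OpenCV, and how artificial occlusion can be added to an already incomplete map to build self-supervised training pairs. The NaN convention for missing cells, the rescaling to 8-bit required by cv2.inpaint, and the random-rectangle occlusion masks are assumptions for illustration; the paper derives its occlusion masks from line-of-sight ray casting rather than random patches.

```python
import numpy as np
import cv2


def inpaint_dem(dem, method=cv2.INPAINT_TELEA):
    """Fill NaN (occluded) cells of a float DEM with a classical inpainting baseline.

    cv2.inpaint works on 8-bit images, so heights are rescaled to [0, 255],
    inpainted, and mapped back to metres; observed cells are left untouched.
    """
    mask = np.isnan(dem).astype(np.uint8)              # 1 where elevation is missing
    lo, hi = np.nanmin(dem), np.nanmax(dem)
    scale = 255.0 / max(hi - lo, 1e-6)
    dem8 = np.where(mask, 0, (dem - lo) * scale).astype(np.uint8)
    filled8 = cv2.inpaint(dem8, mask, inpaintRadius=3, flags=method)
    filled = filled8.astype(np.float32) / scale + lo
    return np.where(mask, filled, dem)


def add_artificial_occlusion(dem, n_patches=5, rng=None):
    """Hide extra patches of an already incomplete DEM so that the observed
    values can act as self-supervision targets. The paper obtains such masks
    by ray casting along the sensor's line of sight; random rectangles are
    used here purely as a stand-in.
    """
    rng = np.random.default_rng() if rng is None else rng
    occluded = dem.copy()
    h, w = dem.shape
    for _ in range(n_patches):
        ph, pw = rng.integers(5, h // 4), rng.integers(5, w // 4)
        r, c = rng.integers(0, h - ph), rng.integers(0, w - pw)
        occluded[r:r + ph, c:c + pw] = np.nan
    return occluded


# Toy example: a synthetic 2.5D terrain with a missing wedge, filled by both baselines.
yy, xx = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
dem = 0.5 * np.sin(6.0 * xx) + 0.3 * yy
dem[20:40, 30:50] = np.nan                              # simulated occlusion shadow
telea = inpaint_dem(dem, cv2.INPAINT_TELEA)
navier_stokes = inpaint_dem(dem, cv2.INPAINT_NS)
train_input = add_artificial_occlusion(dem)             # network input; `dem` is the target
```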
Related papers
- Learning Humanoid Locomotion over Challenging Terrain [84.35038297708485]
We present a learning-based approach for blind humanoid locomotion capable of traversing challenging natural and man-made terrains.
Our model is first pre-trained on a dataset of flat-ground trajectories with sequence modeling, and then fine-tuned on uneven terrain using reinforcement learning.
We evaluate our model on a real humanoid robot across a variety of terrains, including rough, deformable, and sloped surfaces.
arXiv Detail & Related papers (2024-10-04T17:57:09Z)
- Legged Robot State Estimation With Invariant Extended Kalman Filter Using Neural Measurement Network [2.0405494347486197]
We develop a state estimation framework that integrates a neural measurement network (NMN) with an invariant extended Kalman filter.
Our approach significantly reduces position drift compared to the existing model-based state estimator.
arXiv Detail & Related papers (2024-02-01T06:06:59Z)
- IR-MCL: Implicit Representation-Based Online Global Localization [31.77645160411745]
In this paper, we address the problem of estimating the robot's pose in an indoor environment using 2D LiDAR data.
We propose a neural occupancy field (NOF) to implicitly represent the scene using a neural network.
We show that we can accurately and efficiently localize a robot using our approach, surpassing the localization performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T17:59:08Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method for leveraging the large availability of satellite imagery.
We observe significant improvements up to 25% absolute mIoU when pre-trained with our proposed method.
We find that learnt features can generalize between disparate regions, opening up the possibility of using the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Deep Learning Traversability Estimator for Mobile Robots in Unstructured Environments [11.042142015353626]
We propose a deep learning framework, trained in an end-to-end fashion from elevation maps and trajectories, to estimate the occurrence of failure events.
We show that transferring and fine-tuning of an application-independent pre-trained model retains better performance than training uniquely on scarcely available real data.
arXiv Detail & Related papers (2021-05-23T13:49:05Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Learning Topometric Semantic Maps from Occupancy Grids [2.5234065536725963]
We propose a new approach for deriving such instance-based semantic maps purely from occupancy grids.
We employ a combination of deep learning techniques to detect, segment and extract door hypotheses from a random-sized map.
We evaluate our approach on several publicly available real-world data sets.
arXiv Detail & Related papers (2020-01-10T22:06:10Z)