Towards Robust Monocular Visual Odometry for Flying Robots on Planetary
Missions
- URL: http://arxiv.org/abs/2109.05509v1
- Date: Sun, 12 Sep 2021 12:52:20 GMT
- Title: Towards Robust Monocular Visual Odometry for Flying Robots on Planetary
Missions
- Authors: Martin Wudenka and Marcus G. Müller and Nikolaus Demmel and Armin
Wedler and Rudolph Triebel and Daniel Cremers and Wolfgang Stürzl
- Abstract summary: Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by terrain traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
- Score: 49.79068659889639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the future, extraterrestrial expeditions will not only be conducted by
rovers but also by flying robots. The technology demonstration drone Ingenuity,
which just landed on Mars, will mark the beginning of a new era of exploration
unhindered by terrain traversability. Robust self-localization is crucial for
such missions. Cameras, being lightweight, cheap, and information-rich sensors,
are already used to estimate the ego-motion of vehicles. However, methods proven
to work in man-made environments cannot simply be deployed on other planets. The
highly repetitive textures present in the wastelands of Mars pose a huge
challenge to approaches based on descriptor matching.
  In this paper, we present an advanced robust monocular odometry algorithm
that uses efficient optical flow tracking to obtain feature correspondences
between images, combined with a refined keyframe selection criterion. In contrast to most
other approaches, our framework can also handle rotation-only motions that are
particularly challenging for monocular odometry systems. Furthermore, we
present a novel approach to estimate the current risk of scale drift based on a
principal component analysis of the relative translation information matrix.
This way we obtain an implicit measure of uncertainty. We evaluate the validity
of our approach on all sequences of a challenging real-world dataset captured
in a Mars-like environment and show that it outperforms state-of-the-art
approaches.
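To make the two core ideas more concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): feature correspondences obtained with pyramidal Lucas-Kanade (KLT) optical flow, and a scale-drift risk proxy derived from an eigen-decomposition (i.e. a PCA) of the 3x3 relative-translation information matrix. The function names, parameters, and the exact risk formula are illustrative assumptions.

```python
# Illustrative sketch only, assuming OpenCV and NumPy; function names and the
# risk formula are hypothetical and not taken from the paper.
import cv2
import numpy as np


def track_features(prev_gray, curr_gray, prev_pts):
    """Obtain feature correspondences with pyramidal Lucas-Kanade (KLT)
    optical flow instead of descriptor matching.
    prev_pts: float32 array of shape (N, 1, 2)."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1
    return prev_pts[ok], curr_pts[ok]


def scale_drift_risk(info_translation):
    """Toy proxy for scale-drift risk: eigen-decompose (PCA) the 3x3
    relative-translation information matrix. If the smallest eigenvalue is
    tiny relative to the largest, the translation (and with it the scale)
    is weakly constrained, so the returned risk approaches 1."""
    eigvals = np.linalg.eigvalsh(info_translation)  # ascending order
    return 1.0 - eigvals[0] / max(eigvals[-1], 1e-12)
```

In this sketch, a near-rotation-only motion leaves the relative translation poorly constrained along at least one direction, driving the smallest eigenvalue towards zero and the risk value towards one, which mirrors the intuition behind the paper's implicit uncertainty measure.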
Related papers
- MARs: Multi-view Attention Regularizations for Patch-based Feature Recognition of Space Terrain [4.87717454493713]
Current approaches rely on template matching with pre-gathered patch-based features.
We introduce Multi-view Attention Regularizations (MARs) to constrain the channel and spatial attention across multiple feature views.
We demonstrate improvements in terrain-feature recognition performance of upwards of 85%.
arXiv Detail & Related papers (2024-10-07T16:41:45Z)
- Structure-Invariant Range-Visual-Inertial Odometry [17.47284320862407]
This work introduces a novel range-visual-inertial odometry system tailored for the Mars Science Helicopter mission.
Our system extends the state-of-the-art xVIO framework by fusing consistent range information with visual and inertial measurements.
We demonstrate that our range-VIO approach estimates terrain-relative velocity meeting the stringent mission requirements.
arXiv Detail & Related papers (2024-09-06T21:49:10Z)
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves performance comparable with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% of the training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z)
- A Neuromorphic Vision-Based Measurement for Robust Relative Localization in Future Space Exploration Missions [0.0]
This work proposes a robust relative localization system based on a fusion of neuromorphic vision-based measurements (NVBMs) and inertial measurements.
The proposed system was tested in a variety of experiments and has outperformed state-of-the-art approaches in accuracy and range.
arXiv Detail & Related papers (2022-06-23T08:39:05Z)
- Exploring Event Camera-based Odometry for Planetary Robots [39.46226359115717]
Event cameras are poised to become enabling sensors for vision-based exploration on future Mars helicopter missions.
Existing event-based visual-inertial odometry (VIO) algorithms either suffer from high tracking errors or are brittle.
We introduce EKLT-VIO, which addresses both limitations by combining a state-of-the-art event-based frontend with a filter-based backend.
arXiv Detail & Related papers (2022-04-12T15:19:50Z)
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state-space guided by a modest number of human provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- Latent World Models For Intrinsically Motivated Exploration [140.21871701134626]
We present a self-supervised representation learning method for image-based observations.
We consider episodic and life-long uncertainties to guide the exploration of partially observable environments.
arXiv Detail & Related papers (2020-10-05T19:47:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.