Hardware-accelerated Mars Sample Localization via deep transfer learning
from photorealistic simulations
- URL: http://arxiv.org/abs/2206.02622v1
- Date: Mon, 6 Jun 2022 14:05:25 GMT
- Title: Hardware-accelerated Mars Sample Localization via deep transfer learning
from photorealistic simulations
- Authors: Raúl Castilla-Arquillo, Carlos Jesús Pérez-del-Pulgar, Gonzalo Jesús Paz-Delgado and Levin Gerdes
- Abstract summary: The goal of the Mars Sample Return campaign is to collect soil samples from the surface of Mars and return them to Earth for further study.
It is expected that the Sample Fetch Rover will be in charge of localizing and gathering up to 35 sample tubes over 150 Martian sols.
This work proposes a novel approach for the autonomous detection and pose estimation of the sample tubes.
- Score: 1.3075880857448061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of the Mars Sample Return campaign is to collect soil samples from
the surface of Mars and return them to Earth for further study. The samples
will be acquired and stored in metal tubes by the Perseverance rover and
deposited on the Martian surface. As part of this campaign, it is expected that
the Sample Fetch Rover will be in charge of localizing and gathering up to 35
sample tubes over 150 Martian sols. Autonomous capabilities are critical for
the success of the overall campaign and for the Sample Fetch Rover in
particular. This work proposes a novel approach for the autonomous detection
and pose estimation of the sample tubes. For the detection stage, a Deep Neural
Network and transfer learning from a synthetic dataset are proposed. The
dataset is created from photorealistic 3D simulations of Martian scenarios.
Additionally, Computer Vision techniques are used to estimate the poses of the
detected sample tubes. Finally, laboratory tests of the Sample Localization
procedure are performed using the ExoMars Testing Rover on a Mars-like testbed.
These tests validate the proposed approach on different hardware architectures,
providing promising results for sample detection and pose estimation.
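The abstract names "Computer Vision techniques" for tube pose estimation without detailing them. As an illustration only, here is a minimal sketch of one common approach for cylinder-like objects: recovering a tube's position and axis direction from a 3D point cloud (e.g. a depth-camera crop around the detection) via a principal-axis fit. The function name, the input format, and the use of SVD are assumptions for this sketch, not the authors' method.

```python
import numpy as np

def estimate_tube_pose(points):
    """Estimate a sample tube's pose from 3D points on its surface.

    points: (N, 3) array of 3D points, e.g. a depth-camera point
    cloud cropped to the tube's detection bounding box.
    Returns (centroid, axis): the tube's estimated position and a
    unit vector along its long axis.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The dominant right singular vector of the centered cloud is the
    # direction of greatest spread, i.e. the cylinder's long axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Pick a canonical sign so the result is deterministic.
    if axis[0] < 0:
        axis = -axis
    return centroid, axis

# Synthetic check: points on a 20 cm tube of radius 2 cm aligned
# with the x-axis, centered at (1.0, 2.0, 0.5).
rng = np.random.default_rng(0)
t = rng.uniform(-0.1, 0.1, size=(500, 1))
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
r = 0.02
pts = np.hstack([t,
                 (r * np.cos(theta))[:, None],
                 (r * np.sin(theta))[:, None]]) + np.array([1.0, 2.0, 0.5])
centroid, axis = estimate_tube_pose(pts)
```

A full 6-DoF pose would additionally require resolving the tube's roll and end-cap orientation, which this axis fit alone cannot provide.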
Related papers
- ConeQuest: A Benchmark for Cone Segmentation on Mars [9.036303895516745]
ConeQuest is the first expert-annotated public dataset to identify cones on Mars.
We propose two benchmark tasks using ConeQuest: (i) Spatial Generalization and (ii) Cone-size Generalization.
arXiv Detail & Related papers (2023-11-15T02:33:08Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based
Self-Supervised Pre-Training [58.07391711548269]
We propose the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z) - MaRF: Representing Mars as Neural Radiance Fields [1.4680035572775534]
MaRF is a framework able to synthesize the Martian environment using several collections of images from rover cameras.
It addresses key challenges in planetary surface exploration such as: planetary geology, simulated navigation and shape analysis.
In the experimental section, we demonstrate the environments created from actual Mars datasets captured by Curiosity rover, Perseverance rover and Ingenuity helicopter.
arXiv Detail & Related papers (2022-12-03T18:58:00Z) - Mars Rover Localization Based on A2G Obstacle Distribution Pattern
Matching [0.0]
In NASA's Mars 2020 mission, the Ingenuity helicopter is carried together with the rover.
Traditional image matching methods will struggle to obtain valid image correspondence.
An algorithm combining image-based rock detection and rock distribution pattern matching is used to acquire A2G imagery correspondence.
arXiv Detail & Related papers (2022-10-07T08:29:48Z) - Mixed-domain Training Improves Multi-Mission Terrain Segmentation [0.9566312408744931]
Current Martian terrain segmentation models require retraining for deployment across different domains.
This research proposes a semi-supervised learning approach that leverages unsupervised contrastive pretraining of a backbone for multi-mission semantic segmentation of Martian surfaces.
arXiv Detail & Related papers (2022-09-27T20:25:24Z) - 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary
Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by terrain traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Machine Vision based Sample-Tube Localization for Mars Sample Return [3.548901442158138]
A potential Mars Sample Return (MSR) architecture is being jointly studied by NASA and ESA.
In this paper, we focus on the fetch part of the MSR, and more specifically the problem of autonomously detecting and localizing sample tubes deposited on the Martian surface.
We study two machine-vision based approaches: First, a geometry-driven approach based on template matching that uses hard-coded filters and a 3D shape model of the tube; and second, a data-driven approach based on convolutional neural networks (CNNs) and learned features.
arXiv Detail & Related papers (2021-03-17T23:09:28Z) - Rover Relocalization for Mars Sample Return by Virtual Template
Synthesis and Matching [48.0956967976633]
We consider the problem of rover relocalization in the context of the notional Mars Sample Return campaign.
In this campaign, a rover (R1) needs to be capable of autonomously navigating and localizing itself within an area of approximately 50 x 50 m.
We propose a visual localizer that exhibits robustness to the relatively barren terrain that we expect to find in relevant areas.
arXiv Detail & Related papers (2021-03-05T00:18:33Z) - Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible viewpoints for grasping using hand-mounted RGB-D cameras.
A practical 3-stage transferable active grasping pipeline is developed that adapts to unseen clutter scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.