Rover Relocalization for Mars Sample Return by Virtual Template
Synthesis and Matching
- URL: http://arxiv.org/abs/2103.03395v1
- Date: Fri, 5 Mar 2021 00:18:33 GMT
- Title: Rover Relocalization for Mars Sample Return by Virtual Template
Synthesis and Matching
- Authors: Tu-Hoa Pham, William Seto, Shreyansh Daftry, Barry Ridge, Johanna
Hansen, Tristan Thrush, Mark Van der Merwe, Gerard Maggiolino, Alexander
Brinkman, John Mayo, Yang Cheng, Curtis Padgett, Eric Kulczycki, Renaud Detry
- Abstract summary: We consider the problem of rover relocalization in the context of the notional Mars Sample Return campaign.
In this campaign, a rover (R1) needs to be capable of autonomously navigating and localizing itself within an area of approximately 50 x 50 m.
We propose a visual localizer that exhibits robustness to the relatively barren terrain that we expect to find in relevant areas.
- Score: 48.0956967976633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of rover relocalization in the context of the
notional Mars Sample Return campaign. In this campaign, a rover (R1) needs to
be capable of autonomously navigating and localizing itself within an area of
approximately 50 x 50 m using reference images collected years earlier by
another rover (R0). We propose a visual localizer that exhibits robustness to
the relatively barren terrain that we expect to find in relevant areas, and to
large lighting and viewpoint differences between R0 and R1. The localizer
synthesizes partial renderings of a mesh built from reference R0 images and
matches those to R1 images. We evaluate our method on a dataset totaling 2160
images covering the range of expected environmental conditions (terrain,
lighting, approach angle). Experimental results show the effectiveness of our
approach. This work informs the Mars Sample Return campaign on the choice of a
site where Perseverance (R0) will place a set of sample tubes for future
retrieval by another rover (R1).
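To make the synthesize-and-match idea concrete, here is a minimal Python sketch. It assumes a bank of (rendered image, pose) templates has already been synthesized offline from the mesh built from R0 imagery, and scores each template against the R1 query with zero-mean normalized cross-correlation; the function names and the scoring choice are illustrative, not the paper's implementation.

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between equal-size grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def relocalize(query: np.ndarray, templates):
    """Pick the pose whose synthetic rendering best matches the R1 query.

    templates: iterable of (rendered_image, pose) pairs, synthesized from the
    mesh built from reference R0 imagery at sampled candidate viewpoints.
    """
    best_pose, best_score = None, -np.inf
    for rendered, pose in templates:
        score = zncc(query, rendered)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```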
Related papers
- Boosting 3-DoF Ground-to-Satellite Camera Localization Accuracy via Geometry-Guided Cross-View Transformer [66.82008165644892]
We propose a method to improve the accuracy of a ground camera's estimated location and orientation by estimating the relative rotation and translation between the ground-level image and its matched/retrieved satellite image.
Experimental results demonstrate that our method significantly outperforms the state of the art.
arXiv Detail & Related papers (2023-07-16T11:52:27Z) - Mars Rover Localization Based on A2G Obstacle Distribution Pattern
- Mars Rover Localization Based on A2G Obstacle Distribution Pattern Matching [0.0]
In NASA's Mars 2020 mission, the Ingenuity helicopter is carried together with the rover.
Traditional image matching methods struggle to obtain valid correspondences between such aerial and ground views.
An algorithm combining image-based rock detection and rock distribution pattern matching is used to acquire air-to-ground (A2G) imagery correspondence.
arXiv Detail & Related papers (2022-10-07T08:29:48Z) - Visual Cross-View Metric Localization with Dense Uncertainty Estimates [11.76638109321532]
- Visual Cross-View Metric Localization with Dense Uncertainty Estimates [11.76638109321532]
This work addresses visual cross-view metric localization for outdoor robotics.
Given a ground-level color image and a satellite patch that contains the local surroundings, the task is to identify the location of the ground camera within the satellite patch.
We devise a novel network architecture with denser satellite descriptors, similarity matching at the bottleneck, and a dense spatial distribution as output to capture multi-modal localization ambiguities.
arXiv Detail & Related papers (2022-08-17T20:12:23Z) - CroCo: Cross-Modal Contrastive learning for localization of Earth
- CroCo: Cross-Modal Contrastive learning for localization of Earth Observation data [62.96337162094726]
It is of interest to localize a ground-based LiDAR point cloud on remote sensing imagery.
We propose a contrastive learning-based method that trains on DEM and high-resolution optical imagery.
In the best scenario, a Top-1 score of 0.71 and a Top-5 score of 0.81 are obtained.
arXiv Detail & Related papers (2022-04-14T15:55:00Z) - Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z) - Machine Vision based Sample-Tube Localization for Mars Sample Return [3.548901442158138]
- Machine Vision based Sample-Tube Localization for Mars Sample Return [3.548901442158138]
A potential Mars Sample Return (MSR) architecture is being jointly studied by NASA and ESA.
In this paper, we focus on the fetch part of the MSR, and more specifically the problem of autonomously detecting and localizing sample tubes deposited on the Martian surface.
We study two machine-vision-based approaches: first, a geometry-driven approach based on template matching, which uses hard-coded filters and a 3D shape model of the tube; and second, a data-driven approach based on convolutional neural networks (CNNs) and learned features.
arXiv Detail & Related papers (2021-03-17T23:09:28Z) - Latent World Models For Intrinsically Motivated Exploration [140.21871701134626]
- Latent World Models For Intrinsically Motivated Exploration [140.21871701134626]
We present a self-supervised representation learning method for image-based observations.
We consider episodic and life-long uncertainties to guide the exploration of partially observable environments.
arXiv Detail & Related papers (2020-10-05T19:47:04Z) - Towards Image-based Automatic Meter Reading in Unconstrained Scenarios:
- Towards Image-based Automatic Meter Reading in Unconstrained Scenarios: A Robust and Efficient Approach [60.63996472100845]
We present an end-to-end approach for Automatic Meter Reading (AMR) focusing on unconstrained scenarios.
Our main contribution is the insertion of a new stage in the AMR pipeline, called corner detection and counter classification.
We show that our AMR system achieves impressive recognition rates (i.e., > 99%) when rejecting readings made with lower confidence values.
arXiv Detail & Related papers (2020-09-21T21:21:23Z) - RGB2LIDAR: Towards Solving Large-Scale Cross-Modal Visual Localization [20.350871370274238]
- RGB2LIDAR: Towards Solving Large-Scale Cross-Modal Visual Localization [20.350871370274238]
We study an important, yet largely unexplored problem of large-scale cross-modal visual localization.
We introduce a new dataset containing over 550K pairs of RGB and aerial LIDAR depth images.
We propose a novel joint embedding based method that effectively combines the appearance and semantic cues from both modalities.
arXiv Detail & Related papers (2020-09-12T01:18:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.