Vision-based Geo-Localization of Future Mars Rotorcraft in Challenging Illumination Conditions
- URL: http://arxiv.org/abs/2502.09795v1
- Date: Thu, 13 Feb 2025 22:10:21 GMT
- Title: Vision-based Geo-Localization of Future Mars Rotorcraft in Challenging Illumination Conditions
- Authors: Dario Pisanti, Robert Hewitt, Roland Brockers, Georgios Georgakis
- Abstract summary: Geo-LoFTR is a geometry-aided deep learning model for image registration that is more robust under large illumination differences than prior models.
We show that our proposed system outperforms prior MbL efforts in terms of localization accuracy under significant lighting and scale variations.
- Score: 4.6901215692204286
- Abstract: Planetary exploration using aerial assets has the potential for unprecedented scientific discoveries on Mars. While NASA's Mars helicopter Ingenuity proved that flight in the Martian atmosphere is possible, future Mars rotorcraft will require advanced navigation capabilities for long-range flights. One such critical capability is Map-based Localization (MbL), which registers an onboard image to a reference map during flight in order to mitigate cumulative drift from visual odometry. However, significant illumination differences between rotorcraft observations and a reference map prove challenging for traditional MbL systems, restricting the operational window of the vehicle. In this work, we investigate a new MbL system and propose Geo-LoFTR, a geometry-aided deep learning model for image registration that is more robust under large illumination differences than prior models. The system is supported by a custom simulation framework that uses real orbital maps to produce large amounts of realistic images of the Martian terrain. Comprehensive evaluations show that our proposed system outperforms prior MbL efforts in terms of localization accuracy under significant lighting and scale variations. Furthermore, we demonstrate the validity of our approach across a simulated Martian day.
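For context on what the MbL registration step involves, here is a minimal sketch of a classical feature-based approach (ORB features with a RANSAC homography): the kind of traditional pipeline the paper improves upon, not the Geo-LoFTR model itself. File names are placeholders.

```python
# Hedged sketch of classical map-based localization (MbL): register an
# onboard image against a reference orbital map tile with ORB features
# and a RANSAC homography. Not the paper's Geo-LoFTR; inputs are placeholders.
import cv2
import numpy as np

onboard = cv2.imread("onboard_frame.png", cv2.IMREAD_GRAYSCALE)
map_tile = cv2.imread("orbital_map_tile.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(onboard, None)
kp2, des2 = orb.detectAndCompute(map_tile, None)

# Hamming-distance matching with Lowe's ratio test to prune ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [p[0] for p in matcher.knnMatch(des1, des2, k=2)
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# The homography maps onboard pixels to map pixels; projecting the image
# center yields a position fix on the map that can correct odometry drift.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = onboard.shape
fix = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
print("Position fix on map (px):", fix.ravel())
```

Hand-crafted matchers like this are exactly what breaks when shadows move between map acquisition and flight, which motivates a learned, geometry-aided matcher such as Geo-LoFTR.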
Related papers
- Structure-Invariant Range-Visual-Inertial Odometry [17.47284320862407]
This work introduces a novel range-visual-inertial odometry system tailored for the Mars Science Helicopter mission.
Our system extends the state-of-the-art xVIO framework by fusing consistent range information with visual and inertial measurements.
We demonstrate that our range-VIO approach estimates terrain-relative velocity with an accuracy that meets the stringent mission requirements.
arXiv Detail & Related papers (2024-09-06T21:49:10Z)
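To illustrate the core idea of fusing range with visual-inertial estimates, the sketch below shows why a single range measurement resolves the scale ambiguity of monocular VIO. It assumes a nadir-pointing rangefinder over locally flat terrain, and all numbers are illustrative, not from the paper.

```python
# Hedged sketch: monocular VIO recovers translation only up to an unknown
# scale s; a metric laser range to the terrain pins s down, assuming a
# nadir-pointing rangefinder and locally flat ground. Values are illustrative.
import numpy as np

altitude_vio = 3.2                         # unscaled altitude from VIO
velocity_vio = np.array([0.8, 0.1, 0.0])   # unscaled terrain-relative velocity

range_measured = 12.8                      # metric range to ground (m)

s = range_measured / altitude_vio          # metric scale factor
velocity_metric = s * velocity_vio         # terrain-relative velocity in m/s
print(f"scale = {s:.2f}, velocity = {velocity_metric} m/s")
```

The actual system fuses range consistently inside an xVIO filter rather than applying a one-shot correction like this.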
- MaRF: Representing Mars as Neural Radiance Fields [1.4680035572775534]
MaRF is a framework able to synthesize the Martian environment using several collections of images from rover cameras.
It addresses key challenges in planetary surface exploration such as planetary geology, simulated navigation, and shape analysis.
In the experimental section, we demonstrate the environments created from actual Mars datasets captured by the Curiosity rover, the Perseverance rover, and the Ingenuity helicopter.
arXiv Detail & Related papers (2022-12-03T18:58:00Z)
- Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z)
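For context on what front-to-top view projection means geometrically, here is a hedged sketch of the classical baseline, inverse perspective mapping (IPM), which warps the front view onto the ground plane with a homography. The paper learns this projection instead; all pixel coordinates below are made-up placeholders.

```python
# Hedged sketch of classical inverse perspective mapping (IPM): warp a
# front-camera image to a bird's-eye view using a ground-plane homography
# estimated from four calibrated correspondences. Coordinates are placeholders.
import cv2
import numpy as np

frame = cv2.imread("front_view.png")  # placeholder front-camera image

# Four ground-plane correspondences: image pixels -> BEV pixels, obtained
# once from calibration (e.g., road markings of known geometry).
src = np.float32([[420, 540], [860, 540], [1180, 700], [100, 700]])
dst = np.float32([[300, 200], [500, 200], [500, 600], [300, 600]])

H = cv2.getPerspectiveTransform(src, dst)        # image -> BEV homography
bev = cv2.warpPerspective(frame, H, (800, 800))  # top-down ground-plane view
cv2.imwrite("bev.png", bev)
```

IPM is exact only for points on the ground plane; anything with height gets smeared in the top-down view, which is one reason a learned projection helps.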
- LaMAR: Benchmarking Localization and Mapping for Augmented Reality [80.23361950062302]
We introduce LaMAR, a new benchmark with a comprehensive capture and ground-truth (GT) pipeline that co-registers realistic trajectories and sensor streams captured by heterogeneous AR devices.
We publish a benchmark dataset of diverse and large-scale scenes recorded with head-mounted and hand-held AR devices.
arXiv Detail & Related papers (2022-10-19T17:58:17Z)
- Mars Rover Localization Based on A2G Obstacle Distribution Pattern Matching [0.0]
In NASA's Mars 2020 mission, the Ingenuity helicopter is carried together with the rover.
Traditional image matching methods struggle to obtain valid image correspondences.
An algorithm combining image-based rock detection and rock distribution pattern matching is used to acquire air-to-ground (A2G) image correspondences.
arXiv Detail & Related papers (2022-10-07T08:29:48Z)
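To make the pattern-matching idea concrete, here is a hedged toy sketch: rock detections reduced to 2-D centroids in both views, with candidate map offsets scored by nearest-neighbor residuals. This is a generic stand-in for the paper's algorithm, and all data are synthetic.

```python
# Hedged toy sketch of rock-distribution pattern matching: score candidate
# translations by how well rover-detected rock centroids align with rocks
# detected in an aerial image. Generic stand-in; data are synthetic.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
aerial_rocks = rng.uniform(0, 100, size=(60, 2))        # map-frame rocks (m)
true_offset = np.array([37.0, 52.0])
ground_rocks = aerial_rocks[10:25] - true_offset        # rover-frame subset
ground_rocks += rng.normal(0, 0.3, ground_rocks.shape)  # detection noise

tree = cKDTree(aerial_rocks)

def alignment_cost(offset):
    """Mean nearest-neighbor distance after shifting rover rocks by offset."""
    dists, _ = tree.query(ground_rocks + offset)
    return dists.mean()

# Coarse 1 m grid search over candidate offsets (the real matcher is richer).
candidates = [(x, y) for x in range(101) for y in range(101)]
best = min(candidates, key=alignment_cost)
print("Recovered offset:", best, "true:", true_offset)
```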
- A Neuromorphic Vision-Based Measurement for Robust Relative Localization in Future Space Exploration Missions [0.0]
This work proposes a robust relative localization system based on a fusion of neuromorphic vision-based measurements (NVBMs) and inertial measurements.
The proposed system was tested in a variety of experiments and has outperformed state-of-the-art approaches in accuracy and range.
arXiv Detail & Related papers (2022-06-23T08:39:05Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method that leverages the wide availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when models are pre-trained with our proposed method.
We find that the learnt features can generalize between disparate regions, opening up the possibility of using the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
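Contrastive pre-training of this kind typically optimizes an InfoNCE-style objective. Below is a hedged, generic sketch of that loss in PyTorch, where two augmented views of the same satellite tile are positives and the other tiles in the batch are negatives; it is not the paper's exact formulation.

```python
# Hedged sketch of a generic InfoNCE-style contrastive loss. Two augmented
# views of the same tile are positives; other tiles in the batch serve as
# negatives. Not the paper's exact objective.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two views of the same B tiles."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: embed two augmentations of a batch of tiles with a shared encoder.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce(z1, z2)
```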
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
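As a rough illustration of the scale-drift idea summarized above, the sketch below eigen-decomposes a 3x3 information matrix of a relative translation estimate and treats a small minimum eigenvalue (a weakly constrained direction) as high drift risk. This is a generic reading of the summary, not the paper's exact criterion.

```python
# Hedged sketch: a small minimum eigenvalue of the relative-translation
# information matrix means the estimate is weakly constrained along one
# direction, the regime where monocular scale drifts. Generic illustration,
# not the paper's exact criterion.
import numpy as np

def scale_drift_risk(info_matrix):
    """Risk score in (0, 1]: higher means weaker translation constraints."""
    eigvals = np.linalg.eigvalsh(info_matrix)  # ascending order
    return 1.0 / (1.0 + eigvals[0])            # driven by the weakest direction

# Example: translation well constrained laterally but weak along depth.
Lambda = np.diag([50.0, 45.0, 0.2])
print(f"risk = {scale_drift_risk(Lambda):.2f}")  # close to 1 -> high risk
```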
- Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
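To sketch what anticipating occupancy beyond the visible region can look like architecturally, here is a hedged, minimal conv encoder-decoder in PyTorch that maps a partial egocentric occupancy map to full-map logits. Layer sizes and channel semantics are illustrative, not the paper's model.

```python
# Hedged architectural sketch: a tiny conv encoder-decoder that takes a
# partial egocentric occupancy map (channels: occupied, explored) and
# predicts occupancy logits beyond the visible region. Illustrative only.
import torch
import torch.nn as nn

class OccupancyAnticipator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, partial_map):
        # Logits for (occupied, free) over the full local map.
        return self.net(partial_map)

model = OccupancyAnticipator()
partial = torch.zeros(1, 2, 128, 128)  # egocentric map with unseen regions
anticipated = model(partial)           # (1, 2, 128, 128) logits
```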
- OpenStreetMap: Challenges and Opportunities in Machine Learning and Remote Sensing [66.23463054467653]
We present a review of recent methods based on machine learning to improve and use OpenStreetMap data.
We believe that OSM can change the way we interpret remote sensing data and that the synergy with machine learning can scale participatory map making.
arXiv Detail & Related papers (2020-07-13T09:58:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.