MarsLGPR: Mars Rover Localization with Ground Penetrating Radar
- URL: http://arxiv.org/abs/2503.04944v1
- Date: Thu, 06 Mar 2025 20:19:21 GMT
- Title: MarsLGPR: Mars Rover Localization with Ground Penetrating Radar
- Authors: Anja Sheppard, Katherine A. Skinner
- Abstract summary: We propose the use of Ground Penetrating Radar (GPR) for rover localization on Mars. We develop a novel GPR-based deep learning model that predicts 1D relative pose translation. We perform experiments in a Mars analog environment and demonstrate that our GPR-based displacement predictions both outperform wheel encoders and improve multi-modal filtering estimates in high-slip environments.
- Score: 2.1843439591862333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose the use of Ground Penetrating Radar (GPR) for rover localization on Mars. Precise pose estimation is an important task for mobile robots exploring planetary surfaces, as they operate in GPS-denied environments. Although visual odometry provides accurate localization, it is computationally expensive and can fail in dim or high-contrast lighting. Wheel encoders can also provide odometry estimation, but are prone to slipping on the sandy terrain encountered on Mars. Although traditionally a scientific surveying sensor, GPR has been used on Earth for terrain classification and localization through subsurface feature matching. The Perseverance rover and the upcoming ExoMars rover have GPR sensors already equipped to aid in the search of water and mineral resources. We propose to leverage GPR to aid in Mars rover localization. Specifically, we develop a novel GPR-based deep learning model that predicts 1D relative pose translation. We fuse our GPR pose prediction method with inertial and wheel encoder data in a filtering framework to output rover localization. We perform experiments in a Mars analog environment and demonstrate that our GPR-based displacement predictions both outperform wheel encoders and improve multi-modal filtering estimates in high-slip environments. Lastly, we present the first dataset aimed at GPR-based localization in Mars analog environments, which will be made publicly available upon publication.
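The abstract describes fusing the GPR-based 1D displacement predictions with wheel encoder and inertial data in a filtering framework, but the filter itself is not detailed here. Below is a minimal, hypothetical sketch (not the authors' implementation) of the core idea: combining a slip-prone wheel-encoder step estimate with a GPR-predicted step by inverse-variance weighting, which is what a one-dimensional Kalman update reduces to. All function names and noise values are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's code: fuse per-step 1D displacements from
# wheel encoders (slip-prone, assumed higher variance) with displacements predicted
# by a learned GPR model (assumed lower variance) via inverse-variance weighting,
# i.e. the scalar Kalman-update special case. Variances are illustrative only.

def fuse_step(d_enc, d_gpr, var_enc=0.04, var_gpr=0.01):
    """Fuse two estimates of the same step displacement (meters)."""
    w_enc, w_gpr = 1.0 / var_enc, 1.0 / var_gpr
    return (w_enc * d_enc + w_gpr * d_gpr) / (w_enc + w_gpr)

def integrate(enc_steps, gpr_steps, var_enc=0.04, var_gpr=0.01):
    """Accumulate fused steps into a 1D along-track position estimate."""
    position, track = 0.0, []
    for d_enc, d_gpr in zip(enc_steps, gpr_steps):
        position += fuse_step(d_enc, d_gpr, var_enc, var_gpr)
        track.append(position)
    return track

# Toy example: encoders over-report motion due to wheel slip on sand,
# while the GPR-based predictions stay closer to the true displacement.
enc = [0.55, 0.60, 0.58]   # meters per rover step (optimistic, slipping wheels)
gpr = [0.48, 0.50, 0.49]   # meters per rover step (learned model output)
print(integrate(enc, gpr))
```

With the smaller assumed variance, the GPR term dominates the fused estimate in high-slip segments; a full system would also propagate inertial measurements and tune the variances per terrain.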
Related papers
- Unified Human Localization and Trajectory Prediction with Monocular Vision [64.19384064365431]
MonoTransmotion is a Transformer-based framework that uses only a monocular camera to jointly solve localization and prediction tasks.
We show that by jointly training both tasks with our unified framework, our method is more robust in real-world scenarios with noisy inputs.
arXiv Detail & Related papers (2025-03-05T14:18:39Z)
- Vision-based Geo-Localization of Future Mars Rotorcraft in Challenging Illumination Conditions [4.6901215692204286]
Geo-LoFTR is a geometry-aided deep learning model for image registration that is more robust under large illumination differences than prior models.
We show that our proposed system outperforms prior MbL efforts in terms of localization accuracy under significant lighting and scale variations.
arXiv Detail & Related papers (2025-02-13T22:10:21Z)
- Investigating the Capabilities of Deep Learning for Processing and Interpreting One-Shot Multi-offset GPR Data: A Numerical Case Study for Lunar and Martian Environments [9.150932930653921]
Ground-penetrating radar (GPR) is a mature geophysical method that has gained increasing popularity in planetary science over the past decade.
GPR has been utilised for both Lunar and Martian missions, providing pivotal information regarding the near-surface geology of terrestrial planets.
This paper investigates the potential of deep learning for interpreting and processing GPR data.
arXiv Detail & Related papers (2024-10-18T11:38:29Z)
- Energy-Based Models for Cross-Modal Localization using Convolutional Transformers [52.27061799824835]
We present a novel framework for localizing a ground vehicle mounted with a range sensor against satellite imagery in the absence of GPS.
We propose a method using convolutional transformers that performs accurate metric-level localization in a cross-modal manner.
We train our model end-to-end and demonstrate that our approach achieves higher accuracy than the state of the art on KITTI, Pandaset, and a custom dataset.
arXiv Detail & Related papers (2023-06-06T21:27:08Z)
- Environmental Sensor Placement with Convolutional Gaussian Neural Processes [65.13973319334625]
It is challenging to place sensors in a way that maximises the informativeness of their measurements, particularly in remote regions like Antarctica.
Probabilistic machine learning models can suggest informative sensor placements by finding sites that maximally reduce prediction uncertainty.
This paper proposes using a convolutional Gaussian neural process (ConvGNP) to address these issues.
arXiv Detail & Related papers (2022-11-18T17:25:14Z)
- LaMAR: Benchmarking Localization and Mapping for Augmented Reality [80.23361950062302]
We introduce LaMAR, a new benchmark with a comprehensive capture and GT pipeline that co-registers realistic trajectories and sensor streams captured by heterogeneous AR devices.
We publish a benchmark dataset of diverse and large-scale scenes recorded with head-mounted and hand-held AR devices.
arXiv Detail & Related papers (2022-10-19T17:58:17Z)
- Mars Rover Localization Based on A2G Obstacle Distribution Pattern Matching [0.0]
In NASA's Mars 2020 mission, the Ingenuity helicopter is carried together with the rover.
Traditional image matching methods struggle to obtain valid image correspondences in this setting.
An algorithm combining image-based rock detection and rock distribution pattern matching is used to acquire A2G imagery correspondence.
arXiv Detail & Related papers (2022-10-07T08:29:48Z)
- Visual Cross-View Metric Localization with Dense Uncertainty Estimates [11.76638109321532]
This work addresses visual cross-view metric localization for outdoor robotics.
Given a ground-level color image and a satellite patch that contains the local surroundings, the task is to identify the location of the ground camera within the satellite patch.
We devise a novel network architecture with denser satellite descriptors, similarity matching at the bottleneck, and a dense spatial distribution as output to capture multi-modal localization ambiguities.
arXiv Detail & Related papers (2022-08-17T20:12:23Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Robotic Inspection and 3D GPR-based Reconstruction for Underground Utilities [11.601407791322327]
Ground Penetrating Radar (GPR) is an effective non-destructive evaluation (NDE) device for inspecting and surveying subsurface objects.
The current practice for GPR data collection requires a human inspector to move a GPR cart along pre-marked grid lines.
This paper presents a novel robotic system that collects and interprets GPR data, localizes underground utilities, and reconstructs and visualizes a dense point cloud model of the underground objects.
arXiv Detail & Related papers (2021-06-03T14:58:49Z)
- Rover Relocalization for Mars Sample Return by Virtual Template Synthesis and Matching [48.0956967976633]
We consider the problem of rover relocalization in the context of the notional Mars Sample Return campaign.
In this campaign, a rover (R1) needs to be capable of autonomously navigating and localizing itself within an area of approximately 50 x 50 m.
We propose a visual localizer that exhibits robustness to the relatively barren terrain that we expect to find in relevant areas.
arXiv Detail & Related papers (2021-03-05T00:18:33Z)