ShadowNav: Crater-Based Localization for Nighttime and Permanently
Shadowed Region Lunar Navigation
- URL: http://arxiv.org/abs/2301.04630v1
- Date: Wed, 11 Jan 2023 18:35:31 GMT
- Title: ShadowNav: Crater-Based Localization for Nighttime and Permanently
Shadowed Region Lunar Navigation
- Authors: Abhishek Cauligi and R. Michael Swan and Hiro Ono and Shreyansh Daftry
and John Elliott and Larry Matthies and Deegan Atha
- Abstract summary: We present a method of absolute localization that utilizes craters as landmarks and matches detected crater edges on the surface with known craters in orbital maps.
We demonstrate that this technique shows promise for maintaining absolute localization error of less than 10m required for most planetary rover missions.
- Score: 4.521278242509125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been increasing interest in missions that drive
significantly longer distances per day than has previously been achieved. Further, some
of these proposed missions require autonomous driving and absolute localization
in darkness. For example, the Endurance-A mission proposes to drive 1200 km of
its total traverse at night. The lack of natural light available during such
missions limits what can be used as visual landmarks and the range at which
landmarks can be observed. In order for planetary rovers to traverse long
ranges, onboard absolute localization is critical to the ability of the rover
to maintain its planned trajectory and avoid known hazardous regions.
Currently, to accomplish absolute localization, a ground-in-the-loop (GITL)
operation is performed wherein a human operator matches local maps or images
from onboard with orbital images and maps. This GITL operation limits the
distance that can be driven in a day to a few hundred meters, which is the
distance that the rover can maintain acceptable localization error via relative
methods. Previous work has shown that using craters as landmarks is a promising
approach for performing absolute localization on the Moon during the day. In
this work we present a method of absolute localization that utilizes craters as
landmarks and matches detected crater edges on the surface with known craters
in orbital maps. We focus on a localization method based on a perception system
which has an external illuminator and a stereo camera. We evaluate (1) both
monocular and stereo-based surface crater edge detection techniques, (2)
methods of scoring the crater edge matches for optimal localization, and (3)
localization performance on simulated Lunar surface imagery at night. We
demonstrate that this technique shows promise for maintaining absolute
localization error of less than 10m required for most planetary rover missions.
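The crater-matching idea above can be sketched as a simple template: apply candidate position offsets to detected crater locations, score each candidate by how well the shifted detections agree with craters in the orbital map, and keep the best-scoring offset. The following is a minimal illustrative sketch, not the paper's implementation; the grid search, gating threshold, and scoring function are all simplifying assumptions.

```python
import numpy as np

def match_score(detected, orbital_map, offset, gate=5.0):
    """Score a candidate offset: sum of gated nearest-neighbor agreement.

    `detected` and `orbital_map` are (N, 2) arrays of crater centers;
    detections closer than `gate` meters to a mapped crater score higher.
    """
    shifted = detected + offset  # move detections into the map frame
    score = 0.0
    for c in shifted:
        d = np.min(np.linalg.norm(orbital_map - c, axis=1))
        if d < gate:              # count only plausible associations
            score += gate - d     # closer matches contribute more
    return score

def localize(detected, orbital_map, search=50.0, step=1.0):
    """Brute-force grid search over 2D offsets; return (best_offset, score)."""
    best = (None, -np.inf)
    for dx in np.arange(-search, search + step, step):
        for dy in np.arange(-search, search + step, step):
            off = np.array([dx, dy])
            s = match_score(detected, orbital_map, off)
            if s > best[1]:
                best = (off, s)
    return best

# Toy example: detections are the mapped craters shifted by a known offset.
orbital = np.array([[0.0, 0.0], [30.0, 10.0], [15.0, 40.0]])
true_offset = np.array([12.0, -7.0])
detections = orbital - true_offset
offset, score = localize(detections, orbital)  # recovers (12, -7)
```

A real system would replace the grid search with a pose estimator over full position and heading and score edge points rather than centers, but the structure (associate, score, select) is the same.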
Related papers
- Mahalanobis Distance-based Multi-view Optimal Transport for Multi-view Crowd Localization [50.69184586442379]
We propose a novel Mahalanobis distance-based multi-view optimal transport loss specifically designed for multi-view crowd localization.
Experiments demonstrate the advantage of the proposed method over density map-based or common Euclidean distance-based optimal transport loss.
arXiv Detail & Related papers (2024-09-03T09:10:51Z)
- ShadowNav: Autonomous Global Localization for Lunar Navigation in Darkness [4.200882007630191]
We present ShadowNav, an autonomous approach for global localization on the Moon with an emphasis on driving in darkness and at nighttime.
Our approach uses the leading edge of Lunar craters as landmarks and a particle filtering approach is used to associate detected craters with known ones on an offboard map.
We demonstrate the efficacy of our proposed approach in both a Lunar simulation environment and on data collected during a field test at Cinder Lakes, Arizona.
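The particle-filter association described above can be illustrated with a minimal sketch: particles represent candidate rover positions, and each particle is reweighted by how well a mapped crater explains an observed rover-to-crater vector. This is an illustrative toy, not the ShadowNav implementation; the single-crater association, noise levels, and Gaussian likelihood are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

crater_map = np.array([[0.0, 0.0], [40.0, 25.0]])  # known crater centers (map frame)
true_pos = np.array([10.0, 5.0])                   # ground-truth rover position

# Observation: noisy vector from the rover to the nearest mapped crater.
obs = crater_map[0] - true_pos + rng.normal(0.0, 0.5, 2)

# Particles: uniform candidate rover positions with equal initial weights.
particles = rng.uniform(-20.0, 20.0, size=(2000, 2))
weights = np.ones(2000) / 2000.0

# Weight update: compare each particle's predicted crater vector to the
# observation under a Gaussian likelihood (association simplified to one crater).
pred = crater_map[0] - particles
err = np.linalg.norm(pred - obs, axis=1)
weights *= np.exp(-0.5 * (err / 2.0) ** 2)
weights /= weights.sum()

estimate = weights @ particles  # weighted-mean position estimate
```

In a full filter this update would run per detection with data association over all mapped craters, followed by resampling and a motion model between steps.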
arXiv Detail & Related papers (2024-05-02T18:59:53Z)
- Boosting 3-DoF Ground-to-Satellite Camera Localization Accuracy via Geometry-Guided Cross-View Transformer [66.82008165644892]
We propose a method to increase the accuracy of a ground camera's location and orientation by estimating the relative rotation and translation between the ground-level image and its matched/retrieved satellite image.
Experimental results demonstrate that our method significantly outperforms the state-of-the-art.
arXiv Detail & Related papers (2023-07-16T11:52:27Z)
- An Image Processing Pipeline for Autonomous Deep-Space Optical Navigation [0.0]
This paper proposes an innovative pipeline for unresolved beacon recognition and line-of-sight extraction from images for autonomous interplanetary navigation.
The developed algorithm exploits the k-vector method for the non-stellar object identification and statistical likelihood to detect whether any beacon projection is visible in the image.
arXiv Detail & Related papers (2023-02-14T09:06:21Z)
- LunarNav: Crater-based Localization for Long-range Autonomous Lunar Rover Navigation [8.336210810008282]
The Artemis program requires robotic and crewed lunar rovers for resource prospecting and exploitation.
The LunarNav project aims to enable lunar rovers to estimate their global position and heading on the Moon with a goal position error of less than 5 meters (m).
This will be achieved autonomously onboard by detecting craters in the vicinity of the rover and matching them to a database of known craters mapped from orbit.
arXiv Detail & Related papers (2023-01-03T20:46:27Z)
- Visual Cross-View Metric Localization with Dense Uncertainty Estimates [11.76638109321532]
This work addresses visual cross-view metric localization for outdoor robotics.
Given a ground-level color image and a satellite patch that contains the local surroundings, the task is to identify the location of the ground camera within the satellite patch.
We devise a novel network architecture with denser satellite descriptors, similarity matching at the bottleneck, and a dense spatial distribution as output to capture multi-modal localization ambiguities.
arXiv Detail & Related papers (2022-08-17T20:12:23Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization up to a satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Lunar Rover Localization Using Craters as Landmarks [7.097834331171584]
We present an approach to crater-based lunar rover localization and report initial results on crater detection using 3D point cloud data from onboard lidar or stereo cameras, as well as shading cues in monocular onboard imagery.
arXiv Detail & Related papers (2022-03-18T17:38:52Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, marks the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Learning to Localize Using a LiDAR Intensity Map [87.04427452634445]
We propose a real-time, calibration-agnostic and effective localization system for self-driving cars.
Our method learns to embed the online LiDAR sweeps and intensity map into a joint deep embedding space.
Our full system can operate in real-time at 15Hz while achieving centimeter level accuracy across different LiDAR sensors and environments.
arXiv Detail & Related papers (2020-12-20T11:56:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.