ShadowNav: Autonomous Global Localization for Lunar Navigation in Darkness
- URL: http://arxiv.org/abs/2405.01673v3
- Date: Sat, 14 Sep 2024 00:27:16 GMT
- Title: ShadowNav: Autonomous Global Localization for Lunar Navigation in Darkness
- Authors: Deegan Atha, R. Michael Swan, Abhishek Cauligi, Anne Bettens, Edwin Goh, Dima Kogan, Larry Matthies, Masahiro Ono
- Abstract summary: We present ShadowNav, an autonomous approach for global localization on the Moon with an emphasis on driving in darkness and at nighttime.
Our approach uses the leading edges of Lunar craters as landmarks and a particle filter to associate detected craters with known craters in an offboard map.
We demonstrate the efficacy of our proposed approach in both a Lunar simulation environment and on data collected during a field test at Cinder Lakes, Arizona.
- Score: 4.200882007630191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to determine the pose of a rover in an inertial frame autonomously is a crucial capability necessary for the next generation of surface rover missions on other planetary bodies. Currently, most ongoing rover missions utilize ground-in-the-loop interventions to manually correct for drift in the pose estimate, and this human supervision bottlenecks the distance over which rovers can operate autonomously and carry out scientific measurements. In this paper, we present ShadowNav, an autonomous approach for global localization on the Moon with an emphasis on driving in darkness and at nighttime. Our approach uses the leading edges of Lunar craters as landmarks and a particle filter to associate detected craters with known craters in an offboard map. We discuss the key design decisions in developing the ShadowNav framework for use with a Lunar rover concept equipped with a stereo camera and an external illumination source. Finally, we demonstrate the efficacy of our proposed approach in both a Lunar simulation environment and on data collected during a field test at Cinder Lakes, Arizona.
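The abstract's core loop, craters as landmarks plus a particle filter for associating detections with an offboard map, can be sketched in a few functions. This is a minimal illustrative sketch only, not the paper's implementation: the 2D pose state, range-bearing crater measurements, nearest-neighbor data association, and all noise parameters are assumptions for the example.

```python
# Minimal particle-filter sketch of crater-landmark localization.
# State: 2D pose (x, y, heading). Measurements: (range, bearing) to craters.
# All function names and noise parameters here are illustrative assumptions.
import math
import random

def predict(particles, d, dtheta, sigma_d=0.05, sigma_t=0.01):
    """Propagate each particle through an odometry step with added noise."""
    out = []
    for x, y, th in particles:
        th2 = th + dtheta + random.gauss(0, sigma_t)
        step = d + random.gauss(0, sigma_d)
        out.append((x + step * math.cos(th2), y + step * math.sin(th2), th2))
    return out

def weight(particle, detections, crater_map, sigma_r=0.5):
    """Score a particle by how well its predicted crater positions match the map."""
    x, y, th = particle
    w = 1.0
    for r_meas, bearing in detections:
        # Project the detected crater into the world frame for this particle.
        cx = x + r_meas * math.cos(th + bearing)
        cy = y + r_meas * math.sin(th + bearing)
        # Data association: distance to the nearest known crater in the map.
        d = min(math.hypot(cx - mx, cy - my) for mx, my in crater_map)
        w *= math.exp(-0.5 * (d / sigma_r) ** 2)
    return w

def resample(particles, weights):
    """Draw a new particle set proportional to weight (with replacement)."""
    total = sum(weights) or 1.0
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

def estimate(particles):
    """Mean position over the particle set."""
    n = len(particles)
    return (sum(p[0] for p in particles) / n, sum(p[1] for p in particles) / n)
```

A single measurement update concentrates the particle cloud around poses whose projected crater detections land near mapped craters; in practice the predict/weight/resample cycle repeats every drive step.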
Related papers
- Explainable Convolutional Networks for Crater Detection and Lunar Landing Navigation [3.1748489631597887]
This paper aims to provide transparent and understandable predictions for intelligent lunar landing.
Attention-based Darknet53 is proposed as the feature extraction structure.
For crater detection and navigation tasks, attention-based YOLOv3 and attention-Darknet53-LSTM are presented.
arXiv Detail & Related papers (2024-08-24T14:17:30Z) - A Bionic Data-driven Approach for Long-distance Underwater Navigation with Anomaly Resistance [59.21686775951903]
Various animals exhibit accurate navigation using environmental cues.
Inspired by animal navigation, this work proposes a bionic and data-driven approach for long-distance underwater navigation.
The proposed approach uses measured geomagnetic data for the navigation, and requires no GPS systems or geographical maps.
arXiv Detail & Related papers (2024-02-06T13:20:56Z) - An Autonomous Vision-Based Algorithm for Interplanetary Navigation [0.0]
The vision-based navigation algorithm is built by combining an orbit determination method with an image processing pipeline.
A novel analytical measurement model is developed providing a first-order approximation of the light-aberration and light-time effects.
Algorithm performance is tested on a high-fidelity Earth-Mars interplanetary transfer.
arXiv Detail & Related papers (2023-09-18T08:54:29Z) - Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z) - On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z) - ShadowNav: Crater-Based Localization for Nighttime and Permanently Shadowed Region Lunar Navigation [4.521278242509125]
We present a method of absolute localization that utilizes craters as landmarks and matches detected crater edges on the surface with known craters in orbital maps.
We demonstrate that this technique shows promise for maintaining the absolute localization error below the 10 m required for most planetary rover missions.
arXiv Detail & Related papers (2023-01-11T18:35:31Z) - LunarNav: Crater-based Localization for Long-range Autonomous Lunar Rover Navigation [8.336210810008282]
The Artemis program requires robotic and crewed lunar rovers for resource prospecting and exploitation.
The LunarNav project aims to enable lunar rovers to estimate their global position and heading on the Moon with a goal performance of position error less than 5 meters (m).
This will be achieved autonomously onboard by detecting craters in the vicinity of the rover and matching them to a database of known craters mapped from orbit.
arXiv Detail & Related papers (2023-01-03T20:46:27Z) - Lunar Rover Localization Using Craters as Landmarks [7.097834331171584]
We present an approach to crater-based lunar rover localization and initial results on crater detection using 3D point cloud data from onboard lidar or stereo cameras, as well as shading cues in monocular onboard imagery.
arXiv Detail & Related papers (2022-03-18T17:38:52Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.