Lunar Terrain Relative Navigation Using a Convolutional Neural Network
for Visual Crater Detection
- URL: http://arxiv.org/abs/2007.07702v1
- Date: Wed, 15 Jul 2020 14:19:27 GMT
- Title: Lunar Terrain Relative Navigation Using a Convolutional Neural Network
for Visual Crater Detection
- Authors: Lena M. Downes, Ted J. Steiner, Jonathan P. How
- Abstract summary: This paper presents a system that uses a convolutional neural network (CNN) and image processing methods to track the location of a simulated spacecraft.
The CNN, called LunaNet, visually detects craters in the simulated camera frame and those detections are matched to known lunar craters in the region of the current estimated spacecraft position.
- Score: 39.20073801639923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Terrain relative navigation can improve the precision of a spacecraft's
position estimate by detecting global features that act as supplementary
measurements to correct for drift in the inertial navigation system. This paper
presents a system that uses a convolutional neural network (CNN) and image
processing methods to track the location of a simulated spacecraft with an
extended Kalman filter (EKF). The CNN, called LunaNet, visually detects craters
in the simulated camera frame and those detections are matched to known lunar
craters in the region of the current estimated spacecraft position. These
matched craters are treated as features that are tracked using the EKF. LunaNet
enables more reliable position tracking over a simulated trajectory due to its
greater robustness to changes in image brightness and more repeatable crater
detections from frame to frame throughout a trajectory. LunaNet combined with
an EKF produces a decrease of 60% in the average final position estimation
error and a decrease of 25% in average final velocity estimation error compared
to an EKF using an image processing-based crater detection method when tested
on trajectories using images of standard brightness.
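The crater-as-landmark EKF pipeline described above can be sketched in a few lines. This is a minimal illustration only: the constant-velocity state model, the noise values, and the assumption that a matched crater yields a linear relative-position measurement are simplifications for this sketch, not LunaNet's actual formulation.

```python
import numpy as np

def ekf_predict(x, P, dt, q=1e-3):
    """Constant-velocity prediction step. State: [px, py, vx, vy]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    Q = q * np.eye(4)
    return F @ x, F @ P @ F.T + Q

def ekf_update_crater(x, P, z, crater_pos, r=1e-2):
    """Update with one matched crater. Here z is the measured offset of the
    spacecraft from the catalogued crater position, so the measurement model
    is the linear h(x) = x[:2] - crater_pos (an assumption of this sketch)."""
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0
    R = r * np.eye(2)
    y = z - (x[:2] - crater_pos)       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new
```

In the full system the innovation would instead come from projecting catalogued crater positions through the camera model; the linear measurement here only shows how matched craters act as drift-correcting features in the filter.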
Related papers
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied
Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- Boosting 3-DoF Ground-to-Satellite Camera Localization Accuracy via Geometry-Guided Cross-View Transformer [66.82008165644892]
We propose a method to increase the accuracy of a ground camera's location and orientation by estimating the relative rotation and translation between the ground-level image and its matched/retrieved satellite image.
Experimental results demonstrate that our method significantly outperforms the state-of-the-art.
arXiv Detail & Related papers (2023-07-16T11:52:27Z)
- SU-Net: Pose estimation network for non-cooperative spacecraft on-orbit [8.671030148920009]
Spacecraft pose estimation plays a vital role in many on-orbit space missions, such as rendezvous and docking, debris removal, and on-orbit maintenance.
We analyze the radar image characteristics of on-orbit spacecraft, then propose a new deep learning network, the Dense Residual U-shaped Network (DR-U-Net), to extract image features.
We further introduce a novel neural network based on DR-U-Net, namely Spacecraft U-shaped Network (SU-Net) to achieve end-to-end pose estimation for non-cooperative spacecraft.
arXiv Detail & Related papers (2023-02-21T11:14:01Z)
- Globally Optimal Event-Based Divergence Estimation for Ventral Landing [55.29096494880328]
Event sensing is a major component in bio-inspired flight guidance and control systems.
We explore the usage of event cameras for predicting time-to-contact with the surface during ventral landing.
This is achieved by estimating divergence (inverse TTC), which is the rate of radial optic flow, from the event stream generated during landing.
Our core contributions are a novel contrast maximisation formulation for event-based divergence estimation, and a branch-and-bound algorithm to exactly maximise contrast and find the optimal divergence value.
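The divergence-as-inverse-TTC relation above can be illustrated with a toy least-squares estimate from sparse optic flow. This is a hedged sketch under strong assumptions (pure ventral descent, focus of expansion at the point centroid); it does not reproduce the paper's contrast-maximisation or branch-and-bound machinery.

```python
import numpy as np

def radial_divergence(points, flows):
    """Least-squares estimate of divergence D from sparse optic flow.
    For pure ventral descent, flow ≈ D * (point - FOE); the FOE is taken
    as the point centroid here (an assumption of this sketch). TTC = 1/D."""
    radial = points - points.mean(axis=0)      # offsets from assumed FOE
    # D minimises ||flows - D * radial||^2  =>  D = <flows, radial> / <radial, radial>
    return float(np.sum(flows * radial) / np.sum(radial * radial))

# Synthetic landing scene: surface 2 s from contact => divergence D = 0.5 1/s
pts = np.random.default_rng(0).uniform(-1.0, 1.0, (50, 2))
flows = 0.5 * (pts - pts.mean(axis=0))
D = radial_divergence(pts, flows)
ttc = 1.0 / D                                  # time-to-contact in seconds
```

With noise-free synthetic flow the estimator recovers D = 0.5 exactly, giving a time-to-contact of 2 s; the point of the paper's global optimisation is to obtain such estimates robustly from raw event streams.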
arXiv Detail & Related papers (2022-09-27T06:00:52Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Autonomous crater detection on asteroids using a fully-convolutional neural network [1.3750624267664155]
This paper shows the application of autonomous crater detection on Ceres using U-Net, a fully-convolutional neural network.
The U-Net is trained on optical images of the Moon Global Morphology Mosaic, built from data collected by the LRO, together with manual crater catalogues.
The trained model has been fine-tuned using 100, 500 and 1000 additional images of Ceres.
arXiv Detail & Related papers (2022-04-01T14:34:11Z)
- Lunar Rover Localization Using Craters as Landmarks [7.097834331171584]
We present an approach to crater-based lunar rover localization and report initial results on crater detection using 3D point cloud data from onboard lidar or stereo cameras, as well as shading cues in monocular onboard imagery.
arXiv Detail & Related papers (2022-03-18T17:38:52Z)
- A Novel CNN-based Method for Accurate Ship Detection in HR Optical Remote Sensing Images via Rotated Bounding Box [10.689750889854269]
A novel CNN-based ship detection method is proposed that overcomes several common deficiencies of current CNN-based ship detectors.
We are able to predict the orientation and other variables independently, and yet more effectively, with a novel dual-branch regression network.
Experimental results demonstrate the clear superiority of the proposed method in ship detection.
arXiv Detail & Related papers (2020-04-15T14:48:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences arising from its use.