Detection and Initial Assessment of Lunar Landing Sites Using Neural Networks
- URL: http://arxiv.org/abs/2207.11413v1
- Date: Sat, 23 Jul 2022 04:29:18 GMT
- Title: Detection and Initial Assessment of Lunar Landing Sites Using Neural Networks
- Authors: Daniel Posada, Jarred Jordan, Angelica Radulovic, Lillian Hong, Aryslan Malik, and Troy Henderson
- Abstract summary: This paper will focus on a passive autonomous hazard detection and avoidance sub-system to generate an initial assessment of possible landing regions for the guidance system.
The system uses a single camera and the MobileNetV2 neural network architecture to detect and discern between safe landing sites and hazards such as rocks, shadows, and craters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Robotic and human lunar landings are a focus of future NASA missions.
Precision landing capabilities are vital to guarantee the success of the
mission, and the safety of the lander and crew. During the approach to the
surface there are multiple challenges associated with Hazard Relative
Navigation to ensure safe landings. This paper will focus on a passive
autonomous hazard detection and avoidance sub-system to generate an initial
assessment of possible landing regions for the guidance system. The system uses
a single camera and the MobileNetV2 neural network architecture to detect and
discern between safe landing sites and hazards such as rocks, shadows, and
craters. Then a monocular structure-from-motion pipeline reconstructs the
surface to provide slope and roughness analysis.
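The slope and roughness assessment described above can be sketched as follows. This is a minimal illustration, assuming the structure-from-motion step has already produced a height map (DEM) as a NumPy array; the threshold values are illustrative, not taken from the paper.

```python
import numpy as np

def assess_landing_grid(dem, cell_size=1.0, slope_limit_deg=10.0, roughness_limit=0.1):
    """Flag safe cells in a digital elevation model (DEM).

    dem: 2-D array of surface heights (metres), e.g. recovered by
    monocular structure from motion.
    cell_size: ground distance between adjacent samples (metres).
    Thresholds are hypothetical, for illustration only.
    """
    # Local slope from finite-difference gradients of the height field.
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Roughness as deviation from a 3x3 local mean (edge-replicated box filter).
    padded = np.pad(dem, 1, mode="edge")
    local_mean = sum(
        padded[i:i + dem.shape[0], j:j + dem.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    roughness = np.abs(dem - local_mean)

    safe = (slope_deg < slope_limit_deg) & (roughness < roughness_limit)
    return slope_deg, roughness, safe
```

A flat patch passes both checks, while a 45-degree ramp is rejected on slope alone.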
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- Visual Environment Assessment for Safe Autonomous Quadrotor Landing [8.538463567092297]
We present a novel approach for detection and assessment of potential landing sites for safe quadrotor landing.
Our solution efficiently integrates 2D and 3D environmental information, eliminating the need for external aids such as GPS.
Our approach runs in real-time on quadrotors equipped with limited computational capabilities.
arXiv Detail & Related papers (2023-11-16T18:02:10Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new SpaceYOLO algorithm fuses the state-of-the-art YOLOv5 object detector with a separate neural network based on human-inspired decision processes.
Performance in autonomous spacecraft detection of SpaceYOLO is compared to ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z)
- Deep Monocular Hazard Detection for Safe Small Body Landing [12.922946578413578]
Hazard detection and avoidance is a key technology for future robotic small body sample return and lander missions.
We propose a novel safety mapping paradigm that leverages deep semantic segmentation techniques to predict landing safety directly from a single monocular image.
We demonstrate precise and accurate safety mapping performance on real in-situ imagery of prospective sample sites from the OSIRIS-REx mission.
arXiv Detail & Related papers (2023-01-30T19:40:46Z)
- The State of Aerial Surveillance: A Survey [62.198765910573556]
This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective.
The main object of interest is humans, where single or multiple subjects are to be detected, identified, tracked, re-identified and have their behavior analyzed.
arXiv Detail & Related papers (2022-01-09T20:13:27Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, marks the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
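The scale-drift risk idea can be illustrated with a small sketch. This is an interpretation of the eigen-analysis described above, not the paper's exact formula: a small eigenvalue of the relative translation information matrix marks a weakly constrained translation direction, which suggests the scale is poorly observed.

```python
import numpy as np

def scale_drift_risk(info_matrix):
    """Illustrative heuristic, not the paper's formula.

    info_matrix: symmetric 3x3 information matrix of the relative
    translation. Low information along the weakest direction (smallest
    eigenvalue) is taken to mean high scale-drift risk.
    Returns a value in (0, 1]; higher means riskier.
    """
    eigvals = np.linalg.eigvalsh(info_matrix)  # ascending order
    return 1.0 / (1.0 + eigvals[0])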
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Uncertainty-Aware Deep Learning for Autonomous Safe Landing Site
Selection [3.996275177789895]
This paper proposes an uncertainty-aware learning-based method for hazard detection and landing site selection.
It generates a safety prediction map and its uncertainty map together via Bayesian deep learning and semantic segmentation.
It uses the generated uncertainty map to filter out the uncertain pixels in the prediction map so that the safe landing site selection is performed only based on the certain pixels.
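The filtering step described above can be sketched as follows. This is a minimal illustration, assuming the Bayesian segmentation network has already produced per-pixel safety probabilities and uncertainties as NumPy arrays; the uncertainty threshold is hypothetical.

```python
import numpy as np

def select_landing_site(safety_prob, uncertainty, max_uncertainty=0.2):
    """Pick the pixel with the highest predicted safety among
    low-uncertainty pixels.

    safety_prob, uncertainty: 2-D maps, e.g. the mean and variance from
    Monte Carlo sampling of a Bayesian segmentation network.
    max_uncertainty is an illustrative threshold.
    Returns a (row, col) index, or None if no pixel is trustworthy.
    """
    certain = uncertainty <= max_uncertainty
    if not certain.any():
        return None  # no certain pixel; defer the decision
    # Exclude uncertain pixels, then take the safest remaining one.
    masked = np.where(certain, safety_prob, -np.inf)
    return np.unravel_index(np.argmax(masked), safety_prob.shape)
```

Note that the globally safest-looking pixel is skipped if its uncertainty is too high, which is the point of the filtering step.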
arXiv Detail & Related papers (2021-02-21T08:13:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.