You Only Crash Once: Improved Object Detection for Real-Time,
Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous
Planetary Landings
- URL: http://arxiv.org/abs/2303.04891v1
- Date: Wed, 8 Mar 2023 21:11:51 GMT
- Title: You Only Crash Once: Improved Object Detection for Real-Time,
Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous
Planetary Landings
- Authors: Timothy Chase Jr, Chris Gnam, John Crassidis, Karthik Dantu
- Abstract summary: A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
- Score: 7.201292864036088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The detection of hazardous terrain during the planetary landing of spacecraft
plays a critical role in assuring vehicle safety and mission success. A cheap
and effective way of detecting hazardous terrain is through the use of visual
cameras, which ensure operational ability from atmospheric entry through
touchdown. Plagued by resource constraints and limited computational power,
traditional techniques for visual hazardous terrain detection focus on template
matching and registration to pre-built hazard maps. Although successful on
previous missions, this approach is restricted to the specificity of the
templates and limited by the fidelity of the underlying hazard map, which both
require extensive pre-flight cost and effort to obtain and develop. Terrestrial
systems that perform a similar task in applications such as autonomous driving
utilize state-of-the-art deep learning techniques to successfully localize and
classify navigation hazards. Advancements in spacecraft co-processors aimed at
accelerating deep learning inference enable the application of these methods in
space for the first time. In this work, we introduce You Only Crash Once
(YOCO), a deep learning-based visual hazardous terrain detection and
classification technique for autonomous spacecraft planetary landings. Through
the use of unsupervised domain adaptation we tailor YOCO for training by
simulation, removing the need for real-world annotated data and expensive
mission surveying phases. We further improve the transfer of representative
terrain knowledge between simulation and the real world through visual
similarity clustering. We demonstrate the utility of YOCO through a series of
terrestrial and extraterrestrial simulation-to-real experiments and show
substantial improvements toward the ability to both detect and accurately
classify instances of planetary terrain.
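The abstract names unsupervised domain adaptation as the mechanism that lets YOCO train purely in simulation, but does not spell out the method here. As a hedged illustration only, a common realization is DANN-style adversarial adaptation, where a gradient reversal layer drives a detection backbone toward features a domain discriminator cannot tell apart; everything below (dimensions, class labels) is a minimal sketch, not YOCO's published architecture.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

class DomainDiscriminator(nn.Module):
    """Predicts whether a backbone feature came from simulation or the real
    world. Because gradients are reversed before reaching the backbone, the
    backbone is pushed toward features the discriminator cannot separate,
    i.e. domain-invariant terrain representations."""
    def __init__(self, feat_dim=512, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 2),  # 0 = simulated frame, 1 = real frame
        )

    def forward(self, features):
        return self.net(GradientReversal.apply(features, self.lam))
```

During training, the detection loss (computed only on labeled simulated frames) would be summed with the discriminator's cross-entropy over both domains, so only simulated data needs annotations, matching the abstract's goal of removing real-world labeling.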
Related papers
- Vision-Based Detection of Uncooperative Targets and Components on Small Satellites [6.999319023465766]
Space debris and inactive satellites pose a threat to the safety and integrity of operational spacecraft.
Recent advancements in computer vision models can be used to improve upon existing methods for tracking such uncooperative targets.
This paper introduces an autonomous detection model designed to identify and monitor these objects using machine learning and computer vision.
arXiv Detail & Related papers (2024-08-22T02:48:13Z)
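The blurb above does not name the detector used. Purely as a hypothetical stand-in, an off-the-shelf torchvision detector shows the inference pattern such a monitoring pipeline would follow (the file name and confidence cutoff are illustrative):

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Hypothetical stand-in detector; the paper's actual model is not named above.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = to_tensor(Image.open("rendezvous_frame.png").convert("RGB"))
with torch.no_grad():
    out = model([image])[0]            # dict of boxes, labels, scores

keep = out["scores"] > 0.7             # arbitrary confidence cutoff
tracked_boxes = out["boxes"][keep]     # feed into a downstream tracker
```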
- GARL: Genetic Algorithm-Augmented Reinforcement Learning to Detect Violations in Marker-Based Autonomous Landing Systems [0.7461036096470347]
Traditional offline testing methods miss violation cases caused by dynamic objects like people and animals.
Online testing methods require extensive training time, which is impractical with limited budgets.
We introduce GARL, a framework combining a genetic algorithm (GA) and reinforcement learning (RL) for efficient generation of diverse and realistic landing system failures.
arXiv Detail & Related papers (2023-10-11T10:54:01Z)
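GARL's exact coupling of GA and RL is not detailed in the blurb above; the sketch below shows only the generic pattern, a GA that evolves test scenarios scored by the failures observed when the landing stack flies each one. All names and parameters are hypothetical.

```python
import random

def mutate(scenario, sigma=0.2):
    """Gaussian-perturb continuous scenario parameters
    (e.g., wind speed, marker offset, pedestrian spawn time)."""
    return {k: v + random.gauss(0.0, sigma) for k, v in scenario.items()}

def crossover(a, b):
    """Uniform crossover: each parameter inherited from a random parent."""
    return {k: random.choice((a[k], b[k])) for k in a}

def evolve(population, failure_score, generations=20, elite=4):
    """failure_score(scenario) -> severity of violations observed when the
    landing system under test (RL-driven in GARL) runs the scenario."""
    for _ in range(generations):
        population.sort(key=failure_score, reverse=True)
        parents = population[:elite]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(len(population) - elite)]
        population = parents + children
    return population[:elite]          # most failure-inducing scenarios
```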
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
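To make the intensity-plus-spatial idea above concrete: smoke returns tend to be both low-intensity and spatially diffuse, so a simple two-gate filter is one plausible reading. The thresholds below are illustrative, not the paper's tuned values.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_smoke(points, intensity, min_intensity=0.15,
                 radius=0.5, min_neighbors=3):
    """points: (N, 3) LiDAR returns; intensity: (N,) normalized intensities.
    A point must pass an intensity gate and a local-density gate to survive."""
    pts = points[intensity >= min_intensity]          # intensity gate
    tree = cKDTree(pts)
    counts = tree.query_ball_point(pts, radius, return_length=True)
    return pts[counts - 1 >= min_neighbors]           # spatial-density gate
```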
- Language-Guided 3D Object Detection in Point Cloud for Autonomous Driving [91.91552963872596]
We propose a new multi-modal visual grounding task, termed LiDAR Grounding.
It jointly trains the LiDAR-based object detector with language features and predicts the targeted region directly from the detector.
Our work offers a deeper insight into the LiDAR-based grounding task, and we expect it to present a promising direction for the autonomous driving community.
arXiv Detail & Related papers (2023-05-25T06:22:10Z)
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection [0.0]
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The new SpaceYOLO algorithm fuses the state-of-the-art YOLOv5 object detector with a separate neural network based on human-inspired decision processes.
SpaceYOLO's performance in autonomous spacecraft detection is compared to ordinary YOLOv5 in hardware-in-the-loop experiments.
arXiv Detail & Related papers (2023-02-02T02:11:39Z)
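The fusion pattern described above, where YOLOv5 proposes regions that a second network re-examines, might look like the sketch below. The YOLOv5 hub API is real; the secondary classifier and its class list are placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

# YOLOv5 proposes candidate regions; a second network re-classifies each crop.
yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

secondary = nn.Sequential(              # stand-in "human-inspired" classifier
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                   # e.g., panel / antenna / body / thruster
)

results = yolo("spacecraft.jpg")
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    pass  # crop (x1, y1, x2, y2) from the frame and score it with `secondary`
```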
- Deep Monocular Hazard Detection for Safe Small Body Landing [12.922946578413578]
Hazard detection and avoidance is a key technology for future robotic small body sample return and lander missions.
We propose a novel safety mapping paradigm that leverages deep semantic segmentation techniques to predict landing safety directly from a single monocular image.
We demonstrate precise and accurate safety mapping performance on real in-situ imagery of prospective sample sites from the OSIRIS-REx mission.
arXiv Detail & Related papers (2023-01-30T19:40:46Z)
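The blurb above does not specify the segmentation architecture; as a minimal sketch of the paradigm only, any encoder-decoder that maps one monocular image to per-pixel safe/unsafe logits fits the description. Layer sizes below are arbitrary.

```python
import torch
import torch.nn as nn

class SafetyMapper(nn.Module):
    """Toy encoder-decoder: grayscale image in, per-pixel safety logit out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),  # per-pixel safety logit
        )

    def forward(self, img):                  # img: (B, 1, H, W)
        return self.decoder(self.encoder(img))

safety_logits = SafetyMapper()(torch.rand(1, 1, 256, 256))
safety_map = torch.sigmoid(safety_logits) > 0.5   # boolean safe-pixel map
```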
- Detection and Initial Assessment of Lunar Landing Sites Using Neural Networks [0.0]
This paper focuses on a passive autonomous hazard detection and avoidance sub-system that generates an initial assessment of possible landing regions for the guidance system.
The system uses a single camera and the MobileNetV2 neural network architecture to detect and discern between safe landing sites and hazards such as rocks, shadows, and craters.
arXiv Detail & Related papers (2022-07-23T04:29:18Z)
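Since the entry above does name MobileNetV2 and the hazard classes, a sketch of that setup is straightforward: re-head torchvision's MobileNetV2 to score candidate landing regions. The class count and ordering are assumptions drawn from the blurb.

```python
import torch
from torchvision.models import mobilenet_v2

# Re-head MobileNetV2 for landing-site assessment; classes are illustrative.
model = mobilenet_v2(weights="DEFAULT")
model.classifier[1] = torch.nn.Linear(model.last_channel, 4)  # safe, rock, shadow, crater

model.eval()
patch = torch.rand(1, 3, 224, 224)       # one candidate landing-region crop
with torch.no_grad():
    probs = torch.softmax(model(patch), dim=1)
```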
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method that leverages the large availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when pre-training with our proposed method.
We find that the learnt features generalize between disparate regions, opening up the possibility of reusing the proposed pre-training scheme across regions.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
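The entry above does not state Embedding Earth's exact objective; the standard InfoNCE contrastive loss, shown below, is the usual core of such pre-training, with two augmented views of the same satellite tile as the positive pair.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two views of the same satellite tiles.
    Matching rows are positives; every other row serves as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```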
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which recently landed on Mars, marks the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
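The PCA-of-information-matrix idea above reduces to an eigen-analysis: a weakly constrained direction in the relative translation estimate signals elevated scale-drift risk. The mapping from eigenvalues to a scalar risk below is illustrative, not the paper's formula.

```python
import numpy as np

def scale_drift_risk(info_matrix):
    """info_matrix: 3x3 information (inverse covariance) matrix of the
    relative translation estimate. A small minimum eigenvalue means the
    translation is poorly constrained along that principal direction,
    which for monocular VO suggests a higher risk of scale drift."""
    eigvals = np.linalg.eigvalsh(info_matrix)      # ascending order
    weakest, strongest = eigvals[0], eigvals[-1]
    return 1.0 - weakest / max(strongest, 1e-12)   # 0 = well-conditioned
```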
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools for solving complex robotic tasks.
However, policies trained in simulation often fail to transfer directly to the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed from point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
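One plausible reading of the observation construction described above: sample the cloud to a fixed size and randomize it so the policy never overfits to simulator-perfect sensing. Parameters below are illustrative stand-ins, not the paper's values.

```python
import numpy as np

def make_observation(raw_points, n_points=1024, jitter=0.01, dropout=0.1):
    """Build a fixed-size, randomized point-cloud observation for RL."""
    pts = raw_points[np.random.choice(len(raw_points), n_points, replace=True)]
    keep = np.random.rand(n_points) > dropout          # simulate missing returns
    pts = pts[keep]
    pts += np.random.normal(0.0, jitter, pts.shape)    # sensor-noise jitter
    # re-pad to a fixed size so the policy network sees a constant shape
    pad = pts[np.random.choice(len(pts), n_points - len(pts), replace=True)]
    return np.concatenate([pts, pad], axis=0)
```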
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
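"Augmenting" a traditional planner with a learned affordance map, as the entry above describes, can be as simple as blending the two cost sources. The blend rule below is illustrative, not the paper's.

```python
import numpy as np

def fuse_costmap(geometric_cost, affordance, weight=0.5):
    """geometric_cost: (H, W) cost from SLAM geometry, in [0, 1].
    affordance: (H, W) learned probability that each cell is traversable.
    Cells that look geometrically free but are predicted non-traversable
    (e.g., water, tall grass) still end up expensive to cross."""
    learned_cost = 1.0 - affordance
    fused = (1 - weight) * geometric_cost + weight * learned_cost
    return np.clip(fused, 0.0, 1.0)
```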
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.