Deep Monocular Hazard Detection for Safe Small Body Landing
- URL: http://arxiv.org/abs/2301.13254v1
- Date: Mon, 30 Jan 2023 19:40:46 GMT
- Title: Deep Monocular Hazard Detection for Safe Small Body Landing
- Authors: Travis Driver, Kento Tomita, Koki Ho, Panagiotis Tsiotras
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hazard detection and avoidance is a key technology for future robotic small
body sample return and lander missions. Current state-of-the-practice methods
rely on high-fidelity, a priori terrain maps, which require extensive
human-in-the-loop verification and expensive reconnaissance campaigns to
resolve mapping uncertainties. We propose a novel safety mapping paradigm that
leverages deep semantic segmentation techniques to predict landing safety
directly from a single monocular image, thus reducing reliance on
high-fidelity, a priori data products. We demonstrate precise and accurate
safety mapping performance on real in-situ imagery of prospective sample sites
from the OSIRIS-REx mission.
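The core idea above — predicting per-pixel landing safety directly from one monocular image with a semantic segmentation network — can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the segmentation model is replaced by a toy stand-in (`toy_model`), and the function name, threshold, and shapes are all assumptions for the example.

```python
import numpy as np

def safety_map(image: np.ndarray, predict_safe_prob, threshold: float = 0.5) -> np.ndarray:
    """Binary safety map from a single monocular image.

    `predict_safe_prob` stands in for a trained semantic segmentation
    network: it maps an (H, W) image to per-pixel probabilities that
    the corresponding surface patch is safe to land on.
    """
    probs = predict_safe_prob(image)  # (H, W) values in [0, 1]
    return probs >= threshold         # True = predicted safe

def toy_model(image: np.ndarray) -> np.ndarray:
    """Crude stand-in: treat low-slope (smooth) terrain as safe.

    A real model would be learned; this heuristic only exists to make
    the sketch runnable and misjudges isolated peaks.
    """
    gy, gx = np.gradient(image.astype(float))
    slope = np.hypot(gx, gy)
    return 1.0 - np.clip(slope / (slope.max() + 1e-9), 0.0, 1.0)

terrain = np.zeros((8, 8))
terrain[4, 4] = 1.0  # boulder-like spike in otherwise flat terrain
mask = safety_map(terrain, toy_model)
```

Flat pixels far from the spike come out safe, while the steep pixels adjacent to it are flagged unsafe; in the paper this per-pixel decision is made by a learned network rather than a slope heuristic.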
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotations.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z)
- USC: Uncompromising Spatial Constraints for Safety-Oriented 3D Object Detectors in Autonomous Driving [7.355977594790584]
We consider the safety-oriented performance of 3D object detectors in autonomous driving contexts.
We present uncompromising spatial constraints (USC), which characterize a simple yet important localization requirement.
We incorporate the quantitative measures into common loss functions to enable safety-oriented fine-tuning for existing models.
arXiv Detail & Related papers (2022-09-21T14:03:08Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Detection and Initial Assessment of Lunar Landing Sites Using Neural Networks [0.0]
This paper will focus on a passive autonomous hazard detection and avoidance sub-system to generate an initial assessment of possible landing regions for the guidance system.
The system uses a single camera and the MobileNetV2 neural network architecture to detect and discern between safe landing sites and hazards such as rocks, shadows, and craters.
arXiv Detail & Related papers (2022-07-23T04:29:18Z)
- Robust and Precise Facial Landmark Detection by Self-Calibrated Pose Attention Network [73.56802915291917]
We propose a semi-supervised framework to achieve more robust and precise facial landmark detection.
A Boundary-Aware Landmark Intensity (BALI) field is proposed to model more effective facial shape constraints.
A Self-Calibrated Pose Attention (SCPA) model is designed to provide a self-learned objective function that enforces intermediate supervision.
arXiv Detail & Related papers (2021-12-23T02:51:08Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Detect and Locate: A Face Anti-Manipulation Approach with Semantic and Noise-level Supervision [67.73180660609844]
We propose a conceptually simple but effective method to efficiently detect forged faces in an image.
The proposed scheme relies on a segmentation map that delivers meaningful high-level semantic information clues about the image.
The proposed model achieves state-of-the-art detection accuracy and remarkable localization performance.
arXiv Detail & Related papers (2021-07-13T02:59:31Z)
- MODS -- A USV-oriented object detection and obstacle segmentation benchmark [12.356257470551348]
We introduce a new obstacle detection benchmark MODS, which considers two major perception tasks: maritime object detection and the more general maritime obstacle segmentation.
We present a new diverse maritime evaluation dataset containing approximately 81k stereo images synchronized with an on-board IMU, with over 60k objects annotated.
We propose a new obstacle segmentation performance evaluation protocol that reflects the detection accuracy in a way meaningful for practical USV navigation.
arXiv Detail & Related papers (2021-05-05T22:40:27Z)
- Uncertainty-Aware Deep Learning for Autonomous Safe Landing Site Selection [3.996275177789895]
This paper proposes an uncertainty-aware learning-based method for hazard detection and landing site selection.
It generates a safety prediction map and its uncertainty map together via Bayesian deep learning and semantic segmentation.
It uses the generated uncertainty map to filter out the uncertain pixels in the prediction map so that the safe landing site selection is performed only based on the certain pixels.
arXiv Detail & Related papers (2021-02-21T08:13:49Z)
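The uncertainty-aware selection step described in the last entry — keep only pixels whose prediction is certain, then choose a landing site among them — can be sketched as below. This is a hedged illustration of that filtering idea only: the (T, H, W) Monte Carlo sample stack, the standard-deviation uncertainty measure, and the threshold are assumptions standing in for the paper's Bayesian deep learning formulation.

```python
import numpy as np

def select_landing_site(mc_samples: np.ndarray, unc_threshold: float):
    """Pick a landing pixel from Monte Carlo safety predictions.

    mc_samples: (T, H, W) stack of per-pixel safe probabilities from T
    stochastic forward passes (e.g. MC dropout). Returns the (row, col)
    of the safest pixel among the certain ones, or None if no pixel is
    predicted with sufficient certainty.
    """
    safety = mc_samples.mean(axis=0)        # safety prediction map
    uncertainty = mc_samples.std(axis=0)    # uncertainty map
    certain = uncertainty <= unc_threshold  # filter out uncertain pixels
    if not certain.any():
        return None
    masked = np.where(certain, safety, -np.inf)
    return np.unravel_index(np.argmax(masked), safety.shape)

rng = np.random.default_rng(0)
samples = rng.uniform(0.4, 0.6, size=(10, 4, 4))  # ambiguous terrain
samples[:, 1, 2] = 0.95                           # consistently safe pixel
site = select_landing_site(samples, unc_threshold=0.1)  # -> (1, 2)
```

Returning `None` when every pixel is uncertain mirrors the motivation for the uncertainty map: rather than committing to a dubious prediction, the system can defer (e.g. trigger a retargeting or abort decision).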
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.