Visual-based Safe Landing for UAVs in Populated Areas: Real-time
Validation in Virtual Environments
- URL: http://arxiv.org/abs/2203.13792v1
- Date: Fri, 25 Mar 2022 17:22:24 GMT
- Title: Visual-based Safe Landing for UAVs in Populated Areas: Real-time
Validation in Virtual Environments
- Authors: Hector Tovanche-Picon, Javier Gonzalez-Trejo, Angel Flores-Abad and
Diego Mercado-Ravell
- Abstract summary: We propose a framework for real-time safe and thorough evaluation of vision-based autonomous landing in populated scenarios.
We propose to use the Unreal graphics engine coupled with the AirSim plugin for drone simulation.
We study two different criteria for selecting the "best" SLZ, and evaluate them during autonomous landing of a virtual drone in different scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safe autonomous landing for Unmanned Aerial Vehicles (UAVs) in populated
areas is a crucial aspect for successful urban deployment, particularly in
emergency landing situations. Nonetheless, validating autonomous landing in
real scenarios is a challenging task involving a high risk of injuring people.
In this work, we propose a framework for real-time safe and thorough evaluation
of vision-based autonomous landing in populated scenarios, using
photo-realistic virtual environments. We use the Unreal graphics engine
coupled with the AirSim plugin for drone simulation, and evaluate
autonomous landing strategies based on visual detection of Safe Landing Zones
(SLZ) in populated scenarios. Then, we study two different criteria for
selecting the "best" SLZ, and evaluate them during autonomous landing of a
virtual drone in different scenarios and conditions, under different
distributions of people in urban scenes, including moving people. We evaluate
different metrics to quantify the performance of the landing strategies,
establishing a baseline for comparison with future work on this challenging
task, and analyze them over a large number of randomized iterations.
The study suggests that the autonomous landing algorithms considerably help
to prevent accidents involving humans, which may help unleash the full
potential of drones in urban environments near people.
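
As a concrete illustration of how such an evaluation loop can be driven, the following minimal sketch uses AirSim's Python API to capture a downward-facing frame, pick a landing zone, and land. The detector `detect_safe_zones` and both selection criteria are hypothetical placeholders, since the abstract does not specify them, and the camera name depends on the AirSim settings.

```python
# Minimal sketch of one vision-based landing trial through AirSim's Python
# API. `detect_safe_zones` and both selection criteria are hypothetical
# stand-ins; the paper's actual detector and criteria are not given in the
# abstract.
import airsim
import numpy as np

def detect_safe_zones(rgb):
    """Hypothetical detector: returns candidate SLZs as (x, y, radius)
    tuples already mapped into the drone's local NED frame."""
    raise NotImplementedError

def pick_largest(zones):
    # Assumed criterion A: prefer the largest clear area.
    return max(zones, key=lambda z: z[2])

def pick_nearest(zones, pos):
    # Assumed criterion B: prefer the zone closest to the drone.
    return min(zones, key=lambda z: np.hypot(z[0] - pos.x_val, z[1] - pos.y_val))

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

# Grab one frame from a downward-facing camera ("bottom_center" assumes the
# default multirotor camera naming; adjust to your settings.json).
resp = client.simGetImages([airsim.ImageRequest(
    "bottom_center", airsim.ImageType.Scene, False, False)])[0]
rgb = np.frombuffer(resp.image_data_uint8, dtype=np.uint8)
rgb = rgb.reshape(resp.height, resp.width, 3)

zones = detect_safe_zones(rgb)
pos = client.getMultirotorState().kinematics_estimated.position
x, y, _ = pick_nearest(zones, pos)  # or pick_largest(zones)

client.moveToPositionAsync(x, y, pos.z_val, velocity=2.0).join()
client.landAsync().join()
client.armDisarm(False)
```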
Related papers
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Controllable Adversaries (arXiv, 2023-12-31)
We introduce SAFE-SIM, a novel diffusion-based controllable closed-loop safety-critical simulation framework.
We develop a novel approach to simulate safety-critical scenarios through an adversarial term in the denoising process.
We validate our framework empirically using the nuScenes dataset, demonstrating improvements in both realism and controllability.
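
As an illustration of the general mechanism (not SAFE-SIM's released code), a guidance-style denoising step can be sketched as follows: at each reverse-diffusion step the sample is nudged down the gradient of an adversarial cost. `model.denoise` and `adversarial_cost` are hypothetical interfaces.

```python
# Illustrative guidance-style step (not SAFE-SIM's actual code): bias each
# reverse-diffusion step with the gradient of an adversarial cost so the
# sampled traffic scenario drifts toward safety-critical configurations.
import torch

def guided_denoise_step(model, adversarial_cost, x_t, t, scale=0.1):
    x_t = x_t.detach().requires_grad_(True)
    x_prev = model.denoise(x_t, t)      # hypothetical denoiser interface
    cost = adversarial_cost(x_prev)     # low cost = more critical, e.g. ego-adversary distance
    grad, = torch.autograd.grad(cost, x_t)
    return (x_prev - scale * grad).detach()  # descend the adversarial cost
```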
- UniSim: A Neural Closed-Loop Sensor Simulator (arXiv, 2023-08-03)
We present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation.
UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene.
We incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions.
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings (arXiv, 2023-03-08)
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
- SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection (arXiv, 2023-02-02)
Real-time, automated spacecraft feature recognition is needed to pinpoint the locations of collision hazards.
The SpaceYOLO algorithm fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on human-inspired decision processes.
SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments.
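
The fusion pattern described here, a primary detector whose outputs are re-scored by a secondary human-inspired network, can be sketched as a simple cascade. This is not SpaceYOLO's released code; `shape_classifier` is a placeholder for the secondary stage.

```python
# Illustrative cascade: YOLOv5 proposes detections, a second network
# re-scores each crop. Not SpaceYOLO's released code; `shape_classifier`
# is a placeholder.
import torch

yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_features(image, shape_classifier, conf_thresh=0.5):
    results = yolo(image)                  # run YOLOv5 inference
    boxes = results.xyxy[0]                # rows: (x1, y1, x2, y2, conf, cls)
    kept = []
    for x1, y1, x2, y2, conf, cls in boxes.tolist():
        if conf < conf_thresh:
            continue
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        # Secondary, human-inspired decision stage re-scores the crop.
        if shape_classifier(crop):
            kept.append((x1, y1, x2, y2, conf, int(cls)))
    return kept
```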
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles (arXiv, 2021-11-23)
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments (arXiv, 2021-07-19)
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system improves time-cost, the proportion of search area surveyed, and success rates for search and rescue missions.
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles (arXiv, 2021-01-16)
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
- Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion (arXiv, 2020-11-27)
We construct a UAV system equipped with low-cost LiDAR and binocular cameras to realize autonomous landing in non-cooperative environments.
Taking advantage of the non-repetitive scanning and high FOV coverage characteristics of LiDAR, we come up with a dynamic time depth completion algorithm.
Based on the depth map, high-level terrain information such as slope, roughness, and the size of the safe area is derived.
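
Such terrain metrics follow directly from a depth-derived point cloud; a minimal sketch (the paper's exact formulation may differ) fits a plane to each candidate patch and reads slope and roughness off the fit:

```python
# Minimal sketch: derive slope and roughness for a candidate landing patch
# by fitting a plane z = a*x + b*y + c to its 3D points (least squares).
# The paper's exact formulation may differ.
import numpy as np

def terrain_metrics(points):
    """points: (N, 3) array of x, y, z samples inside the patch."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    slope_rad = np.arctan(np.hypot(a, b))   # tilt of the fitted plane
    residuals = points[:, 2] - A @ coeffs
    roughness = residuals.std()             # deviation from the plane
    return np.degrees(slope_rad), roughness
```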
- Adversarial Evaluation of Autonomous Vehicles in Lane-Change Scenarios (arXiv, 2020-04-14)
We propose an adaptive evaluation framework to efficiently evaluate autonomous vehicles in adversarial environments.
Considering the multimodal nature of dangerous scenarios, we use ensemble models to represent different local optima for diversity.
Results show that the adversarial scenarios generated by our method significantly degrade the performance of the tested vehicles.
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving (arXiv, 2020-03-09)
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available nuScenes dataset.
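
The polynomial representation at the heart of PLOP can be sketched compactly: each candidate trajectory is a pair of polynomials x(t), y(t) whose coefficients a network would regress. The degree and coefficient ordering below are assumptions for illustration.

```python
# Sketch of a polynomial trajectory representation: each candidate is a
# pair of polynomials x(t), y(t); a network would regress the coefficients.
import numpy as np

def sample_trajectory(coeffs_x, coeffs_y, horizon=4.0, steps=20):
    """Evaluate x(t), y(t) at evenly spaced times over the horizon.
    Coefficients are ordered low -> high degree (an assumption here)."""
    t = np.linspace(0.0, horizon, steps)
    x = np.polyval(coeffs_x[::-1], t)  # np.polyval wants highest degree first
    y = np.polyval(coeffs_y[::-1], t)
    return np.stack([x, y], axis=1)    # (steps, 2) waypoints

# Example: forward motion with a gentle left curve.
waypoints = sample_trajectory([0.0, 5.0, 0.1], [0.0, 0.0, 0.3])
```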
- Learning to Move with Affordance Maps (arXiv, 2020-01-08)
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
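
The augmentation described here can be sketched as fusing geometric occupancy with a learned per-cell affordance score into one traversal cost map for a standard planner; the weights and shapes below are illustrative, not the paper's values.

```python
# Sketch of the core idea: fuse geometric occupancy with a learned per-cell
# affordance score into a single traversal cost map a planner can consume.
# Weights and shapes are illustrative.
import numpy as np

def fused_cost_map(occupancy, affordance, w_geom=1.0, w_aff=2.0):
    """occupancy: (H, W) grid in {0, 1}; affordance: (H, W) in [0, 1],
    where 1 means the cell was predicted safe to traverse."""
    hazard = 1.0 - affordance                 # learned hazard likelihood
    cost = w_geom * occupancy + w_aff * hazard
    return cost / max(cost.max(), 1e-6)       # normalized cost for the planner
```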
This list is automatically generated from the titles and abstracts of the papers in this site.