Visual-based Safe Landing for UAVs in Populated Areas: Real-time
Validation in Virtual Environments
- URL: http://arxiv.org/abs/2203.13792v1
- Date: Fri, 25 Mar 2022 17:22:24 GMT
- Title: Visual-based Safe Landing for UAVs in Populated Areas: Real-time
Validation in Virtual Environments
- Authors: Hector Tovanche-Picon, Javier Gonzalez-Trejo, Angel Flores-Abad and
Diego Mercado-Ravell
- Abstract summary: We propose a framework for real-time safe and thorough evaluation of vision-based autonomous landing in populated scenarios.
We propose to use the Unreal graphics engine coupled with the AirSim plugin for drone simulation.
We study two different criteria for selecting the "best" SLZ, and evaluate them during autonomous landing of a virtual drone in different scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safe autonomous landing for Unmanned Aerial Vehicles (UAVs) in populated
areas is a crucial aspect for successful urban deployment, particularly in
emergency landing situations. Nonetheless, validating autonomous landing in
real scenarios is a challenging task involving a high risk of injuring people.
In this work, we propose a framework for real-time safe and thorough evaluation
of vision-based autonomous landing in populated scenarios, using
photo-realistic virtual environments. We propose to use the Unreal graphics
engine coupled with the AirSim plugin for drone simulation, and evaluate
autonomous landing strategies based on visual detection of Safe Landing Zones
(SLZ) in populated scenarios. Then, we study two different criteria for
selecting the "best" SLZ, and evaluate them during autonomous landing of a
virtual drone in different scenarios and conditions, under different
distributions of people in urban scenes, including moving people. We evaluate
different metrics to quantify the performance of the landing strategies,
establishing a baseline for comparison with future works in this challenging
task, and analyze them over a large number of randomized iterations. The study
suggests that the use of autonomous landing algorithms considerably helps to
prevent accidents involving humans, which may help unleash the full potential
of drones in urban environments near people.
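
As a rough illustration of how such an evaluation loop can be wired together, the sketch below uses AirSim's public Python API to take off, capture a frame from the downward-facing camera, and pick a tentative landing point. It is a minimal sketch under stated assumptions: an Unreal/AirSim instance running locally with a "bottom_center" camera, a hypothetical `detect_people_mask` stub standing in for the paper's visual detector, and a distance-transform criterion chosen purely for illustration (the abstract does not detail the paper's two SLZ-selection criteria).

```python
import airsim
import numpy as np
from scipy.ndimage import distance_transform_edt


def detect_people_mask(rgb):
    """Hypothetical stand-in for the paper's people detector:
    marks a dummy blob near the image centre as occupied by people."""
    mask = np.zeros(rgb.shape[:2], dtype=bool)
    h, w = mask.shape
    mask[h // 2 - 10:h // 2 + 10, w // 2 - 10:w // 2 + 10] = True
    return mask


client = airsim.MultirotorClient()   # assumes an Unreal/AirSim instance on localhost
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()
client.moveToPositionAsync(0, 0, -20, 3).join()   # NED frame: z = -20 m is 20 m altitude

# Request one uncompressed frame from the downward-facing camera.
response = client.simGetImages([
    airsim.ImageRequest("bottom_center", airsim.ImageType.Scene, False, False)
])[0]
rgb = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
rgb = rgb.reshape(response.height, response.width, -1)[:, :, :3]  # keep 3 channels

# One plausible SLZ criterion (illustration only, not necessarily one of the
# paper's two): choose the pixel farthest from any detected person, i.e. the
# maximum of the distance transform of the people-free region.
free = ~detect_people_mask(rgb)
clearance = distance_transform_edt(free)
row, col = np.unravel_index(np.argmax(clearance), clearance.shape)
print(f"Tentative SLZ at pixel ({row}, {col}), clearance {clearance[row, col]:.1f} px")

# A full pipeline would project (row, col) onto the ground plane using the camera
# intrinsics and current altitude before descending; here we simply land in place.
client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```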
Related papers
- Risk Assessment for Autonomous Landing in Urban Environments using Semantic Segmentation [0.0]
We propose employing SegFormer, a state-of-the-art vision transformer network, for semantic segmentation of urban environments.
The proposed strategy is validated through several case studies.
We believe this will help unleash the full potential of UAVs in civil applications within urban areas.
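
For context only, a minimal sketch of running an off-the-shelf SegFormer checkpoint from the Hugging Face transformers library on a single aerial image is shown below; the checkpoint name and the image path are illustrative assumptions, and this is not the paper's risk-assessment pipeline.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Off-the-shelf ADE20K-finetuned checkpoint (ADE20K includes a "person" class);
# an illustrative choice, not the checkpoint used in the paper.
ckpt = "nvidia/segformer-b0-finetuned-ade-512-512"
processor = SegformerImageProcessor.from_pretrained(ckpt)
model = SegformerForSemanticSegmentation.from_pretrained(ckpt).eval()

image = Image.open("aerial_view.png").convert("RGB")   # hypothetical example frame
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                    # (1, num_classes, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax class.
logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
label_map = logits.argmax(dim=1)[0]                    # (H, W) tensor of class ids
print(label_map.shape, label_map.unique())
```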
arXiv Detail & Related papers (2024-10-16T19:34:03Z)
- Multi-UAV Pursuit-Evasion with Online Planning in Unknown Environments by Deep Reinforcement Learning [16.761470423715338]
Multi-UAV pursuit-evasion poses a key challenge for UAV swarm intelligence.
We introduce an evader prediction-enhanced network to tackle partial observability in cooperative strategy learning.
We derive a feasible policy via a two-stage reward refinement and deploy the policy on real quadrotors in a zero-shot manner.
arXiv Detail & Related papers (2024-09-24T08:40:04Z)
- ReGentS: Real-World Safety-Critical Driving Scenario Generation Made Stable [88.08120417169971]
Machine learning based autonomous driving systems often face challenges with safety-critical scenarios that are rare in real-world data.
This work explores generating safety-critical driving scenarios by modifying complex real-world regular scenarios through trajectory optimization.
Our approach addresses unrealistic diverging trajectories and unavoidable collision scenarios that are not useful for training a robust planner.
arXiv Detail & Related papers (2024-09-12T08:26:33Z)
- UniSim: A Neural Closed-Loop Sensor Simulator [76.79818601389992]
We present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation.
UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene.
We incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions.
arXiv Detail & Related papers (2023-08-03T17:56:06Z)
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system achieves improvements in time cost, the proportion of search area surveyed, and success rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z)
- Robust Autonomous Landing of UAV in Non-Cooperative Environments based on Dynamic Time Camera-LiDAR Fusion [11.407952542799526]
We construct a UAV system equipped with low-cost LiDAR and binocular cameras to realize autonomous landing in non-cooperative environments.
Taking advantage of the non-repetitive scanning and high FOV coverage characteristics of LiDAR, we propose a dynamic time depth completion algorithm.
Based on the depth map, high-level terrain information such as slope, roughness, and the size of the safe area is derived.
arXiv Detail & Related papers (2020-11-27T14:47:02Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.