Risk Assessment for Autonomous Landing in Urban Environments using Semantic Segmentation
- URL: http://arxiv.org/abs/2410.12988v1
- Date: Wed, 16 Oct 2024 19:34:03 GMT
- Title: Risk Assessment for Autonomous Landing in Urban Environments using Semantic Segmentation
- Authors: Jesús Alejandro Loera-Ponce, Diego A. Mercado-Ravell, Israel Becerra-Durán, Luis Manuel Valentin-Coronado
- Abstract summary: We propose employing the SegFormer, a state-of-the-art visual transformer network, for semantic segmentation of urban environments.
The proposed strategy is validated through several case studies.
We believe this will help unleash the full potential of UAVs in civil applications within urban areas.
- Score: 0.0
- License:
- Abstract: In this paper, we address the vision-based autonomous landing problem in complex urban environments using deep neural networks for semantic segmentation and risk assessment. We propose employing SegFormer, a state-of-the-art visual transformer network, for the semantic segmentation of complex, unstructured urban environments. This approach yields valuable information that can be exploited in smart autonomous landing missions, particularly in emergency landing scenarios caused by system failures or human error. The assessment is performed in real time during flight: images from an RGB camera onboard the Unmanned Aerial Vehicle (UAV) are segmented with SegFormer into the most common classes found in urban environments. These classes are then mapped to a risk level that accounts for potential material damage, damage to the drone itself, and danger to people. The proposed strategy is validated through several case studies, demonstrating the great potential of semantic segmentation-based strategies for determining the safest landing areas for autonomous emergency landing, which we believe will help unleash the full potential of UAVs in civil applications within urban areas.
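To make the class-to-risk idea concrete, here is a minimal sketch that segments one aerial frame with a publicly available SegFormer checkpoint and converts the predicted classes into a risk map from which a low-risk patch is picked. The checkpoint name (nvidia/segformer-b0-finetuned-ade-512-512), the risk table, and the sliding-window selection are illustrative assumptions, not the authors' actual model, class set, or landing-zone strategy.
```python
# Minimal sketch (not the authors' implementation): segment one aerial RGB frame
# with a public SegFormer checkpoint, map each predicted class to an illustrative
# risk level, and pick a low-risk patch as a candidate landing zone.
import numpy as np
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

CKPT = "nvidia/segformer-b0-finetuned-ade-512-512"  # assumed checkpoint (ADE20K classes)
processor = SegformerImageProcessor.from_pretrained(CKPT)
model = SegformerForSemanticSegmentation.from_pretrained(CKPT).eval()

# Hypothetical risk table (0 = safe, 1 = maximum risk); the paper's mapping may differ.
RISK = {"grass": 0.1, "earth": 0.2, "sidewalk": 0.5, "road": 0.7,
        "car": 0.9, "water": 1.0, "person": 1.0}
DEFAULT_RISK = 0.8  # unknown classes are treated as risky by default


def risk_map(image: Image.Image) -> np.ndarray:
    """Per-pixel risk in [0, 1] for one RGB frame."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                        # (1, C, H/4, W/4)
    logits = torch.nn.functional.interpolate(
        logits, size=image.size[::-1], mode="bilinear", align_corners=False)
    labels = logits.argmax(dim=1)[0].cpu().numpy()             # per-pixel class ids
    lut = np.full(model.config.num_labels, DEFAULT_RISK)
    for idx, name in model.config.id2label.items():
        lut[int(idx)] = RISK.get(name, DEFAULT_RISK)
    return lut[labels]


def safest_patch(risk: np.ndarray, win: int = 64) -> tuple[int, int]:
    """Slide a win x win window; return the top-left corner with the lowest mean risk."""
    best, best_yx = np.inf, (0, 0)
    for y in range(0, risk.shape[0] - win + 1, win // 2):
        for x in range(0, risk.shape[1] - win + 1, win // 2):
            m = risk[y:y + win, x:x + win].mean()
            if m < best:
                best, best_yx = m, (y, x)
    return best_yx


if __name__ == "__main__":
    frame = Image.open("aerial_frame.jpg").convert("RGB")      # placeholder input
    r = risk_map(frame)
    print("lowest-risk 64x64 patch at (row, col):", safest_patch(r))
```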
Related papers
- StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting [85.67616000086232]
StreetSurfGS is the first method to employ Gaussian Splatting specifically tailored for scalable urban street scene surface reconstruction.
StreetSurfGS utilizes a planar-based octree representation and segmented training to reduce memory costs, accommodate unique camera characteristics, and ensure scalability.
To address sparse views and multi-scale challenges, we use a dual-step matching strategy that leverages adjacent and long-term information.
arXiv Detail & Related papers (2024-10-06T04:21:59Z) - Cross-View Geolocalization and Disaster Mapping with Street-View and VHR Satellite Imagery: A Case Study of Hurricane IAN [9.128051274958356]
We propose a novel disaster mapping framework, namely CVDisaster, to simultaneously address geolocalization and damage perception estimation.
CVDisaster consists of two cross-view models, where CVDisaster-Geoloc refers to a cross-view geolocalization model.
We show that CVDisaster can achieve highly competitive performance (over 80% for geolocalization and 75% for damage perception estimation) with even limited fine-tuning efforts.
arXiv Detail & Related papers (2024-08-13T09:37:26Z) - MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility [52.0930915607703]
Recent advances in Robotics and Embodied AI make public urban spaces no longer exclusive to humans.
Micromobility enabled by AI for short-distance travel in public urban spaces plays a crucial role in the future transportation system.
We present MetaUrban, a compositional simulation platform for AI-driven urban micromobility research.
arXiv Detail & Related papers (2024-07-11T17:56:49Z) - Enhancing Safety for Autonomous Agents in Partly Concealed Urban Traffic Environments Through Representation-Based Shielding [2.9685635948300004]
We propose a novel state representation for Reinforcement Learning (RL) agents centered around the information perceivable by an autonomous agent.
Our findings pave the way for more robust and reliable autonomous navigation strategies.
arXiv Detail & Related papers (2024-07-05T08:34:49Z) - Learning to Assess Danger from Movies for Cooperative Escape Planning in Hazardous Environments [4.042350304426974]
Such scenarios are difficult to replicate in the real world, even though doing so is necessary for training and testing purposes.
Current systems are not fully able to take advantage of the rich multi-modal data available in such hazardous environments.
We propose to harness the enormous amount of visual content available in the form of movies and TV shows, and develop a dataset that can represent hazardous environments encountered in the real world.
arXiv Detail & Related papers (2022-07-27T21:07:15Z) - Visual-based Safe Landing for UAVs in Populated Areas: Real-time Validation in Virtual Environments [0.0]
We propose a framework for real-time safe and thorough evaluation of vision-based autonomous landing in populated scenarios.
We propose to use the Unreal graphics engine coupled with the AirSim plugin for drone simulation.
We study two different criteria for selecting the "best" SLZ, and evaluate them during autonomous landing of a virtual drone in different scenarios.
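As a rough illustration of what a "criterion for selecting the best SLZ" can look like, the sketch below ranks candidate safe landing zones under two hypothetical criteria (largest obstacle clearance vs. shortest distance to the UAV). The criteria and the candidate data are assumptions for illustration only, not the criteria studied in that paper.
```python
# Hypothetical comparison of two SLZ-selection criteria (not the paper's actual criteria):
# rank candidate safe landing zones by obstacle clearance or by proximity to the UAV.
import numpy as np

# Each candidate SLZ: (x, y) center in metres and clearance radius in metres (made-up data).
candidates = np.array([[10.0, 4.0, 3.5],
                       [25.0, -8.0, 6.0],
                       [3.0, 15.0, 2.0]])
uav_xy = np.array([0.0, 0.0])


def best_by_clearance(slz: np.ndarray) -> int:
    """Criterion A: prefer the zone with the largest obstacle-free radius."""
    return int(np.argmax(slz[:, 2]))


def best_by_proximity(slz: np.ndarray, uav: np.ndarray) -> int:
    """Criterion B: prefer the zone closest to the UAV (shortest descent path)."""
    return int(np.argmin(np.linalg.norm(slz[:, :2] - uav, axis=1)))


print("largest clearance:", candidates[best_by_clearance(candidates)])
print("closest to UAV:   ", candidates[best_by_proximity(candidates, uav_xy)])
```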
arXiv Detail & Related papers (2022-03-25T17:22:24Z) - ADAPT: An Open-Source sUAS Payload for Real-Time Disaster Prediction and Response with AI [55.41644538483948]
Small unmanned aircraft systems (sUAS) are becoming prominent components of many humanitarian assistance and disaster response operations.
We have developed the free and open-source ADAPT multi-mission payload for deploying real-time AI and computer vision onboard a sUAS.
We demonstrate the example mission of real-time, in-flight ice segmentation to monitor river ice state and provide timely predictions of catastrophic flooding events.
arXiv Detail & Related papers (2022-01-25T14:51:19Z) - Explainable, automated urban interventions to improve pedestrian and vehicle safety [0.8620335948752805]
This paper combines public data sources, large-scale street imagery, and computer vision techniques to address pedestrian and vehicle safety.
The steps involved in this pipeline include the adaptation and training of a Residual Convolutional Neural Network to determine a hazard index for each given urban scene.
The outcome of this computational approach is a fine-grained map of hazard levels across a city and a set of identified interventions that might simultaneously improve pedestrian and vehicle safety.
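As a rough illustration of the hazard-index idea, a residual CNN backbone can be given a single regression head that scores a street-level scene. This is a sketch only: the backbone choice (resnet18), the sigmoid output range, and the untrained weights are assumptions, not that paper's adapted network or training setup.
```python
# Illustrative hazard-index regressor: a residual CNN backbone with a one-unit head.
# Backbone choice, output range, and training setup are assumptions for illustration.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms


class HazardIndexNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)                    # residual CNN backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)   # single hazard score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.backbone(x))                          # hazard index in (0, 1)


preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = HazardIndexNet().eval()                                          # would be trained on labeled scenes
img = preprocess(Image.open("street_scene.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    print("hazard index:", model(img).item())
```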
arXiv Detail & Related papers (2021-10-22T09:17:39Z) - A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system achieves improvements in time cost, the proportion of search area surveyed, and success rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z) - AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles [76.46575807165729]
We propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system.
By simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack.
arXiv Detail & Related papers (2021-01-16T23:23:12Z) - Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.