ViVa-SAFELAND: a New Freeware for Safe Validation of Vision-based Navigation in Aerial Vehicles
- URL: http://arxiv.org/abs/2503.14719v1
- Date: Tue, 18 Mar 2025 20:48:50 GMT
- Title: ViVa-SAFELAND: a New Freeware for Safe Validation of Vision-based Navigation in Aerial Vehicles
- Authors: Miguel S. Soriano-García, Diego A. Mercado-Ravell
- Abstract summary: ViVa-SAFELAND is an open-source software library aimed at testing and evaluating vision-based navigation strategies for aerial vehicles. It provides a new, safe, simple and fair comparison baseline to evaluate and compare different visual navigation solutions, and it is useful for training either human or autonomous pilots using deep learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: ViVa-SAFELAND is an open-source software library aimed at testing and evaluating vision-based navigation strategies for aerial vehicles, with special interest in autonomous landing, while complying with legal regulations and people's safety. It consists of a collection of high-definition aerial videos focusing on real unstructured urban scenarios and recording moving obstacles of interest, such as cars and people. An Emulated Aerial Vehicle (EAV) with a virtual moving camera is then implemented in order to "navigate" inside the video according to high-order commands. ViVa-SAFELAND provides a new, safe, simple and fair comparison baseline to evaluate and compare different visual navigation solutions under the same conditions, and to randomize variables along several trials. It also facilitates the development of autonomous landing and navigation strategies, as well as the generation of image datasets for different training tasks. Moreover, it is useful for training either human or autonomous pilots using deep learning. The effectiveness of the framework for validating vision algorithms is demonstrated through two case studies: detection of moving objects and risk-assessment segmentation. To our knowledge, this is the first safe validation framework of its kind to test and compare visual navigation solutions for aerial vehicles, which is a crucial aspect for urban deployment in complex real scenarios.
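To make the EAV idea concrete, below is a minimal sketch (not the actual ViVa-SAFELAND code) of a virtual moving camera that "navigates" inside a pre-recorded aerial video by panning and zooming a crop window in response to high-order velocity commands. The class name, video file name, and parameters are illustrative assumptions.

```python
# Minimal sketch of an "emulated aerial vehicle" as a virtual camera that pans and
# zooms over a pre-recorded aerial video. All names are illustrative and are NOT
# the ViVa-SAFELAND API.
import cv2
import numpy as np


class VirtualCamera:
    def __init__(self, video_path, view_size=(640, 480)):
        self.cap = cv2.VideoCapture(video_path)
        self.view_w, self.view_h = view_size
        self.x, self.y = 0.0, 0.0      # top-left corner of the virtual view (pixels)
        self.zoom = 1.0                # >1 zooms in (smaller crop, upscaled)

    def step(self, vx, vy, vzoom, frame):
        """Apply high-order velocity commands (pixels/frame) and return the cropped view."""
        h, w = frame.shape[:2]
        self.zoom = float(np.clip(self.zoom + vzoom, 1.0, 4.0))
        crop_w = int(self.view_w / self.zoom)
        crop_h = int(self.view_h / self.zoom)
        self.x = float(np.clip(self.x + vx, 0, max(0, w - crop_w)))
        self.y = float(np.clip(self.y + vy, 0, max(0, h - crop_h)))
        x0, y0 = int(self.x), int(self.y)
        crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
        return cv2.resize(crop, (self.view_w, self.view_h))


if __name__ == "__main__":
    cam = VirtualCamera("aerial_scene.mp4")   # hypothetical high-definition aerial video
    while True:
        ok, frame = cam.cap.read()
        if not ok:
            break
        # Drift right and zoom in slowly; a navigation policy would supply these commands.
        view = cam.step(vx=2.0, vy=0.0, vzoom=0.005, frame=frame)
        cv2.imshow("EAV view", view)
        if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
            break
    cam.cap.release()
    cv2.destroyAllWindows()
```

In this sketch, a vision-based navigation policy would replace the hard-coded commands, letting it be evaluated safely and repeatably against the same recorded scene.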
Related papers
- Aerial Vision-and-Language Navigation with Grid-based View Selection and Map Construction [102.70482302750897]
Aerial Vision-and-Language Navigation (Aerial VLN) aims to obtain an unmanned aerial vehicle agent that navigates aerial 3D environments following human instructions. Previous methods struggle to perform well due to longer navigation paths, more complicated 3D scenes, and the neglect of the interplay between vertical and horizontal actions. We propose a novel grid-based view selection framework that formulates aerial VLN action prediction as a grid-based view selection task.
arXiv Detail & Related papers (2025-03-14T05:20:43Z)
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios [66.05091704671503]
We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module and Robust Task-oriented Head Module.
arXiv Detail & Related papers (2024-02-04T08:41:20Z)
- Vision-Based Autonomous Navigation for Unmanned Surface Vessel in Extreme Marine Conditions [2.8983738640808645]
This paper presents an autonomous vision-based navigation framework for tracking target objects in extreme marine conditions.
The proposed framework has been thoroughly tested in simulation under extremely reduced visibility due to sandstorms and fog.
The results are compared with state-of-the-art de-hazing methods across the benchmarked MBZIRC simulation dataset.
arXiv Detail & Related papers (2023-08-08T14:25:13Z)
- Aeolus Ocean -- A simulation environment for the autonomous COLREG-compliant navigation of Unmanned Surface Vehicles using Deep Reinforcement Learning and Maritime Object Detection [0.0]
Navigational autonomy in unmanned surface vehicles (USVs) in the maritime sector can lead to safer waters as well as reduced operating costs.
We describe the novel development of a COLREG-compliant, DRL-based collision-avoidance navigation system with CV-based awareness in a realistic ocean simulation environment.
arXiv Detail & Related papers (2023-07-13T11:20:18Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Visual-based Safe Landing for UAVs in Populated Areas: Real-time Validation in Virtual Environments [0.0]
We propose a framework for real-time safe and thorough evaluation of vision-based autonomous landing in populated scenarios.
We propose to use the Unreal graphics engine coupled with the AirSim plugin for drone simulation.
We study two different criteria for selecting the "best" SLZ, and evaluate them during autonomous landing of a virtual drone in different scenarios.
arXiv Detail & Related papers (2022-03-25T17:22:24Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety-guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Safe Vessel Navigation Visually Aided by Autonomous Unmanned Aerial Vehicles in Congested Harbors and Waterways [9.270928705464193]
This work is the first attempt to detect and estimate distances to unknown objects from long-range visual data captured with conventional RGB cameras and auxiliary absolute positioning systems (e.g., GPS).
The simulation results illustrate the accuracy and efficacy of the proposed method for visually aided navigation of vessels assisted by UAVs.
arXiv Detail & Related papers (2021-08-09T08:15:17Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.