DDOS: The Drone Depth and Obstacle Segmentation Dataset
- URL: http://arxiv.org/abs/2312.12494v2
- Date: Sat, 6 Jul 2024 22:07:10 GMT
- Title: DDOS: The Drone Depth and Obstacle Segmentation Dataset
- Authors: Benedikt Kolbeinsson, Krystian Mikolajczyk
- Abstract summary: The Drone Depth and Obstacle Segmentation (DDOS) dataset was created to provide comprehensive training samples for semantic segmentation and depth estimation.
Specifically designed to enhance the identification of thin structures, DDOS allows drones to navigate a wide range of weather conditions.
- Score: 16.86600007830682
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advancement of autonomous drones, essential for sectors such as remote sensing and emergency services, is hindered by the absence of training datasets that fully capture the environmental challenges present in real-world scenarios, particularly operations in non-optimal weather conditions and the detection of thin structures like wires. We present the Drone Depth and Obstacle Segmentation (DDOS) dataset to fill this critical gap with a collection of synthetic aerial images, created to provide comprehensive training samples for semantic segmentation and depth estimation. Specifically designed to enhance the identification of thin structures, DDOS allows drones to navigate a wide range of weather conditions, significantly elevating drone training and operational safety. Additionally, this work introduces innovative drone-specific metrics aimed at refining the evaluation of algorithms in depth estimation, with a focus on thin structure detection. These contributions not only pave the way for substantial improvements in autonomous drone technology but also set a new benchmark for future research, opening avenues for further advancements in drone navigation and safety.
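The abstract mentions drone-specific metrics for depth estimation that emphasize thin structures such as wires. The paper's exact metric definitions are not reproduced on this page; the following is a minimal sketch of one plausible form, a weighted RMSE in which pixels covered by a thin-structure segmentation mask count more heavily. The weight value and the mask source are assumptions for illustration only.

```python
import numpy as np

def thin_structure_rmse(pred_depth, gt_depth, thin_mask, weight=5.0):
    """Weighted RMSE that up-weights thin-structure pixels (e.g. wires).

    pred_depth, gt_depth: (H, W) float arrays of depth in meters.
    thin_mask: (H, W) bool array marking thin-structure pixels.
    weight: extra multiplier for thin-structure pixels (assumed value).
    """
    err2 = (pred_depth - gt_depth) ** 2
    w = np.where(thin_mask, weight, 1.0)
    return float(np.sqrt(np.sum(w * err2) / np.sum(w)))
```

A metric of this shape penalizes a model that blurs away a wire far more than a plain RMSE would, which is the behavior the abstract's thin-structure focus suggests.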
Related papers
- Drone Acoustic Analysis for Predicting Psychoacoustic Annoyance via Artificial Neural Networks [0.0]
This study builds upon prior research by examining the efficacy of various Deep Learning models in predicting Psychoacoustic Annoyance.
The aim of this research is to improve our understanding of drone noise, aid in the development of noise reduction techniques, and encourage the acceptance of drone usage in public spaces.
arXiv Detail & Related papers (2024-10-29T16:38:34Z)
- Drone Stereo Vision for Radiata Pine Branch Detection and Distance Measurement: Integrating SGBM and Segmentation Models [4.730379319834545]
This research proposes the development of a drone-based pruning system equipped with specialized pruning tools and a stereo vision camera.
Deep learning algorithms, including YOLO and Mask R-CNN, are employed to ensure accurate branch detection.
The synergy between these techniques facilitates the precise identification of branch locations and enables efficient, targeted pruning.
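The entry above pairs SGBM stereo matching with segmentation models to measure branch distance. A minimal sketch of the underlying geometry is the standard pinhole disparity-to-depth relation; the focal length and baseline below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Convert a stereo (e.g. SGBM) disparity map in pixels to metric depth.

    Standard stereo relation: depth = focal * baseline / disparity.
    focal_px and baseline_m are illustrative values, not the paper's.
    Non-positive disparities (no match) are mapped to NaN.
    """
    d = np.asarray(disparity_px, dtype=float)
    safe = np.where(d > 0, d, np.nan)
    return focal_px * baseline_m / safe
```

Masking the disparity map with the detected branch pixels before applying this relation yields per-branch distances, which is the kind of targeted measurement the entry describes.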
arXiv Detail & Related papers (2024-09-26T04:27:44Z)
- Segmentation of Drone Collision Hazards in Airborne RADAR Point Clouds Using PointNet [0.7067443325368975]
A critical prerequisite for integrating UAVs into shared airspace is equipping them with enhanced situational awareness to ensure safe operations.
Our study leverages radar technology for novel end-to-end semantic segmentation of aerial point clouds to simultaneously identify multiple collision hazards.
To our knowledge, this is the first approach addressing simultaneous identification of multiple collision threats in an aerial setting, achieving a robust 94% accuracy.
arXiv Detail & Related papers (2023-11-06T16:04:58Z)
- VBSF-TLD: Validation-Based Approach for Soft Computing-Inspired Transfer Learning in Drone Detection [0.0]
This paper presents a transfer-based drone detection scheme, which forms an integral part of a computer vision-based module.
By harnessing the knowledge of pre-trained models from a related domain, transfer learning enables improved results even with limited training data.
Notably, the scheme's effectiveness is highlighted by its IOU-based validation results.
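The IoU-based validation mentioned above compares predicted and ground-truth bounding boxes. A minimal, self-contained intersection-over-union computation for axis-aligned boxes is sketched below; the corner (x1, y1, x2, y2) format is an assumption, since the paper's exact validation procedure is not described on this page.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Validation schemes of this kind typically count a detection as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5 (the threshold here is an assumption).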
arXiv Detail & Related papers (2023-06-11T22:30:23Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- ADAPT: An Open-Source sUAS Payload for Real-Time Disaster Prediction and Response with AI [55.41644538483948]
Small unmanned aircraft systems (sUAS) are becoming prominent components of many humanitarian assistance and disaster response operations.
We have developed the free and open-source ADAPT multi-mission payload for deploying real-time AI and computer vision onboard a sUAS.
We demonstrate the example mission of real-time, in-flight ice segmentation to monitor river ice state and provide timely predictions of catastrophic flooding events.
arXiv Detail & Related papers (2022-01-25T14:51:19Z)
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
The resulting Aerial Inspection RetinaNet (AIR) algorithm combines these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
- Scarce Data Driven Deep Learning of Drones via Generalized Data Distribution Space [12.377024173799631]
We show how understanding the general distribution of the drone data via a Generative Adversarial Network (GAN) can allow us to acquire missing data to achieve rapid and more accurate learning.
We demonstrate our results on a drone image dataset, which contains both real drone images as well as simulated images from computer-aided design.
arXiv Detail & Related papers (2021-08-18T17:07:32Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Dogfight: Detecting Drones from Drones Videos [58.158988162743825]
This paper attempts to address the problem of detecting drones from other flying drones.
The erratic movement of the source and target drones, small size, arbitrary shape, large intensity variations, and occlusion make this problem quite challenging.
To handle this, instead of using region-proposal based methods, we propose to use a two-stage segmentation-based approach.
arXiv Detail & Related papers (2021-03-31T17:43:31Z)
- Detection and Tracking Meet Drones Challenge [131.31749447313197]
This paper presents a review of object detection and tracking datasets and benchmarks, and discusses the challenges of collecting large-scale drone-based object detection and tracking datasets with manual annotations.
We describe our VisDrone dataset, which is captured over various urban/suburban areas of 14 different cities across China from North to South.
We provide a detailed analysis of the current state of large-scale object detection and tracking on drones, discuss remaining challenges, and propose future directions.
arXiv Detail & Related papers (2020-01-16T00:11:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.