DrIFT: Autonomous Drone Dataset with Integrated Real and Synthetic Data, Flexible Views, and Transformed Domains
- URL: http://arxiv.org/abs/2412.04789v1
- Date: Fri, 06 Dec 2024 05:47:55 GMT
- Title: DrIFT: Autonomous Drone Dataset with Integrated Real and Synthetic Data, Flexible Views, and Transformed Domains
- Authors: Fardad Dadboud, Hamid Azad, Varun Mehta, Miodrag Bolic, Iraj Mantegh
- Abstract summary: We present the DrIFT dataset, developed for visual drone detection under domain shifts.
DrIFT includes fourteen distinct domains, each characterized by shifts in point of view, synthetic-to-real data, season, and adverse weather.
We use the MCDO-map in our uncertainty-aware unsupervised domain adaptation method, demonstrating superior performance to SOTA unsupervised domain adaptation techniques.
- Abstract: Dependable visual drone detection is crucial for the secure integration of drones into the airspace. However, drone detection accuracy is significantly affected by domain shifts due to environmental changes, varied points of view, and background shifts. To address these challenges, we present the DrIFT dataset, specifically developed for visual drone detection under domain shifts. DrIFT includes fourteen distinct domains, each characterized by shifts in point of view, synthetic-to-real data, season, and adverse weather. DrIFT uniquely emphasizes background shift by providing background segmentation maps to enable background-wise metrics and evaluation. Our new uncertainty estimation metric, MCDO-map, features lower postprocessing complexity, surpassing traditional methods. We use the MCDO-map in our uncertainty-aware unsupervised domain adaptation method, demonstrating superior performance to SOTA unsupervised domain adaptation techniques. The dataset is available at: https://github.com/CARG-uOttawa/DrIFT.git.
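The abstract describes the MCDO-map only at a high level, so the following is a minimal sketch of the Monte-Carlo-Dropout idea it builds on, not the paper's exact formulation; the model, the number of forward passes, and the variance aggregation are assumptions:

```python
import torch

def mcdo_uncertainty_map(model, image, n_passes=10):
    """Sketch of a Monte-Carlo-Dropout uncertainty map: run several stochastic
    forward passes with dropout left active and take the variance across passes.
    Illustrative only; the paper's MCDO-map may be computed differently."""
    model.eval()
    # Re-enable dropout layers while the rest of the network stays in eval mode.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    with torch.no_grad():
        preds = torch.stack([model(image) for _ in range(n_passes)])
    return preds.var(dim=0)  # higher variance = higher predictive uncertainty
```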
Related papers
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- Drone Detection using Deep Neural Networks Trained on Pure Synthetic Data [0.4369058206183195]
We present a drone detection Faster-RCNN model trained on a purely synthetic dataset that transfers to real-world data.
Our results show that using synthetic data for drone detection has the potential to reduce data collection costs and improve labelling quality; a minimal sketch of this train-on-synthetic setup appears below.
arXiv Detail & Related papers (2024-11-13T23:09:53Z)
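As an illustration of that train-on-synthetic setup (not the authors' code; the torchvision detector, class count, and optimizer settings are assumptions), a Faster R-CNN can be fine-tuned on synthetic images while real imagery is reserved for evaluation:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical two-class setup: background + drone.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(synthetic_loader):
    """Fit the detector on synthetic images only; real imagery stays held out
    to evaluate the synthetic-to-real transfer."""
    model.train()
    for images, targets in synthetic_loader:  # targets: dicts with 'boxes', 'labels'
        loss_dict = model(images, targets)    # detection losses in training mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```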
- DroBoost: An Intelligent Score and Model Boosting Method for Drone Detection [1.2564343689544843]
Drone detection is a challenging object detection task where visibility conditions and quality of the images may be unfavorable.
Our work improves on the previous approach by combining several refinements.
The proposed technique won 1st Place in the Drone vs. Bird Challenge.
arXiv Detail & Related papers (2024-06-30T20:49:56Z)
- DDOS: The Drone Depth and Obstacle Segmentation Dataset [16.86600007830682]
The Drone Depth and Obstacle Segmentation (DDOS) dataset was created to provide comprehensive training samples for semantic segmentation and depth estimation.
Specifically designed to enhance the identification of thin structures, DDOS allows drones to navigate a wide range of weather conditions.
arXiv Detail & Related papers (2023-12-19T18:54:40Z)
- A Two-Dimensional Deep Network for RF-based Drone Detection and Identification Towards Secure Coverage Extension [7.717171534776764]
We use the Short-Time Fourier Transform (STFT) to extract two-dimensional features from the raw signals, which contain both time-domain and frequency-domain information.
Then, we employ a Convolutional Neural Network (CNN) built with a ResNet structure to achieve multi-class classification.
Our experimental results show that the proposed ResNet-STFT can achieve higher accuracy and faster convergence on the extended dataset; a minimal sketch of the STFT-plus-ResNet pipeline appears below.
arXiv Detail & Related papers (2023-08-26T15:43:39Z)
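A hedged sketch of that STFT-plus-CNN pipeline; the sampling rate, window length, class count, and use of ResNet-18 are assumptions rather than values from the paper:

```python
import numpy as np
import torch
import torchvision
from scipy.signal import stft

def rf_to_spectrogram(signal, fs=20e6, nperseg=256):
    """STFT of a raw RF capture -> log-magnitude time-frequency image."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    spec = np.log1p(np.abs(Z)).astype(np.float32)
    return torch.from_numpy(spec)[None, None]  # shape (1, 1, freq, time)

# Small ResNet adapted to single-channel spectrograms and an assumed 4 drone classes.
model = torchvision.models.resnet18(num_classes=4)
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

logits = model(rf_to_spectrogram(np.random.randn(100_000)))
predicted_class = logits.argmax(dim=1)
```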
- VBSF-TLD: Validation-Based Approach for Soft Computing-Inspired Transfer Learning in Drone Detection [0.0]
This paper presents a transfer learning-based drone detection scheme, which forms an integral part of a computer vision-based module.
By harnessing the knowledge of pre-trained models from a related domain, transfer learning enables improved results even with limited training data.
Notably, the scheme's effectiveness is highlighted by its IOU-based validation results; the IoU check behind such validation is sketched below.
arXiv Detail & Related papers (2023-06-11T22:30:23Z)
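The IoU criterion behind that validation step can be written directly; the box format and the 0.5 threshold are assumptions:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def keep_detection(detection, reference, threshold=0.5):
    """Accept a detection only when it overlaps the reference box sufficiently."""
    return iou(detection, reference) >= threshold
```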
- Deep Metric Learning for Unsupervised Remote Sensing Change Detection [60.89777029184023]
Remote Sensing Change Detection (RS-CD) aims to detect relevant changes from Multi-Temporal Remote Sensing Images (MT-RSIs).
The performance of existing RS-CD methods is attributed to training on large annotated datasets.
This paper proposes an unsupervised CD method based on deep metric learning that can deal with both of these issues; one way a metric-learning change map can be formed is sketched below.
arXiv Detail & Related papers (2023-03-16T17:52:45Z)
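One common way to turn a metric-learning backbone into an unsupervised change map, offered here as a sketch under that assumption rather than the paper's exact method, is to embed both acquisitions with a shared encoder and threshold the per-pixel feature distance:

```python
import torch
import torch.nn.functional as F

def change_map(encoder, img_t1, img_t2, threshold=0.5):
    """Embed both acquisitions with a shared encoder and flag pixels whose
    embedding distance exceeds a threshold as 'changed' (illustrative only)."""
    with torch.no_grad():
        f1 = F.normalize(encoder(img_t1), dim=1)  # (B, C, H, W) unit-norm features
        f2 = F.normalize(encoder(img_t2), dim=1)
    dist = torch.linalg.vector_norm(f1 - f2, dim=1)  # per-pixel distance map
    return dist > threshold
```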
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z)
- DAE : Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called the Discriminatory Auto-Encoder (DAE).
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each getting data of a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection; its shared-encoder, per-phase-decoder layout is sketched below.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
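A minimal sketch of that shared-encoder, per-phase-decoder layout; the layer sizes and the number of flight phases are assumptions:

```python
import torch
import torch.nn as nn

class PhaseAwareAutoEncoder(nn.Module):
    """Shared LSTM encoder with one dedicated decoder per flight phase (sketch)."""
    def __init__(self, n_features, hidden=64, n_phases=3):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoders = nn.ModuleList(
            nn.LSTM(hidden, n_features, batch_first=True) for _ in range(n_phases)
        )

    def forward(self, x, phase):
        latent, _ = self.encoder(x)              # x: (batch, time, n_features)
        recon, _ = self.decoders[phase](latent)  # decoder chosen by flight phase
        return recon

model = PhaseAwareAutoEncoder(n_features=8)
x = torch.randn(4, 50, 8)
# Reconstruction error per sequence serves as the anomaly score.
anomaly_score = (model(x, phase=0) - x).pow(2).mean(dim=(1, 2))
```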
- ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning; the statistics-invariant idea alone is loosely sketched below.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)
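The three ePointDA modules are only named above; as a loose illustration of the statistics-invariant idea alone (an interpretation, not the paper's implementation), instance normalization can strip per-sample feature statistics that differ between synthetic and real domains:

```python
import torch
import torch.nn as nn

class StatisticsInvariantBlock(nn.Module):
    """Instance normalization removes per-sample channel statistics, one simple
    way to make intermediate features less sensitive to the synthetic-to-real gap."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=True)

    def forward(self, features):
        return self.norm(features)  # same shape, domain-specific statistics removed

aligned = StatisticsInvariantBlock(channels=128)(torch.randn(2, 128, 64, 64))
```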
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.