A Dataset of Stationary, Fixed-wing Aircraft on a Collision Course for
Vision-Based Sense and Avoid
- URL: http://arxiv.org/abs/2112.02735v1
- Date: Mon, 6 Dec 2021 01:55:49 GMT
- Authors: Jasmin Martin, Jenna Riseley and Jason J. Ford
- Abstract summary: This paper presents a dataset for vision-based aircraft detection.
The dataset consists of 15 image sequences containing 55,521 images of a fixed-wing aircraft approaching a stationary, grounded camera.
To our knowledge, this is the first public dataset for studying medium-sized, fixed-wing aircraft on a collision course with the observer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emerging global market for unmanned aerial vehicle (UAV) services is
anticipated to reach USD 58.4 billion by 2026, spurring significant efforts to
safely integrate routine UAV operations into the national airspace in a manner
that does not compromise existing safety levels. The commercial use of UAVs
would be enhanced by an ability to sense and avoid potential mid-air collision
threats; however, research in this field is hindered by the lack of available
datasets, as they are expensive and technically complex to capture. In this
paper we present a dataset for vision-based aircraft detection. The dataset
consists of 15 image sequences containing 55,521 images of a fixed-wing
aircraft approaching a stationary, grounded camera. Ground truth labels and a
performance benchmark are also provided. To our knowledge, this is the first
public dataset for studying medium-sized, fixed-wing aircraft on a collision
course with the observer. The full dataset and ground truth labels are publicly
available at https://qcr.github.io/dataset/aircraft-collision-course/.
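The per-frame evaluation such a dataset supports (detector output scored against ground-truth boxes) can be sketched as follows. This is a minimal illustration, assuming a simplified label layout of one bounding box per frame; the `iou` and `detection_rate` helpers are hypothetical and not the dataset's actual benchmark protocol.

```python
# Minimal sketch of scoring a detector against per-frame ground-truth
# bounding boxes. The one-box-per-frame layout is an assumption for
# illustration, not the dataset's actual label schema.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_rate(gt, pred, thresh=0.5):
    """Fraction of ground-truth frames where a predicted box overlaps >= thresh."""
    hits = 0
    for frame, gt_box in gt.items():
        box = pred.get(frame)
        if box is not None and iou(gt_box, box) >= thresh:
            hits += 1
    return hits / len(gt)

# Toy sequence: ground truth vs. detector output for three frames.
gt = {0: (10, 10, 20, 20), 1: (12, 10, 22, 20), 2: (14, 10, 24, 20)}
pred = {0: (10, 10, 20, 20), 1: (30, 30, 40, 40)}
print(detection_rate(gt, pred))  # 1 of 3 frames matched
```

In practice a benchmark for a collision-course sequence would also report detection range (the distance at which the aircraft is first reliably detected), since early warning is the quantity that matters for sense-and-avoid.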
Related papers
- Commissioning An All-Sky Infrared Camera Array for Detection Of Airborne Objects
The Galileo Project is designing, building, and commissioning a multi-modal ground-based observatory to continuously monitor the sky.
One of the key instruments is an all-sky infrared camera array using eight uncooled long-wave infrared FLIR Boson 640 cameras.
We report acceptance rates (i.e. viewable airplanes that are recorded) and detection efficiencies (i.e. recorded airplanes that are successfully detected) for a variety of weather conditions.
A toy outlier search focused on large sinuosity of the 2-D reconstructed trajectories flags about 16% of trajectories as outliers.
arXiv Detail & Related papers (2024-11-12T17:31:51Z)
- MMAUD: A Comprehensive Multi-Modal Anti-UAV Dataset for Modern Miniature Drone Threats
MMAUD addresses a critical gap in contemporary threat detection methodologies by focusing on drone detection, UAV-type classification, and trajectory estimation.
It offers a unique overhead aerial detection perspective, vital for addressing real-world scenarios with higher fidelity than datasets captured from fixed vantage points using thermal and RGB sensors.
Our proposed modalities are cost-effective and highly adaptable, allowing users to experiment and implement new UAV threat detection tools.
arXiv Detail & Related papers (2024-02-06T04:57:07Z)
- Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception?
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z)
- Segmentation of Drone Collision Hazards in Airborne RADAR Point Clouds Using PointNet
A critical prerequisite for integrating UAVs into the airspace is equipping them with enhanced situational awareness to ensure safe operations.
Our study leverages radar technology for novel end-to-end semantic segmentation of aerial point clouds to simultaneously identify multiple collision hazards.
To our knowledge, this is the first approach addressing simultaneous identification of multiple collision threats in an aerial setting, achieving a robust 94% accuracy.
arXiv Detail & Related papers (2023-11-06T16:04:58Z)
- Evidential Detection and Tracking Collaboration: New Problem, Benchmark and Algorithm for Robust Anti-UAV System
Unmanned Aerial Vehicles (UAVs) have been widely used in many areas, including transportation, surveillance, and military.
Previous works have simplified such an anti-UAV task as a tracking problem, where prior information of UAVs is always provided.
In this paper, we first formulate a new and practical anti-UAV problem featuring UAV perception in complex scenes without prior UAV information.
arXiv Detail & Related papers (2023-06-27T19:30:23Z)
- AirTrack: Onboard Deep Learning Framework for Long-Range Aircraft Detection and Tracking
AirTrack is a real-time vision-only detection and tracking framework that respects the size, weight, and power constraints of sUAS systems.
We show that AirTrack outperforms state-of-the-art baselines on the Amazon Airborne Object Tracking (AOT) dataset.
Empirical evaluations show that our system has a probability of track of more than 95% up to a range of 700m.
arXiv Detail & Related papers (2022-09-26T16:58:00Z)
- VPAIR -- Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments
We present a new dataset named VPAIR.
The dataset was recorded on board a light aircraft flying at an altitude of more than 300 meters above ground.
The dataset covers a trajectory more than one hundred kilometers long over various types of challenging landscapes.
arXiv Detail & Related papers (2022-05-23T18:50:08Z)
- Attention-based Reinforcement Learning for Real-Time UAV Semantic Communication
We study the problem of air-to-ground ultra-reliable and low-latency communication (URLLC) for a moving ground user.
We propose a novel multi-agent deep reinforcement learning framework, coined graph attention exchange network (GAXNet).
GAXNet achieves 6.5x lower latency at a target error rate of 0.0000001, compared to a state-of-the-art baseline framework.
arXiv Detail & Related papers (2021-05-22T12:43:25Z)
- Object Detection in Aerial Images: A Large-Scale Benchmark and Challenges
We present a large-scale dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI.
The proposed DOTA dataset contains 1,793,658 object instances of 18 categories of oriented-bounding-box annotations collected from 11,268 aerial images.
We build baselines covering 10 state-of-the-art algorithms with over 70 configurations, where the speed and accuracy performances of each model have been evaluated.
arXiv Detail & Related papers (2021-02-24T11:20:55Z)
- Perceiving Traffic from Aerial Images
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- AU-AIR: A Multi-modal Unmanned Aerial Vehicle Dataset for Low Altitude Traffic Surveillance
Unmanned aerial vehicles (UAVs) with mounted cameras have the advantage of capturing aerial (bird's-eye-view) images.
Several aerial datasets have been introduced, including visual data with object annotations.
We propose a multi-purpose aerial dataset (AU-AIR) that has multi-modal sensor data collected in real-world outdoor environments.
arXiv Detail & Related papers (2020-01-31T09:45:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.