Drone LAMS: A Drone-based Face Detection Dataset with Large Angles and
Many Scenarios
- URL: http://arxiv.org/abs/2011.07689v2
- Date: Sun, 10 Oct 2021 08:12:14 GMT
- Title: Drone LAMS: A Drone-based Face Detection Dataset with Large Angles and
Many Scenarios
- Authors: Yi Luo (1), Siyi Chen (2), X.-G. Ma (2) ((1) School of Energy and
Environment, Southeast University, Nanjing, China (2) International Institute
for Urban Systems Engineering, Southeast University, Nanjing, China)
- Abstract summary: The proposed dataset captured images from 261 videos with over 43k annotations and 4.0k images with pitch or yaw angles in the range of -90° to 90°.
Drone LAMS showed significant improvement over currently available drone-based face detection datasets in terms of detection performance.
- Score: 2.4378845585726903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents a new drone-based face detection dataset, Drone LAMS,
to address the low performance of drone-based face detection in scenarios such as
large angles, a predominant working condition when a drone flies high. The
proposed dataset captured images from 261 videos with over 43k annotations and
4.0k images with pitch or yaw angles in the range of -90° to 90°. Drone LAMS
showed significant improvement over currently available drone-based face
detection datasets in terms of detection performance, especially at large pitch
and yaw angles. A detailed analysis of how key factors, such as duplication rate
and annotation method, impact dataset performance is also provided to facilitate
further use of drones for face detection.
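The abstract's emphasis on pitch/yaw coverage can be illustrated with a minimal sketch of how annotations might be bucketed by head-pose angle to study detector performance at large angles. This is not the dataset's actual schema; the field names (`pitch`, `yaw`) and the 60° threshold are assumptions for illustration.

```python
# Hypothetical sketch: bucket face annotations by the larger of |pitch| and
# |yaw| (in degrees) to separate near-frontal views from large-angle views.
# Field names and threshold are assumptions, not the Drone LAMS format.

def bucket_by_angle(annotations, threshold=60.0):
    """Split annotations into small-angle and large-angle groups."""
    small, large = [], []
    for ann in annotations:
        extreme = max(abs(ann["pitch"]), abs(ann["yaw"]))
        (large if extreme >= threshold else small).append(ann)
    return small, large

anns = [
    {"pitch": -85.0, "yaw": 10.0},   # near top-down view
    {"pitch": 5.0,   "yaw": -12.0},  # near frontal view
    {"pitch": 30.0,  "yaw": 75.0},   # strong side view
]
small, large = bucket_by_angle(anns)
print(len(small), len(large))  # → 1 2
```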
Related papers
- C2FDrone: Coarse-to-Fine Drone-to-Drone Detection using Vision Transformer Networks [23.133250476580038]
A vision-based drone-to-drone detection system is crucial for various applications like collision avoidance, countering hostile drones, and search-and-rescue operations.
Detecting drones, however, presents unique challenges, including small object sizes, distortion, and real-time processing requirements.
We propose a novel coarse-to-fine detection strategy based on vision transformers.
arXiv Detail & Related papers (2024-04-30T05:51:21Z) - TransVisDrone: Spatio-Temporal Transformer for Vision-based
Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z) - MOBDrone: a Drone Video Dataset for Man OverBoard Rescue [4.393945242867356]
We release the MOBDrone benchmark, a collection of more than 125K drone-view images in a marine environment under several conditions.
We manually annotated more than 180K objects, of which about 113K man overboard, precisely localizing them with bounding boxes.
We conduct a thorough performance analysis of several state-of-the-art object detectors on the MOBDrone data, serving as baselines for further research.
arXiv Detail & Related papers (2022-03-15T15:02:23Z) - Track Boosting and Synthetic Data Aided Drone Detection [0.0]
Our method approaches the drone detection problem by fine-tuning a YOLOv5 model with real and synthetically generated data.
Our results indicate that augmenting the real data with an optimal subset of synthetic data can increase the performance.
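The real-plus-synthetic augmentation described above can be sketched as a simple data-mixing step performed before fine-tuning a detector such as YOLOv5. This is an illustrative sketch, not the paper's code; the 0.5 synthetic-to-real ratio is a placeholder, not the paper's reported optimum.

```python
# Illustrative sketch: augment a real training set with a random subset of
# synthetic samples before fine-tuning. The ratio is an assumption.
import random

def mix_datasets(real, synthetic, synth_ratio=0.5, seed=0):
    """Return the real data plus a random subset of synthetic data whose
    size is synth_ratio * len(real)."""
    rng = random.Random(seed)
    k = min(len(synthetic), int(synth_ratio * len(real)))
    return real + rng.sample(synthetic, k)

real = [f"real_{i}.jpg" for i in range(100)]
synth = [f"synth_{i}.jpg" for i in range(500)]
train_set = mix_datasets(real, synth)
print(len(train_set))  # → 150
```

Searching over `synth_ratio` (e.g. on a validation split) is one way to find the "optimal subset" the abstract refers to.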
arXiv Detail & Related papers (2021-11-24T10:16:27Z) - Dogfight: Detecting Drones from Drones Videos [58.158988162743825]
This paper addresses the problem of detecting drones from other flying drones.
The erratic movement of the source and target drones, small size, arbitrary shape, large intensity variations, and occlusion make this problem quite challenging.
To handle this, instead of using region-proposal based methods, we propose to use a two-stage segmentation-based approach.
arXiv Detail & Related papers (2021-03-31T17:43:31Z) - Real-Time Drone Detection and Tracking With Visible, Thermal and
Acoustic Sensors [66.4525391417921]
A thermal infrared camera is shown to be a feasible solution to the drone detection task.
The detector performance as a function of the sensor-to-target distance is also investigated.
A novel video dataset containing 650 annotated infrared and visible videos of drones, birds, airplanes and helicopters is also presented.
arXiv Detail & Related papers (2020-07-14T23:06:42Z) - Multi-Drone based Single Object Tracking with Agent Sharing Network [74.8198920355117]
The Multi-Drone Single Object Tracking dataset consists of 92 groups of video clips with 113,918 high-resolution frames taken by two drones and 63 groups of video clips with 145,875 high-resolution frames taken by three drones.
Agent sharing network (ASNet) is proposed by self-supervised template sharing and view-aware fusion of the target from multiple drones.
arXiv Detail & Related papers (2020-03-16T03:27:04Z) - Dense Crowds Detection and Surveillance with Drones using Density Maps [0.0]
In this paper, we test two different state-of-the-art approaches: density map generation with VGG19 trained with the Bayes loss function, and detect-then-count with Faster R-CNN with ResNet50-FPN as backbone.
We show empirically that both proposed methodologies perform especially well for detecting and counting people in sparse crowds when the drone is near the ground.
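The density-map approach mentioned above rests on a simple idea: the network predicts a map in which each person contributes a unit-mass blob, so the crowd count is the integral (sum) of the map. A minimal sketch of that counting step, with a toy hand-built map in place of a real network output:

```python
# Minimal sketch of density-map counting: each person contributes a
# unit-mass blob, so the estimated count is the sum of the map.
import numpy as np

def count_from_density_map(density_map):
    """Estimated crowd count is the integral (sum) of the density map."""
    return float(np.asarray(density_map).sum())

# Toy map: two unit-mass blobs spread over a 4x4 grid.
dm = np.zeros((4, 4))
dm[0, 0] = 0.6; dm[0, 1] = 0.4   # person 1
dm[2, 2] = 0.7; dm[2, 3] = 0.3   # person 2
print(round(count_from_density_map(dm)))  # → 2
```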
arXiv Detail & Related papers (2020-03-03T02:05:47Z) - University-1652: A Multi-view Multi-source Benchmark for Drone-based
Geo-localization [87.74121935246937]
We introduce a new multi-view benchmark for drone-based geo-localization, named University-1652.
University-1652 contains data from three platforms, i.e., synthetic drones, satellites and ground cameras of 1,652 university buildings around the world.
Experiments show that University-1652 helps the model to learn the viewpoint-invariant features and also has good generalization ability in the real-world scenario.
arXiv Detail & Related papers (2020-02-27T15:24:15Z) - Detection and Tracking Meet Drones Challenge [131.31749447313197]
This paper presents a review of object detection and tracking datasets and benchmarks, and discusses the challenges of collecting large-scale drone-based object detection and tracking datasets with manual annotations.
We describe our VisDrone dataset, which is captured over various urban/suburban areas of 14 different cities across China from North to South.
We provide a detailed analysis of the current state of the field of large-scale object detection and tracking on drones, and conclude the challenge as well as propose future directions.
arXiv Detail & Related papers (2020-01-16T00:11:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.