EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing
And Following
- URL: http://arxiv.org/abs/2106.15045v1
- Date: Tue, 29 Jun 2021 01:16:01 GMT
- Title: EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing
And Following
- Authors: Nitin J. Sanket, Chahat Deep Singh, Chethan M. Parameshwara, Cornelia
Fermüller, Guido C.H.E. de Croon, Yiannis Aloimonos
- Abstract summary: Drone propellers are the fastest moving parts in an image and cannot be directly "seen" by a classical camera without severe motion blur.
We train a deep neural network called EVPropNet to detect propellers from the data of an event camera.
We present two applications of our network: (a) tracking and following an unmarked drone and (b) landing on a near-hover drone.
- Score: 11.79762223888294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid rise in the accessibility of unmanned aerial vehicles, or
drones, poses a threat to general security and confidentiality. Most
commercially available or custom-built drones are multi-rotors comprising
multiple propellers. Since these propellers rotate at high speed, they are
generally the fastest moving parts in an image and cannot be directly "seen"
by a classical camera without severe motion blur. We utilize a class of
sensors particularly suited to such scenarios called event cameras, which
offer high temporal resolution, low latency, and high dynamic range.
In this paper, we model the geometry of a propeller and use it to generate
simulated events which are used to train a deep neural network called EVPropNet
to detect propellers from the data of an event camera. EVPropNet directly
transfers to the real world without any fine-tuning or retraining. We present
two applications of our network: (a) tracking and following an unmarked drone
and (b) landing on a near-hover drone. We successfully evaluate and demonstrate
the proposed approach in many real-world experiments with different propeller
shapes and sizes. Our network can detect propellers at a rate of 85.1% even
when 60% of the propeller is occluded, and can run at up to 35 Hz on a 2 W
power budget. To our knowledge, this is the first deep learning-based solution
for detecting propellers (and thereby drones). Finally, our applications also
show impressive success rates of 92% and 90% for the tracking and landing
tasks, respectively.
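The simulated training data is the crux of the approach: a geometric propeller
model is rendered while rotating, and events are emitted wherever the
silhouette changes, mimicking an event camera's per-pixel brightness-change
triggering. The Python sketch below is a minimal illustration of that idea
under stated assumptions, not the paper's actual generation pipeline: the
function names, the sector-shaped blade model, and all sampling parameters
(image size, RPM, micro-frame rate) are illustrative choices.

import numpy as np

def propeller_mask(size, n_blades, angle, blade_len=0.45, blade_width=0.35):
    """Render a binary silhouette of an n-bladed propeller rotated by `angle`.

    Each blade is crudely approximated as an angular sector of width
    `blade_width` radians reaching out to `blade_len` * image size from
    the hub (the paper models the blade geometry in far more detail).
    """
    ys, xs = np.mgrid[0:size, 0:size]
    cx = cy = size / 2.0
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    mask = np.zeros((size, size), dtype=bool)
    for b in range(n_blades):
        blade_angle = angle + 2.0 * np.pi * b / n_blades
        # Angular distance from this blade's axis, wrapped to [-pi, pi).
        d = (theta - blade_angle + np.pi) % (2.0 * np.pi) - np.pi
        mask |= (np.abs(d) < blade_width / 2.0) & (r < blade_len * size)
    return mask

def simulate_events(size=128, n_blades=2, rpm=6000.0, fps=10000.0, steps=100):
    """Yield (t, x, y, polarity) events for a propeller spinning at `rpm`.

    Events are emitted wherever the silhouette changed since the previous
    micro-frame, imitating an event camera's asynchronous, per-pixel
    triggering on brightness change.
    """
    omega = rpm / 60.0 * 2.0 * np.pi        # angular velocity in rad/s
    dt = 1.0 / fps                          # micro-frame period in seconds
    prev = propeller_mask(size, n_blades, 0.0)
    for k in range(1, steps + 1):
        t = k * dt
        cur = propeller_mask(size, n_blades, omega * t)
        on = cur & ~prev                    # blade arrived: positive events
        off = prev & ~cur                   # blade left: negative events
        for y, x in zip(*np.nonzero(on)):
            yield (t, int(x), int(y), +1)
        for y, x in zip(*np.nonzero(off)):
            yield (t, int(x), int(y), -1)
        prev = cur

if __name__ == "__main__":
    events = list(simulate_events())
    print(f"generated {len(events)} simulated propeller events")

In a pipeline of this kind, events accumulated over a short time window would
be binned into frames to form the detection network's input; the polarity
split above simply mirrors the ON/OFF events a real sensor produces.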
Related papers
- Deep Ensemble for Rotorcraft Attitude Prediction [0.0]
The rotorcraft community has experienced a higher fatal accident rate than other aviation segments.
Recent advancements in artificial intelligence (AI) provide an opportunity to help design systems that can address rotorcraft safety challenges.
arXiv Detail & Related papers (2023-06-29T17:06:42Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)
- Deep Learning on Home Drone: Searching for the Optimal Architecture [54.535788447839884]
We suggest the first system that runs real-time semantic segmentation via deep learning on a weak micro-computer such as the Raspberry Pi Zero v2 attached to a toy-drone.
In particular, since the Raspberry Pi weighs less than 16 grams and is half the size of a credit card, we could easily attach it to the common commercial DJI Tello toy-drone.
The result is an autonomous drone that can detect and classify objects in real-time from a video stream of an on-board monocular RGB camera.
arXiv Detail & Related papers (2022-09-21T11:41:45Z)
- Dogfight: Detecting Drones from Drones Videos [58.158988162743825]
This paper addresses the problem of detecting drones from videos captured by other flying drones.
The erratic movement of the source and target drones, small size, arbitrary shape, large intensity variations, and occlusion make this problem quite challenging.
To handle this, instead of using region-proposal based methods, we propose to use a two-stage segmentation-based approach.
arXiv Detail & Related papers (2021-03-31T17:43:31Z)
- Drone LAMS: A Drone-based Face Detection Dataset with Large Angles and Many Scenarios [2.4378845585726903]
The proposed dataset captured images from 261 videos with over 43k annotations and 4.0k images with pitch or yaw angles in the range of -90° to 90°.
Drone LAMS showed significant improvement over currently available drone-based face detection datasets in terms of detection performance.
arXiv Detail & Related papers (2020-11-16T02:26:05Z)
- Relative Drone-Ground Vehicle Localization using LiDAR and Fisheye Cameras through Direct and Indirect Observations [0.0]
We present a LiDAR-camera-based relative pose estimation method between a drone and a ground vehicle.
We propose a dynamically adaptive kernel-based method for drone detection and tracking using the LiDAR.
In our experiments, we were able to achieve very fast initial detection and real-time tracking of the drone.
arXiv Detail & Related papers (2020-11-13T16:41:55Z)
- Multi-Drone based Single Object Tracking with Agent Sharing Network [74.8198920355117]
The Multi-Drone Single Object Tracking dataset consists of 92 groups of video clips with 113,918 high-resolution frames taken by two drones and 63 groups of video clips with 145,875 high-resolution frames taken by three drones.
An agent sharing network (ASNet) is proposed, using self-supervised template sharing and view-aware fusion of the target across multiple drones.
arXiv Detail & Related papers (2020-03-16T03:27:04Z)
- University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization [87.74121935246937]
We introduce a new multi-view benchmark for drone-based geo-localization, named University-1652.
University-1652 contains data from three platforms, i.e., synthetic drones, satellites and ground cameras of 1,652 university buildings around the world.
Experiments show that University-1652 helps the model to learn the viewpoint-invariant features and also has good generalization ability in the real-world scenario.
arXiv Detail & Related papers (2020-02-27T15:24:15Z)
- Detection and Tracking Meet Drones Challenge [131.31749447313197]
This paper presents a review of object detection and tracking datasets and benchmarks, and discusses the challenges of collecting large-scale drone-based object detection and tracking datasets with manual annotations.
We describe our VisDrone dataset, which is captured over various urban/suburban areas of 14 different cities across China from North to South.
We provide a detailed analysis of the current state of the field of large-scale object detection and tracking on drones, summarize the challenge, and propose future directions.
arXiv Detail & Related papers (2020-01-16T00:11:56Z)