Multi-Stage Fusion Architecture for Small-Drone Localization and Identification Using Passive RF and EO Imagery: A Case Study
- URL: http://arxiv.org/abs/2406.16875v1
- Date: Sat, 30 Mar 2024 22:53:28 GMT
- Title: Multi-Stage Fusion Architecture for Small-Drone Localization and Identification Using Passive RF and EO Imagery: A Case Study
- Authors: Thakshila Wimalajeewa Wewelwala, Thomas W. Tedesso, Tony Davis,
- Abstract summary: This work develops a multi-stage fusion architecture using passive radio frequency (RF) and electro-optic (EO) imagery data.
Supervised deep-learning-based techniques and unsupervised foreground/background separation techniques are explored to cope with challenging environments.
The proposed fusion architecture is tested, and tracking performance is quantified over range.
- Score: 0.1872664641238533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable detection, localization and identification of small drones is essential to promote safe, secure and privacy-respecting operation of Unmanned Aerial Systems (UAS), or simply, drones. This is an increasingly challenging problem with single-modality sensing alone, especially for detecting and identifying small drones. In this work, a multi-stage fusion architecture using passive radio frequency (RF) and electro-optic (EO) imagery data is developed to leverage the synergies of the two modalities and improve the overall tracking and classification capabilities. For detection with EO imagery, supervised deep-learning-based techniques as well as unsupervised foreground/background separation techniques are explored to cope with challenging environments. Using real data collected for Group 1 and 2 drones, the capability of each algorithm is quantified. To compensate for any performance gaps in detection with EO imagery alone, and to provide a unique device identifier for the drones, passive RF is integrated with EO imagery whenever available. In particular, drone detections in the image plane are combined with passive RF location estimates via detection-to-detection association after a 3D-to-2D transformation. Final tracking is performed on the composite detections in the 2D image plane, and each track centroid is given a unique identification obtained via RF fingerprinting. The proposed fusion architecture is tested, and tracking performance is quantified over range, to illustrate the effectiveness of the proposed approaches using simultaneously collected passive RF and EO data from the ESCAPE-21 (Experiments, Scenarios, Concept of Operations, and Prototype Engineering) data collect at the Air Force Research Laboratory (AFRL).
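As a rough illustration of the detection-to-detection association step described above, the following is a minimal sketch assuming a pinhole camera model and nearest-neighbour gating in pixels; the function names, gate size, and example numbers are hypothetical, not the authors' implementation. A passive-RF 3D location estimate expressed in the camera frame is projected into the image plane and matched to the closest EO detection centroid.

```python
import numpy as np

def project_to_image(point_cam, K):
    """Project a 3D point in camera coordinates (x, y, z) to 2D pixel
    coordinates using a pinhole model with intrinsic matrix K."""
    x, y, z = point_cam
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

def associate_rf_with_eo(rf_points_cam, eo_centroids, K, gate_px=50.0):
    """Greedy detection-to-detection association: each projected RF estimate
    is matched to the closest EO detection centroid within a pixel gate."""
    matches = []
    for i, p in enumerate(rf_points_cam):
        uv = project_to_image(p, K)
        dists = np.linalg.norm(eo_centroids - uv, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < gate_px:
            matches.append((i, j, float(dists[j])))
    return matches

# Toy usage with made-up numbers: one RF estimate, two EO centroids.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
rf_points = np.array([[2.0, -0.5, 50.0]])                    # metres, camera frame
eo_centroids = np.array([[678.0, 352.0], [120.0, 400.0]])    # pixels
print(associate_rf_with_eo(rf_points, eo_centroids, K))      # [(0, 0, ~2.8)]
```

In the pipeline described in the abstract, the composite detections produced by such an association step feed the 2D tracker, with RF fingerprinting supplying the per-track identifier.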
Related papers
- DroBoost: An Intelligent Score and Model Boosting Method for Drone Detection [1.2564343689544843]
Drone detection is a challenging object detection task where visibility conditions and quality of the images may be unfavorable.
Our work improves on a previous approach by combining several refinements.
The proposed technique won 1st Place in the Drone vs. Bird Challenge.
arXiv Detail & Related papers (2024-06-30T20:49:56Z) - Joint object detection and re-identification for 3D obstacle multi-camera systems [47.87501281561605]
This research paper introduces a novel modification to an object detection network that uses camera and lidar information.
It incorporates an additional branch designed for the task of re-identifying objects across adjacent cameras within the same vehicle.
The results underscore the superiority of this method over traditional Non-Maximum Suppression (NMS) techniques.
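A minimal sketch of the cross-camera re-identification idea follows: detections from adjacent cameras are matched by appearance-embedding similarity. The greedy matcher, threshold, and random embeddings below are illustrative assumptions, not the paper's network.

```python
import numpy as np

def cosine_similarity_matrix(emb_a, emb_b):
    """Pairwise cosine similarity between detection embeddings from two
    adjacent cameras (rows: detections, cols: feature dimensions)."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return a @ b.T

def match_across_cameras(emb_a, emb_b, min_sim=0.7):
    """Greedy one-to-one matching between two cameras.
    Returns (index_in_a, index_in_b, similarity) triples."""
    sim = cosine_similarity_matrix(emb_a, emb_b)
    matches, used = [], set()
    for i in np.argsort(-sim.max(axis=1)):           # most confident rows first
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= min_sim and j not in used:
            matches.append((int(i), j, float(sim[i, j])))
            used.add(j)
    return matches

# Toy usage: 3 detections in camera A, 2 in camera B, 8-D embeddings.
rng = np.random.default_rng(0)
print(match_across_cameras(rng.normal(size=(3, 8)),
                           rng.normal(size=(2, 8)), min_sim=0.0))
```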
arXiv Detail & Related papers (2023-10-09T15:16:35Z) - A Two-Dimensional Deep Network for RF-based Drone Detection and Identification Towards Secure Coverage Extension [7.717171534776764]
We use Short-Time Fourier Transform to extract two-dimensional features from the raw signals, which contain both time-domain and frequency-domain information.
Then, we employ a Convolutional Neural Network (CNN) built with ResNet structure to achieve multi-class classifications.
Our experimental results show that the proposed ResNet-STFT can achieve higher accuracy and faster convergence on the extended dataset.
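A minimal sketch of this STFT-plus-CNN pipeline follows, assuming complex I/Q samples as input and using SciPy's STFT with a toy residual block standing in for the paper's ResNet backbone; the layer sizes and four-class head are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def rf_spectrogram(iq, fs, nperseg=256):
    """Short-Time Fourier Transform of a complex I/Q burst -> log-magnitude
    time-frequency image (1 x F x T), capturing both domains at once."""
    _, _, Z = stft(iq, fs=fs, nperseg=nperseg, return_onesided=False)
    img = np.log1p(np.abs(Z)).astype(np.float32)
    return torch.from_numpy(img).unsqueeze(0)        # add channel dimension

class TinyResBlock(nn.Module):
    """Toy residual block standing in for the ResNet structure."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class SpectrogramClassifier(nn.Module):
    """Multi-class classifier over the 2D time-frequency features."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.stem = nn.Conv2d(1, 16, 3, padding=1)
        self.block = TinyResBlock(16)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(16, n_classes))
    def forward(self, x):
        return self.head(self.block(self.stem(x)))

# Toy usage: a synthetic complex-noise burst at 20 MS/s.
iq = np.random.randn(50_000) + 1j * np.random.randn(50_000)
x = rf_spectrogram(iq, fs=20e6).unsqueeze(0)         # (1, 1, F, T)
print(SpectrogramClassifier()(x).shape)              # torch.Size([1, 4])
```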
arXiv Detail & Related papers (2023-08-26T15:43:39Z) - Multi-Modal Domain Fusion for Multi-modal Aerial View Object Classification [4.438928487047433]
A novel Multi-Modal Domain Fusion (MDF) network is proposed to learn domain-invariant features from multi-modal data.
The network achieves top-10 performance in Track-1 with an accuracy of 25.3% and top-5 performance in Track-2 with an accuracy of 34.26%.
arXiv Detail & Related papers (2022-12-14T05:14:02Z) - Uncertainty Aware Multitask Pyramid Vision Transformer For UAV-Based Object Re-Identification [38.19907319079833]
We propose a multitask learning approach that employs a new convolution-free multiscale architecture, the Pyramid Vision Transformer (PVT), as the backbone for UAV-based object ReID.
By uncertainty modeling of intraclass variations, our proposed model can be jointly optimized using both uncertainty-aware object ID and camera ID information.
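The joint optimization of the two ID tasks can be sketched with a generic homoscedastic-uncertainty weighting; the learned log-variances and placeholder linear heads below are assumptions for illustration, not the paper's PVT-based model.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine object-ID and camera-ID losses with learned task uncertainties
    (log-variances), so the balance between tasks is optimized jointly."""
    def __init__(self):
        super().__init__()
        self.log_var_obj = nn.Parameter(torch.zeros(()))
        self.log_var_cam = nn.Parameter(torch.zeros(()))

    def forward(self, loss_obj_id, loss_cam_id):
        # exp(-log_var) scales each task loss; the log_var terms keep the
        # learned uncertainties from growing without bound.
        return (torch.exp(-self.log_var_obj) * loss_obj_id + self.log_var_obj
                + torch.exp(-self.log_var_cam) * loss_cam_id + self.log_var_cam)

# Toy usage: two placeholder classification heads over shared 128-D features.
criterion = nn.CrossEntropyLoss()
combine = UncertaintyWeightedLoss()
obj_head, cam_head = nn.Linear(128, 100), nn.Linear(128, 6)
feats = torch.randn(8, 128)
obj_labels = torch.randint(0, 100, (8,))
cam_labels = torch.randint(0, 6, (8,))
loss = combine(criterion(obj_head(feats), obj_labels),
               criterion(cam_head(feats), cam_labels))
loss.backward()   # gradients flow into both heads and both log-variances
```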
arXiv Detail & Related papers (2022-09-19T00:27:07Z) - Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space, either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, which is then unrolled into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z) - Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z) - Dogfight: Detecting Drones from Drones Videos [58.158988162743825]
This paper addresses the problem of detecting drones from videos captured by other flying drones.
The erratic movement of the source and target drones, small size, arbitrary shape, large intensity variations, and occlusion make this problem quite challenging.
To handle this, instead of using region-proposal based methods, we propose to use a two-stage segmentation-based approach.
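As a crude stand-in for a segmentation-first pipeline, the sketch below uses simple frame differencing to extract moving candidates followed by per-crop classification; the actual method relies on much richer spatio-temporal cues, and the use of OpenCV, the function names, and the thresholds are assumptions.

```python
import numpy as np
import cv2

def candidate_regions(prev_gray, curr_gray, min_area=9, thresh=25):
    """Stage 1: segment moving-object candidates by frame differencing,
    a crude stand-in for foreground/background separation."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                              # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes

def classify_crops(frame, boxes, classifier):
    """Stage 2: score each candidate crop with any drone/non-drone
    classifier callable that returns a probability."""
    return [(b, classifier(frame[b[1]:b[1] + b[3], b[0]:b[0] + b[2]]))
            for b in boxes]

# Toy usage on synthetic 8-bit frames containing a small blob that moved.
prev = np.zeros((120, 160), np.uint8); prev[40:46, 60:66] = 200
curr = np.zeros((120, 160), np.uint8); curr[42:48, 64:70] = 200
print(candidate_regions(prev, curr))
```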
arXiv Detail & Related papers (2021-03-31T17:43:31Z) - MRDet: A Multi-Head Network for Accurate Oriented Object Detection in
Aerial Images [51.227489316673484]
We propose an arbitrary-oriented region proposal network (AO-RPN) to generate oriented proposals transformed from horizontal anchors.
To obtain accurate bounding boxes, we decouple the detection task into multiple subtasks and propose a multi-head network.
Each head is specially designed to learn the features optimal for the corresponding task, which allows our network to detect objects accurately.
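A minimal sketch of decoupling detection into task-specific heads over a shared feature map follows (separate classification, box-offset, and orientation branches); the channel counts and class count are illustrative assumptions, not MRDet's actual design.

```python
import torch
import torch.nn as nn

class MultiHeadDetector(nn.Module):
    """Shared input features with separate heads so each subtask
    (class scores, box offsets, rotation angle) learns its own features."""
    def __init__(self, in_ch=256, n_classes=15):
        super().__init__()
        def head(out_dim):
            return nn.Sequential(
                nn.Conv2d(in_ch, 256, 3, padding=1), nn.ReLU(),
                nn.Conv2d(256, out_dim, 1))
        self.cls_head = head(n_classes)    # per-location class scores
        self.box_head = head(4)            # (dx, dy, dw, dh) offsets
        self.angle_head = head(1)          # orientation of the box

    def forward(self, feats):
        return {"cls": self.cls_head(feats),
                "box": self.box_head(feats),
                "angle": self.angle_head(feats)}

# Toy usage on an 8x8 feature map produced by some backbone / proposal stage.
out = MultiHeadDetector()(torch.randn(2, 256, 8, 8))
print({k: tuple(v.shape) for k, v in out.items()})
```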
arXiv Detail & Related papers (2020-12-24T06:36:48Z) - SyNet: An Ensemble Network for Object Detection in UAV Images [13.198689566654107]
In this paper, we propose an ensemble network, SyNet, that combines a multi-stage method with a single-stage one.
As building blocks, CenterNet and Cascade R-CNN with pretrained feature extractors are utilized along with an ensembling strategy.
We report the state of the art results obtained by our proposed solution on two different datasets.
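A minimal sketch of one way to ensemble boxes from a multi-stage and a single-stage detector follows (per-model score weighting followed by greedy NMS); the weights, threshold, and toy boxes are assumptions and not necessarily SyNet's ensembling strategy.

```python
import numpy as np

def iou(a, b):
    """IoU between one box and an array of boxes, boxes as (x1, y1, x2, y2)."""
    x1 = np.maximum(a[0], b[:, 0]); y1 = np.maximum(a[1], b[:, 1])
    x2 = np.minimum(a[2], b[:, 2]); y2 = np.minimum(a[3], b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def ensemble_nms(dets_a, dets_b, w_a=0.6, w_b=0.4, iou_thr=0.5):
    """Pool boxes from two detectors, reweight scores per model, then keep
    the highest-scoring box in each overlapping cluster (greedy NMS)."""
    boxes = np.vstack([dets_a[:, :4], dets_b[:, :4]])
    scores = np.concatenate([w_a * dets_a[:, 4], w_b * dets_b[:, 4]])
    order, keep = np.argsort(-scores), []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        order = order[iou(boxes[i], boxes[order]) < iou_thr]
    return boxes[keep], scores[keep]

# Toy usage: one overlapping pair across detectors plus one unique box.
a = np.array([[10, 10, 50, 50, 0.9]])                       # x1, y1, x2, y2, score
b = np.array([[12, 11, 52, 49, 0.8], [100, 100, 140, 150, 0.7]])
print(ensemble_nms(a, b))
```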
arXiv Detail & Related papers (2020-12-23T21:38:32Z) - Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
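A minimal sketch of predicting pedestrians via object center and scale rather than direct box regression follows (a center heatmap plus a log-height map decoded with a fixed aspect ratio); the head layout, stride, and aspect ratio are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CenterScaleHead(nn.Module):
    """Anchor-free head: a per-pixel center heatmap plus a per-pixel scale
    (log-height) map, from which boxes are decoded with a fixed aspect ratio."""
    def __init__(self, in_ch=128):
        super().__init__()
        self.center = nn.Conv2d(in_ch, 1, 1)    # centerness logits
        self.scale = nn.Conv2d(in_ch, 1, 1)     # log object height

    def forward(self, fused_feats):
        return torch.sigmoid(self.center(fused_feats)), self.scale(fused_feats)

def decode(center_map, scale_map, stride=4, score_thr=0.5, aspect=0.41):
    """Turn heatmap peaks into (x1, y1, x2, y2, score) boxes."""
    boxes = []
    heat, logh = center_map[0, 0], scale_map[0, 0]
    ys, xs = torch.nonzero(heat > score_thr, as_tuple=True)
    for y, x in zip(ys.tolist(), xs.tolist()):
        h = float(torch.exp(logh[y, x])) * stride
        w = aspect * h
        cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                      float(heat[y, x])))
    return boxes

# Toy usage on a fused two-modality feature map of spatial size 16x16.
cmap, smap = CenterScaleHead()(torch.randn(1, 128, 16, 16))
print(len(decode(cmap, smap)))
```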
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.