An Autonomous Drone Swarm for Detecting and Tracking Anomalies among Dense Vegetation
- URL: http://arxiv.org/abs/2407.10754v1
- Date: Mon, 15 Jul 2024 14:31:21 GMT
- Title: An Autonomous Drone Swarm for Detecting and Tracking Anomalies among Dense Vegetation
- Authors: Rakesh John Amala Arokia Nathan, Sigrid Strand, Daniel Mehrwald, Dmitriy Shutin, Oliver Bimber
- Abstract summary: We show that swarms of drones make detecting and tracking heavily occluded targets practically feasible.
In our real-life field experiments with a swarm of six drones, we achieved an average positional accuracy of 0.39 m with an average precision of 93.2%.
We show that sensor noise can effectively be included in the synthetic aperture image integration process.
- Score: 3.6394530599964026
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Swarms of drones offer an increased sensing aperture, and having them mimic the behaviors of natural swarms enhances sampling by adapting the aperture to local conditions. We demonstrate that such an approach makes detecting and tracking heavily occluded targets practically feasible. While object classification applied to conventional aerial images generalizes poorly to the randomness of occlusion and is therefore inefficient even under lightly occluded conditions, anomaly detection applied to synthetic aperture integral images is robust for dense vegetation, such as forests, and is independent of pre-trained classes. Our autonomous swarm searches the environment for occurrences of the unknown or unexpected, tracking them while continuously adapting its sampling pattern to optimize for local viewing conditions. In our real-life field experiments with a swarm of six drones, we achieved an average positional accuracy of 0.39 m with an average precision of 93.2% and an average recall of 95.9%. Here, an adapted particle swarm optimization considers detection confidences and predicted target appearance. We show that sensor noise can effectively be included in the synthetic aperture image integration process, removing the need for a computationally costly optimization of high-dimensional parameter spaces. Finally, we present a complete hardware and software framework that supports low-latency transmission (approx. 80 ms round-trip time) and fast processing (approx. 600 ms per formation step) of extensive (70-120 Mbit/s) video and telemetry data, and control of swarms of up to ten drones.
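The abstract names two computational ingredients without giving code: synthetic aperture image integration and an adapted particle swarm optimization. The sketches below are illustrative only and rest on assumptions the abstract does not state (pinhole cameras with known poses, single-channel thermal-style frames, a planar focal/ground plane, and a simple per-frame weighting as a stand-in for the noise-aware integration); all function and parameter names are ours, not the authors'.

```python
import numpy as np
import cv2  # OpenCV, assumed here only for perspective warping


def focal_plane_homography(K, R, t, n=np.array([0.0, 0.0, 1.0]), d=30.0):
    """Plane-induced homography that maps pixels of camera i onto the reference
    view through the chosen focal (ground) plane.
    (R, t): pose of camera i in the reference frame (X_i = R @ X_ref + t).
    (n, d): focal-plane normal and distance, expressed in the reference frame."""
    H_ref_to_i = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return np.linalg.inv(H_ref_to_i)  # maps camera-i pixels -> reference view


def integrate_synthetic_aperture(images, homographies, weights=None):
    """Warp single-drone (e.g. thermal) frames onto the focal plane and average
    them. Targets on the focal plane stay registered and remain visible, while
    occluders above it (the canopy) are misregistered across views and blur out.
    `weights` can down-weight noisier frames -- a crude stand-in for folding
    sensor noise into the integration, as the abstract suggests."""
    h, w = images[0].shape[:2]
    weights = np.ones(len(images)) if weights is None else np.asarray(weights, float)
    acc = np.zeros((h, w), dtype=np.float32)
    for img, H, wgt in zip(images, homographies, weights):
        acc += wgt * cv2.warpPerspective(img.astype(np.float32), H, (w, h))
    return acc / weights.sum()
```

The formation adaptation could then sit on top of a canonical particle swarm update; the paper's actual objective, which combines anomaly-detection confidences with predicted target appearance, is not reproduced here, so `fitness` below is a placeholder supplied by the caller.

```python
def pso_step(pos, vel, pbest, pbest_score, fitness, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update over candidate sampling poses (hypothetical
    parameterization). Returns the advanced particle state."""
    rng = rng or np.random.default_rng()
    gbest = pbest[np.argmax(pbest_score)]          # best pose found by any particle
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    scores = np.array([fitness(p) for p in pos])   # evaluate new candidate poses
    improved = scores > pbest_score
    pbest[improved], pbest_score[improved] = pos[improved], scores[improved]
    return pos, vel, pbest, pbest_score
```

In such a setup the integral image would feed the anomaly detector, whose confidences would in turn enter the fitness evaluated at each swarm update; the latency and bandwidth figures quoted in the abstract bound how often a formation step of this kind can run.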
Related papers
- Is That Rain? Understanding Effects on Visual Odometry Performance for Autonomous UAVs and Efficient DNN-based Rain Classification at the Edge [1.8936798735951972]
State-of-the-art local tracking and trajectory planning are typically performed using camera sensor input to the flight control algorithm.
We show that a worst-case average tracking error of 1.5 m is possible for a state-of-the-art visual odometry system.
We train a set of deep neural network models suited to mobile and constrained deployment scenarios to determine the extent to which it may be possible to efficiently and accurately classify these 'rainy' conditions.
arXiv Detail & Related papers (2024-07-17T15:47:25Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Enhancing Lidar-based Object Detection in Adverse Weather using Offset Sequences in Time [1.1725016312484975]
Lidar-based object detection is significantly affected by adverse weather conditions such as rain and fog.
Our research provides a comprehensive study of effective methods for mitigating the effects of adverse weather on the reliability of lidar-based object detection.
arXiv Detail & Related papers (2024-01-17T08:31:58Z)
- Intelligent Anomaly Detection for Lane Rendering Using Transformer with Self-Supervised Pre-Training and Customized Fine-Tuning [8.042684255871707]
This paper transforms lane rendering image anomaly detection into a classification problem.
It proposes a four-phase pipeline consisting of data pre-processing, self-supervised pre-training with the masked image modeling (MiM) method, customized fine-tuning using a cross-entropy-based loss with label smoothing, and post-processing.
Results indicate that the proposed pipeline exhibits superior performance in lane rendering image anomaly detection.
arXiv Detail & Related papers (2023-12-07T16:10:10Z)
- Synthetic Aperture Sensing for Occlusion Removal with Drone Swarms [4.640835690336653]
We demonstrate how efficient autonomous drone swarms can be in detecting and tracking occluded targets in densely forested areas.
Exploration and optimization of local viewing conditions, such as occlusion density and target view obliqueness, provide much faster and much more reliable results than previous, blind sampling strategies.
arXiv Detail & Related papers (2022-12-30T13:19:15Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications [71.1911136637719]
We show how provable guarantees can be naturally applied to other real-world settings.
We show how specific intervals of fuel injection quantities can be targeted to maximize robustness for certain ranges.
arXiv Detail & Related papers (2020-06-30T23:33:17Z)
- Leveraging Uncertainties for Deep Multi-modal Object Detection in Autonomous Driving [12.310862288230075]
This work presents a probabilistic deep neural network that combines LiDAR point clouds and RGB camera images for robust, accurate 3D object detection.
We explicitly model uncertainties in the classification and regression tasks, and leverage uncertainties to train the fusion network via a sampling mechanism.
arXiv Detail & Related papers (2020-02-01T14:24:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.