High-throughput Visual Nano-drone to Nano-drone Relative Localization using Onboard Fully Convolutional Networks
- URL: http://arxiv.org/abs/2402.13756v3
- Date: Wed, 17 Apr 2024 13:32:15 GMT
- Title: High-throughput Visual Nano-drone to Nano-drone Relative Localization using Onboard Fully Convolutional Networks
- Authors: Luca Crupi, Alessandro Giusti, Daniele Palossi
- Abstract summary: Relative drone-to-drone localization is a fundamental building block for any swarm operation.
We present a vertically integrated system based on a novel vision-based fully convolutional neural network (FCNN).
Our model improves R-squared from 32% to 47% on the horizontal image coordinate and from 18% to 55% on the vertical image coordinate, on a real-world dataset of 30k images.
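For reference, the R-squared values quoted above presumably denote the standard coefficient of determination, computed independently for the horizontal and vertical image coordinates over the test set; this is a generic statement of the metric, not a formula taken from the paper:
```latex
% Coefficient of determination for one image coordinate, over N test images:
% y_i = ground-truth coordinate, \hat{y}_i = prediction, \bar{y} = mean ground truth.
R^2 = 1 - \frac{\sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{N} \left( y_i - \bar{y} \right)^2}
```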
- Score: 51.23613834703353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relative drone-to-drone localization is a fundamental building block for any swarm operation. We address this task in the context of miniaturized nano-drones, i.e., 10 cm in diameter, which attract ever-growing interest due to the novel use cases enabled by their reduced form factor. The price of their versatility is limited onboard resources, i.e., sensors, processing units, and memory, which constrain the complexity of the onboard algorithms. A traditional solution to overcome these limitations is to deploy lightweight deep learning models directly aboard nano-drones. This work tackles the challenging relative pose estimation between nano-drones using only a gray-scale low-resolution camera and an ultra-low-power System-on-Chip (SoC) hosted onboard. We present a vertically integrated system based on a novel vision-based fully convolutional neural network (FCNN), which runs at 39 Hz within 101 mW onboard a Crazyflie nano-drone extended with the GWT GAP8 SoC. We compare our FCNN against three State-of-the-Art (SoA) systems. Considering the best-performing SoA approach, our model improves R-squared from 32% to 47% on the horizontal image coordinate and from 18% to 55% on the vertical image coordinate, on a real-world dataset of 30k images. Finally, our in-field tests show a 37% reduction in average tracking error compared to a previous SoA work and an endurance of up to the entire 4-minute battery lifetime.
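To make the setup concrete, below is a minimal, hypothetical PyTorch sketch of the kind of fully convolutional model the abstract describes: a small stack of strided convolutions that turns a gray-scale low-resolution frame into a coarse score map, decoded into horizontal/vertical image coordinates of the peer drone. The input resolution, channel counts, and soft-argmax decoding are illustrative assumptions, not the authors' architecture or their quantized GAP8 deployment code.
```python
# Minimal sketch (not the authors' released code) of an FCNN for
# drone-to-drone relative localization from gray-scale frames.
import torch
import torch.nn as nn


class RelativeLocalizationFCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 1/2 resolution
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 1/4
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 1/8
            nn.Conv2d(64, 1, 1),                                   # 1-channel score map
        )

    def forward(self, x):
        # x: (B, 1, H, W) gray-scale frames -> (B, 1, H/8, W/8) score map
        return self.backbone(x)


def soft_argmax_uv(score_map):
    """Decode the score map into normalized (u, v) coordinates in [0, 1]."""
    b, _, h, w = score_map.shape
    probs = torch.softmax(score_map.flatten(1), dim=1).view(b, h, w)
    ys = torch.linspace(0, 1, h, device=score_map.device)
    xs = torch.linspace(0, 1, w, device=score_map.device)
    v = (probs.sum(dim=2) * ys).sum(dim=1)  # expected row (vertical coordinate)
    u = (probs.sum(dim=1) * xs).sum(dim=1)  # expected column (horizontal coordinate)
    return u, v


if __name__ == "__main__":
    model = RelativeLocalizationFCNN()
    frame = torch.rand(1, 1, 160, 160)      # placeholder gray-scale frame
    u, v = soft_argmax_uv(model(frame))
    print(f"predicted normalized image coordinates: u={u.item():.3f}, v={v.item():.3f}")
```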
Related papers
- Training on the Fly: On-device Self-supervised Learning aboard Nano-drones within 20 mW [52.280742520586756]
Miniaturized cyber-physical systems (CPSes) powered by tiny machine learning (TinyML), such as nano-drones, are becoming an increasingly attractive technology.
Simple electronics make these CPSes inexpensive, but strongly limit the computational, memory, and sensing resources available on board.
We present a novel on-device fine-tuning approach that relies only on the limited ultra-low power resources available aboard nano-drones.
arXiv Detail & Related papers (2024-08-06T13:11:36Z)
- Tiny-PULP-Dronets: Squeezing Neural Networks for Faster and Lighter Inference on Multi-Tasking Autonomous Nano-Drones [12.96119439129453]
This work moves from PULP-Dronet, a State-of-the-Art convolutional neural network for autonomous navigation on nano-drones, to Tiny-PULP-Dronet, a novel methodology that squeezes model size by more than one order of magnitude.
This massive reduction paves the way towards affordable multi-tasking on nano-drones, a fundamental requirement for achieving high-level intelligence.
arXiv Detail & Related papers (2024-07-02T16:24:57Z)
- On-device Self-supervised Learning of Visual Perception Tasks aboard Hardware-limited Nano-quadrotors [53.59319391812798]
Sub-50-gram nano-drones are gaining momentum in both academia and industry.
Their most compelling applications rely on onboard deep learning models for perception.
When deployed in unknown environments, these models often underperform due to domain shift.
We propose, for the first time, on-device learning aboard nano-drones, where the first part of the in-field mission is dedicated to self-supervised fine-tuning.
arXiv Detail & Related papers (2024-03-06T22:04:14Z)
- Adaptive Deep Learning for Efficient Visual Pose Estimation aboard Ultra-low-power Nano-drones [5.382126081742012]
We present a novel adaptive deep learning-based mechanism for the efficient execution of a vision-based human pose estimation task.
On a real-world dataset and the actual nano-drone hardware, our best-performing system shows a 28% latency reduction at the same mean absolute error (MAE), a 3% MAE reduction at iso-latency, and an absolute peak performance 6% better than the SoA model.
arXiv Detail & Related papers (2024-01-26T23:04:26Z)
- A3D: Adaptive, Accurate, and Autonomous Navigation for Edge-Assisted Drones [12.439787085435661]
We propose A3D, an edge server assisted drone navigation framework.
A3D can reduce end-to-end latency by 28.06% and extend the flight distance by up to 27.28% compared with non-adaptive solutions.
arXiv Detail & Related papers (2023-07-19T10:23:28Z)
- Channel-Aware Distillation Transformer for Depth Estimation on Nano Drones [9.967643080731683]
This paper presents a lightweight CNN depth estimation network deployed on nano drones for obstacle avoidance.
Inspired by Knowledge Distillation (KD), a Channel-Aware Distillation Transformer (CADiT) is proposed to facilitate the small network.
The proposed method is validated on the KITTI dataset and tested on a nano drone Crazyflie, with an ultra-low power microprocessor GAP8.
arXiv Detail & Related papers (2023-03-18T10:45:34Z)
- Ultra-low Power Deep Learning-based Monocular Relative Localization Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm-sized target nano-drone using only low-resolution monochrome images, at distances of up to 2 m.
arXiv Detail & Related papers (2023-03-03T14:14:08Z)
- Deep Neural Network Architecture Search for Accurate Visual Pose Estimation aboard Nano-UAVs [69.19616451596342]
Miniaturized unmanned aerial vehicles (UAVs) are an emerging and trending topic.
We leverage a novel neural architecture search (NAS) technique to automatically identify several convolutional neural networks (CNNs) for a visual pose estimation task.
Our results improve the State-of-the-Art by reducing the in-field control error by 32% while achieving a real-time onboard inference rate of 10Hz@10mW and 50Hz@90mW.
arXiv Detail & Related papers (2023-03-03T14:02:09Z)
- TransVisDrone: Spatio-Temporal Transformer for Vision-based Drone-to-Drone Detection in Aerial Videos [57.92385818430939]
Drone-to-drone detection using visual feed has crucial applications, such as detecting drone collisions, detecting drone attacks, or coordinating flight with other drones.
Existing methods are computationally costly, follow non-end-to-end optimization, and have complex multi-stage pipelines, making them less suitable for real-time deployment on edge devices.
We propose a simple yet effective framework, TransVisDrone, that provides an end-to-end solution with higher computational efficiency.
arXiv Detail & Related papers (2022-10-16T03:05:13Z)