Ultra-low Power Deep Learning-based Monocular Relative Localization
Onboard Nano-quadrotors
- URL: http://arxiv.org/abs/2303.01940v1
- Date: Fri, 3 Mar 2023 14:14:08 GMT
- Title: Ultra-low Power Deep Learning-based Monocular Relative Localization
Onboard Nano-quadrotors
- Authors: Stefano Bonato, Stefano Carlo Lambertenghi, Elia Cereda, Alessandro
Giusti, Daniele Palossi
- Abstract summary: This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10cm-size target nano-drone by employing only low-resolution monochrome images, up to 2m distance.
- Score: 64.68349896377629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise relative localization is a crucial functional block for swarm
robotics. This work presents a novel autonomous end-to-end system that
addresses the monocular relative localization, through deep neural networks
(DNNs), of two peer nano-drones, i.e., sub-40g in weight and with sub-100mW of
processing power. To cope with the ultra-constrained nano-drone platform, we
propose a vertically-integrated framework, from the dataset collection to the
final in-field deployment, including dataset augmentation, quantization, and
system optimizations. Experimental results show that our DNN can precisely
localize a 10cm-size target nano-drone by employing only low-resolution
monochrome images, up to ~2m distance. On a disjoint testing dataset our model
yields a mean R2 score of 0.42 and a root mean square error of 18cm, which
results in a mean in-field prediction error of 15cm and in a closed-loop
control error of 17cm, over a ~60s-flight test. Ultimately, the proposed system
improves the State-of-the-Art by showing long-endurance tracking performance
(up to 2min continuous tracking), generalization capabilities being deployed in
a never-seen-before environment, and requiring a minimal power consumption of
95mW for an onboard real-time inference-rate of 48Hz.
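The abstract reports a mean R2 score of 0.42 and a root-mean-square error of 18cm on a disjoint test set. As a point of reference only, and not the paper's actual evaluation pipeline, the minimal Python sketch below shows how these standard regression metrics are typically computed over predicted 3-D relative positions; the arrays and the 18cm noise level are synthetic and purely illustrative.

```python
import numpy as np

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination for a single output dimension."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root-mean-square error over all samples and dimensions."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic stand-in data: N ground-truth vs. predicted relative positions [x, y, z] in metres.
rng = np.random.default_rng(0)
gt = rng.uniform(-2.0, 2.0, size=(1000, 3))
pred = gt + rng.normal(scale=0.18, size=gt.shape)  # ~18 cm Gaussian error, for illustration only

mean_r2 = np.mean([r2_score(gt[:, i], pred[:, i]) for i in range(3)])
print(f"mean R2: {mean_r2:.2f}")
print(f"RMSE: {rmse(gt, pred) * 100:.1f} cm")
```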
Related papers
- Training on the Fly: On-device Self-supervised Learning aboard Nano-drones within 20 mW [52.280742520586756]
Miniaturized cyber-physical systems (CPSes) powered by tiny machine learning (TinyML), such as nano-drones, are becoming an increasingly attractive technology.
Simple electronics make these CPSes inexpensive, but strongly limit the computational, memory, and sensing resources available on board.
We present a novel on-device fine-tuning approach that relies only on the limited ultra-low power resources available aboard nano-drones.
arXiv Detail & Related papers (2024-08-06T13:11:36Z)
- On-device Self-supervised Learning of Visual Perception Tasks aboard Hardware-limited Nano-quadrotors [53.59319391812798]
Sub-50-gram nano-drones are gaining momentum in both academia and industry.
Their most compelling applications rely on onboard deep learning models for perception.
When deployed in unknown environments, these models often underperform due to domain shift.
We propose, for the first time, on-device learning aboard nano-drones, where the first part of the in-field mission is dedicated to self-supervised fine-tuning.
arXiv Detail & Related papers (2024-03-06T22:04:14Z)
- Optimized Deployment of Deep Neural Networks for Visual Pose Estimation on Nano-drones [9.806742394395322]
Miniaturized unmanned aerial vehicles (UAVs) are gaining popularity due to their small size, enabling new tasks such as indoor navigation or people monitoring.
This work proposes a new automatic optimization pipeline for visual pose estimation tasks using Deep Neural Networks (DNNs).
Our results improve the state-of-the-art, reducing inference latency by up to 3.22x at iso-error.
arXiv Detail & Related papers (2024-02-23T11:35:57Z)
- High-throughput Visual Nano-drone to Nano-drone Relative Localization using Onboard Fully Convolutional Networks [51.23613834703353]
Relative drone-to-drone localization is a fundamental building block for any swarm operations.
We present a vertically integrated system based on a novel vision-based fully convolutional neural network (FCNN).
Our model results in an R-squared improvement from 32 to 47% on the horizontal image coordinate and from 18 to 55% on the vertical image coordinate, on a real-world dataset of 30k images.
arXiv Detail & Related papers (2024-02-21T12:34:31Z)
- Semantic Segmentation in Satellite Hyperspectral Imagery by Deep Learning [54.094272065609815]
We propose a lightweight 1D-CNN model, 1D-Justo-LiuNet, which outperforms state-of-the-art models in the hyperspectral domain.
1D-Justo-LiuNet achieves the highest accuracy (0.93) with the smallest model size (4,563 parameters) among all tested models.
arXiv Detail & Related papers (2023-10-24T21:57:59Z)
- Real-time Monocular Depth Estimation on Embedded Systems [32.40848141360501]
Two efficient architectures, RT-MonoDepth and RT-MonoDepth-S, are proposed.
RT-MonoDepth and RT-MonoDepth-S achieve frame rates of 18.4 and 30.5 FPS on NVIDIA Jetson Nano and 253.0 and 364.1 FPS on Jetson AGX Orin, respectively.
arXiv Detail & Related papers (2023-08-21T08:59:59Z)
- Real-Time Monocular Human Depth Estimation and Segmentation on Embedded Systems [13.490605853268837]
Estimating a scene's depth to achieve collision avoidance against moving pedestrians is a crucial and fundamental problem in robotics.
This paper proposes a novel, low complexity network architecture for fast and accurate human depth estimation and segmentation in indoor environments.
arXiv Detail & Related papers (2021-08-24T03:26:08Z)
- Planetary UAV localization based on Multi-modal Registration with Pre-existing Digital Terrain Model [0.5156484100374058]
We propose a multi-modal registration-based SLAM algorithm, which estimates the location of a planetary UAV using a nadir-view camera on the UAV.
To overcome the scale and appearance differences between on-board UAV images and the pre-installed digital terrain model, a theoretical model is proposed to prove that topographic features of the UAV image and the DEM can be correlated in the frequency domain via the cross power spectrum (a generic phase-correlation sketch follows this list).
To test the robustness and effectiveness of the proposed localization algorithm, a new cross-source drone-based localization dataset for planetary exploration is proposed.
arXiv Detail & Related papers (2021-06-24T02:54:01Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation (a simplified per-layer latency sketch follows this list).
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models, statistical models combined with the roofline model, and a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
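As referenced in the planetary UAV entry above, correlating two views of the same terrain via the normalized cross power spectrum is the classic phase-correlation technique. The sketch below is a generic NumPy illustration of that frequency-domain step only, assuming pre-processed, same-size patches; it is not the authors' multi-modal registration pipeline, and the patch names and sizes are hypothetical.

```python
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray):
    """Estimate the circular (row, col) shift mapping patch `a` onto patch `b`
    using the normalized cross power spectrum."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross_power = np.conj(A) * B
    cross_power /= np.abs(cross_power) + 1e-12    # keep only the phase
    correlation = np.fft.ifft2(cross_power).real  # peak sits at the relative shift
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map indices above the midpoint back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, correlation.shape))

# Hypothetical usage: a nadir UAV patch vs. a DEM-derived patch (here simulated by a known shift).
uav_patch = np.random.default_rng(0).random((256, 256))
dem_patch = np.roll(uav_patch, shift=(12, -7), axis=(0, 1))
print(phase_correlation(uav_patch, dem_patch))  # expected: (12, -7)
```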
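The ANNETTE entry mentions building execution-time estimators from micro-kernel benchmarks and stacking them for whole-network prediction. The sketch below is only a simplified illustration of that general idea: a per-layer-type linear latency model fitted on hypothetical benchmark measurements and summed over a network. It is not ANNETTE's actual stacked model, and all numbers are made up for the example.

```python
import numpy as np

# Hypothetical micro-kernel benchmark records: (layer_type, MACs, measured_ms).
benchmarks = [
    ("conv", 2.0e6, 0.41), ("conv", 8.0e6, 1.55), ("conv", 3.2e7, 6.30),
    ("fc",   1.0e5, 0.02), ("fc",   1.0e6, 0.19), ("fc",   4.0e6, 0.75),
]

def fit_latency_models(records):
    """Fit latency_ms ~ a * MACs + b per layer type with least squares."""
    models = {}
    for layer_type in {t for t, _, _ in records}:
        macs = np.array([m for t, m, _ in records if t == layer_type])
        ms = np.array([y for t, _, y in records if t == layer_type])
        design = np.stack([macs, np.ones_like(macs)], axis=1)
        models[layer_type], *_ = np.linalg.lstsq(design, ms, rcond=None)
    return models

def estimate_network_latency(layers, models):
    """Sum the per-layer estimates for a list of (layer_type, MACs) pairs."""
    return sum(float(models[t] @ np.array([macs, 1.0])) for t, macs in layers)

models = fit_latency_models(benchmarks)
toy_network = [("conv", 1.6e7), ("conv", 8.0e6), ("fc", 2.0e6)]
print(f"estimated latency: {estimate_network_latency(toy_network, models):.2f} ms")
```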