Adaptive Deep Learning for Efficient Visual Pose Estimation aboard
Ultra-low-power Nano-drones
- URL: http://arxiv.org/abs/2401.15236v2
- Date: Fri, 23 Feb 2024 15:07:38 GMT
- Title: Adaptive Deep Learning for Efficient Visual Pose Estimation aboard
Ultra-low-power Nano-drones
- Authors: Beatrice Alessandra Motetti, Luca Crupi, Mustafa Omer Mohammed Elamin
Elshaigi, Matteo Risso, Daniele Jahier Pagliari, Daniele Palossi, Alessio
Burrello
- Abstract summary: We present a novel adaptive deep learning-based mechanism for the efficient execution of a vision-based human pose estimation task.
On a real-world dataset and the actual nano-drone hardware, our best-performing system achieves a 28% latency reduction at the same mean absolute error (MAE), a 3% MAE reduction at iso-latency, and the absolute peak performance, i.e., 6% better than the SoA model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sub-10 cm diameter nano-drones are gaining momentum thanks to their
applicability in scenarios precluded for larger drones, such as narrow
environments and operation close to humans. However, their tiny form factor also brings
their major drawback: ultra-constrained memory and processors for the onboard
execution of their perception pipelines. Therefore, lightweight deep
learning-based approaches are becoming increasingly popular, stressing how
computational efficiency and energy-saving are paramount as they can make the
difference between a fully working closed-loop system and a failing one. In
this work, to maximize the exploitation of the ultra-limited resources aboard
nano-drones, we present a novel adaptive deep learning-based mechanism for the
efficient execution of a vision-based human pose estimation task. We leverage
two State-of-the-Art (SoA) convolutional neural networks (CNNs) with different
regression performance vs. computational cost trade-offs. By combining these
CNNs with three novel adaptation strategies, based on the output's temporal
consistency and on auxiliary tasks, to proactively swap which CNN is executed,
we present six different systems. On a real-world dataset and the actual
nano-drone hardware, our best-performing system, compared to executing only the
bigger and most accurate SoA model, achieves a 28% latency reduction at the same
mean absolute error (MAE), a 3% MAE reduction at iso-latency, and the absolute
peak performance, i.e., 6% better than the SoA model.
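The adaptation idea described above — executing the smaller CNN while its output stays temporally consistent, and swapping in the larger one otherwise — can be sketched as follows. This is a minimal illustration under assumed conditions (scalar pose outputs, a hand-picked jitter threshold, and invented class/parameter names); the paper's three strategies, including the auxiliary-task variants, are more elaborate.

```python
from collections import deque


class AdaptivePoseEstimator:
    """Hypothetical sketch: switch between a small (fast) and a big
    (accurate) pose CNN based on temporal consistency of the output.

    If recent predictions are temporally consistent, keep (or return to)
    the small model; otherwise fall back to the big model.
    """

    def __init__(self, small_model, big_model, window=5, max_jitter=0.15):
        self.small = small_model
        self.big = big_model
        self.window = window          # number of past outputs considered
        self.max_jitter = max_jitter  # consistency threshold (assumed units)
        self.history = deque(maxlen=window)
        self.use_small = True

    def _is_consistent(self):
        # Not enough history yet: be conservative and use the big model.
        if len(self.history) < self.window:
            return False
        # Mean absolute frame-to-frame change of the predicted pose.
        poses = list(self.history)
        diffs = [abs(b - a) for a, b in zip(poses, poses[1:])]
        return sum(diffs) / len(diffs) < self.max_jitter

    def predict(self, frame):
        model = self.small if self.use_small else self.big
        pose = model(frame)
        self.history.append(pose)
        # Proactively pick the model for the next frame.
        self.use_small = self._is_consistent()
        return pose
```

With steady outputs the policy settles on the small model and only pays for the big one when the pose estimate starts to jump between frames, which is the intuition behind trading latency for accuracy adaptively.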
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment
Task-oriented edge computing addresses these constraints by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Training on the Fly: On-device Self-supervised Learning aboard Nano-drones within 20 mW
Miniaturized cyber-physical systems (CPSes) powered by tiny machine learning (TinyML), such as nano-drones, are becoming an increasingly attractive technology.
Simple electronics make these CPSes inexpensive, but strongly limit the computational, memory, and sensing resources available on board.
We present a novel on-device fine-tuning approach that relies only on the limited ultra-low power resources available aboard nano-drones.
arXiv Detail & Related papers (2024-08-06T13:11:36Z)
- Tiny-PULP-Dronets: Squeezing Neural Networks for Faster and Lighter Inference on Multi-Tasking Autonomous Nano-Drones
This work moves from PULP-Dronet, a State-of-the-Art convolutional neural network for autonomous navigation on nano-drones, to Tiny-PULP-Dronet, a novel methodology to squeeze model size by more than one order of magnitude.
This massive reduction paves the way towards affordable multi-tasking on nano-drones, a fundamental requirement for achieving high-level intelligence.
arXiv Detail & Related papers (2024-07-02T16:24:57Z)
- On-device Self-supervised Learning of Visual Perception Tasks aboard Hardware-limited Nano-quadrotors
Sub-50-gram nano-drones are gaining momentum in both academia and industry.
Their most compelling applications rely on onboard deep learning models for perception.
When deployed in unknown environments, these models often underperform due to domain shift.
We propose for the first time, on-device learning aboard nano-drones, where the first part of the in-field mission is dedicated to self-supervised fine-tuning.
arXiv Detail & Related papers (2024-03-06T22:04:14Z)
- Optimized Deployment of Deep Neural Networks for Visual Pose Estimation on Nano-drones
Miniaturized unmanned aerial vehicles (UAVs) are gaining popularity due to their small size, enabling new tasks such as indoor navigation or people monitoring.
This work proposes a new automatic optimization pipeline for visual pose estimation tasks using Deep Neural Networks (DNNs).
Our results improve the state of the art, reducing inference latency by up to 3.22x at iso-error.
arXiv Detail & Related papers (2024-02-23T11:35:57Z)
- High-throughput Visual Nano-drone to Nano-drone Relative Localization using Onboard Fully Convolutional Networks
Relative drone-to-drone localization is a fundamental building block for any swarm operations.
We present a vertically integrated system based on a novel vision-based fully convolutional neural network (FCNN).
Our model yields an R-squared improvement from 32% to 47% on the horizontal image coordinate and from 18% to 55% on the vertical image coordinate, on a real-world dataset of 30k images.
arXiv Detail & Related papers (2024-02-21T12:34:31Z)
- A3D: Adaptive, Accurate, and Autonomous Navigation for Edge-Assisted Drones
We propose A3D, an edge server assisted drone navigation framework.
A3D can reduce end-to-end latency by 28.06% and extend the flight distance by up to 27.28% compared with non-adaptive solutions.
arXiv Detail & Related papers (2023-07-19T10:23:28Z)
- Deep Neural Network Architecture Search for Accurate Visual Pose Estimation aboard Nano-UAVs
Miniaturized unmanned aerial vehicles (UAVs) are an emerging and trending topic.
We leverage a novel neural architecture search (NAS) technique to automatically identify several convolutional neural networks (CNNs) for a visual pose estimation task.
Our results improve the State-of-the-Art by reducing the in-field control error by 32% while achieving a real-time onboard inference rate of 10 Hz @ 10 mW and 50 Hz @ 90 mW.
arXiv Detail & Related papers (2023-03-03T14:02:09Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)