NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter
- URL: http://arxiv.org/abs/2209.06918v1
- Date: Wed, 14 Sep 2022 20:35:51 GMT
- Authors: Rik J. Bouwmeester, Federico Paredes-Vallés and Guido C. H. E. de Croon
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nano quadcopters are small, agile, and cheap platforms that are well suited
for deployment in narrow, cluttered environments. Due to their limited payload,
these vehicles are highly constrained in processing power, rendering
conventional vision-based methods for safe and autonomous navigation
incompatible. Recent machine learning developments promise high-performance
perception at low latency, while dedicated edge computing hardware has the
potential to augment the processing capabilities of these limited devices. In
this work, we present NanoFlowNet, a lightweight convolutional neural network
for real-time dense optical flow estimation on edge computing hardware. We draw
inspiration from recent advances in semantic segmentation for the design of
this network. Additionally, we guide the learning of optical flow using motion
boundary ground truth data, which improves performance with no impact on
latency. Validation results on the MPI-Sintel dataset show the high performance
of the proposed network given its constrained architecture. Additionally, we
successfully demonstrate the capabilities of NanoFlowNet by deploying it on the
ultra-low power GAP8 microprocessor and by applying it to vision-based obstacle
avoidance on board a Bitcraze Crazyflie, a 34 g nano quadcopter.
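For context on the task itself: dense optical flow assigns a 2-D motion vector to (nearly) every pixel between two consecutive frames. The sketch below is not NanoFlowNet (which is a convolutional network), but a minimal, illustrative block-matching baseline in plain NumPy; the function name and all parameters are invented for illustration.

```python
import numpy as np

def dense_flow_block_matching(prev, curr, patch=4, search=2):
    """Estimate a dense flow field by exhaustive block matching.

    For every patch in `prev`, find the integer displacement (within
    +/- `search` pixels) that minimizes the sum of absolute differences
    in `curr`. Returns an (H//patch, W//patch, 2) array of (dy, dx).
    """
    h, w = prev.shape
    flow = np.zeros((h // patch, w // patch, 2))
    for by in range(h // patch):
        for bx in range(w // patch):
            y, x = by * patch, bx * patch
            ref = prev[y:y + patch, x:x + patch]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    # skip candidate windows that fall outside the frame
                    if 0 <= yy and yy + patch <= h and 0 <= xx and xx + patch <= w:
                        cost = np.abs(curr[yy:yy + patch, xx:xx + patch] - ref).sum()
                        if cost < best:
                            best, best_dv = cost, (dy, dx)
            flow[by, bx] = best_dv
    return flow

# Synthetic check: shift a random image right by 2 pixels and recover the flow.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, 2, axis=1)
flow = dense_flow_block_matching(img, shifted)
```

Interior blocks recover the (0, 2) shift exactly; a learned network like NanoFlowNet replaces this exhaustive search with a few convolutional layers that run orders of magnitude faster on the GAP8.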
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing shifts data analysis from the cloud to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Optimized Deployment of Deep Neural Networks for Visual Pose Estimation on Nano-drones [9.806742394395322]
Miniaturized unmanned aerial vehicles (UAVs) are gaining popularity due to their small size, enabling new tasks such as indoor navigation or people monitoring.
This work proposes a new automatic optimization pipeline for visual pose estimation tasks using Deep Neural Networks (DNNs)
Our results improve the state of the art, reducing inference latency by up to 3.22x at iso-error.
arXiv Detail & Related papers (2024-02-23T11:35:57Z)
- Digital Modeling on Large Kernel Metamaterial Neural Network [7.248553563042369]
We propose a large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN.
With the proposed LMNN, the cost of the convolutional front-end can be offloaded into fabricated optical hardware.
arXiv Detail & Related papers (2023-07-21T19:07:02Z)
- Efficient Feature Description for Small Body Relative Navigation using Binary Convolutional Neural Networks [17.15829643665034]
This paper introduces a novel deep local feature description architecture that leverages binary convolutional neural network layers.
We train and test our models on real images of small bodies from legacy and ongoing missions.
We implement our models onboard a surrogate for the next-generation spacecraft processor and demonstrate feasible runtimes for online feature tracking.
arXiv Detail & Related papers (2023-04-11T05:09:46Z)
- Channel-Aware Distillation Transformer for Depth Estimation on Nano Drones [9.967643080731683]
This paper presents a lightweight CNN depth estimation network deployed on nano drones for obstacle avoidance.
Inspired by Knowledge Distillation (KD), a Channel-Aware Distillation Transformer (CADiT) is proposed to facilitate the small network.
The proposed method is validated on the KITTI dataset and tested on a nano drone Crazyflie, with an ultra-low power microprocessor GAP8.
arXiv Detail & Related papers (2023-03-18T10:45:34Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
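The early-exit idea summarized above can be illustrated with a toy model: an intermediate classifier head lets inference stop as soon as it is already confident, which is what makes run-time preemption and scheduling on an NPU interesting. The NumPy sketch below is hypothetical, not the paper's scheduler; the class name, dimensions, and threshold are all invented for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class EarlyExitNet:
    """Toy two-stage network with an intermediate classifier head.

    If the early head is confident enough (max softmax probability at or
    above `threshold`), inference stops and the later, more expensive
    stage is skipped; otherwise execution continues to the final head.
    """
    def __init__(self, rng, dim=8, classes=3, threshold=0.9):
        self.w1 = rng.standard_normal((dim, dim))
        self.head1 = rng.standard_normal((dim, classes))
        self.w2 = rng.standard_normal((dim, dim))
        self.head2 = rng.standard_normal((dim, classes))
        self.threshold = threshold

    def forward(self, x):
        h = np.tanh(x @ self.w1)
        p = softmax(h @ self.head1)
        if p.max() >= self.threshold:
            return int(p.argmax()), "early"   # preempt: skip stage 2
        h = np.tanh(h @ self.w2)
        p = softmax(h @ self.head2)
        return int(p.argmax()), "final"

rng = np.random.default_rng(0)
net = EarlyExitNet(rng)
exits = [net.forward(rng.standard_normal(8))[1] for _ in range(20)]
```

Because exit points depend on the input, per-request latency becomes dynamic, which is exactly the variability an exit-aware scheduler has to account for.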
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around the 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% accuracy loss compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have large numbers of parameters and high computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
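The coarse-to-fine strategy mentioned above handles large displacements cheaply: match at low resolution first, then upsample the estimate and refine only the residual at each finer level. The NumPy sketch below applies the idea to a single global translation rather than a dense field; it is an illustrative toy under that simplifying assumption, not FastFlowNet, and all names are invented.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def match_shift(a, b, search):
    """Brute-force the integer shift of `b` relative to `a` within +/- `search`."""
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = np.abs(np.roll(b, (-dy, -dx), axis=(0, 1)) - a).mean()
            if cost < best:
                best, best_dv = cost, (dy, dx)
    return best_dv

def coarse_to_fine_shift(a, b, levels=3, search=2):
    """Estimate a large global shift using only a small per-level search window."""
    pyr = [(a, b)]
    for _ in range(levels - 1):
        a, b = downsample(a), downsample(b)
        pyr.append((a, b))
    dy = dx = 0
    for la, lb in reversed(pyr):                      # coarsest level first
        dy, dx = dy * 2, dx * 2                       # upsample current estimate
        lb = np.roll(lb, (-dy, -dx), axis=(0, 1))     # warp with current flow
        rdy, rdx = match_shift(la, lb, search)        # refine the residual
        dy, dx = dy + rdy, dx + rdx
    return dy, dx

# Smooth synthetic frames shifted by (5, -3): larger than the +/-2 window.
y, x = np.mgrid[0:64, 0:64]
frame0 = np.sin(2 * np.pi * x / 64) + np.cos(2 * np.pi * y / 32)
frame1 = np.roll(frame0, (5, -3), axis=(0, 1))
shift = coarse_to_fine_shift(frame0, frame1)
```

Each level only ever searches a +/-2 window, yet the pyramid composes those small refinements into the full 5-pixel displacement; flow networks like FastFlowNet exploit the same structure with learned warping and cost volumes.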
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- EmotionNet Nano: An Efficient Deep Convolutional Neural Network Design for Real-time Facial Expression Recognition [75.74756992992147]
This study proposes EmotionNet Nano, an efficient deep convolutional neural network created through a human-machine collaborative design strategy.
Two different variants of EmotionNet Nano are presented, each with a different trade-off between architectural and computational complexity and accuracy.
We demonstrate that the proposed EmotionNet Nano networks achieved real-time inference speeds (e.g. $>25$ FPS and $>70$ FPS at 15W and 30W, respectively) and high energy efficiency.
arXiv Detail & Related papers (2020-06-29T00:48:05Z)
- DepthNet Nano: A Highly Compact Self-Normalizing Neural Network for Monocular Depth Estimation [76.90627702089357]
DepthNet Nano is a compact deep neural network for monocular depth estimation designed using a human machine collaborative design strategy.
The proposed DepthNet Nano possesses a highly efficient network architecture, while still achieving comparable performance with state-of-the-art networks.
arXiv Detail & Related papers (2020-04-17T00:41:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.