ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing
UAV-Platform with Event-Based and Frame-Based Cameras
- URL: http://arxiv.org/abs/2305.18371v1
- Date: Sat, 27 May 2023 23:08:22 GMT
- Title: ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing
UAV-Platform with Event-Based and Frame-Based Cameras
- Authors: Sizhen Bian, Lukas Schulthess, Georg Rutishauser, Alfio Di Mauro, Luca
Benini, Michele Magno
- Abstract summary: ColibriUAV is a UAV platform with both frame-based and event-based camera interfaces.
Kraken is capable of efficiently processing both event data from a DVS camera and frame data from an RGB camera.
This paper benchmarks the end-to-end latency and power efficiency of the neuromorphic and event-based UAV subsystem.
- Score: 14.24529561007139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The interest in dynamic vision sensor (DVS)-powered unmanned aerial
vehicles (UAVs) is rising, especially due to the microsecond-level reaction time
of the bio-inspired event sensor, which increases robustness and reduces the
latency of perception tasks compared to an RGB camera. This work presents
ColibriUAV, a UAV platform with both frame-based and event-based camera
interfaces for efficient perception and near-sensor processing. The proposed
platform is
designed around Kraken, a novel low-power RISC-V System on Chip with two
hardware accelerators targeting spiking neural networks and deep ternary neural
networks. Kraken is capable of efficiently processing both event data from a DVS
camera and frame data from an RGB camera. A key feature of Kraken is its
integrated, dedicated interface with a DVS camera. This paper benchmarks the
end-to-end latency and power efficiency of the neuromorphic and event-based UAV
subsystem, demonstrating state-of-the-art event-data throughput of 7200 event
frames per second at a power consumption of 10.7 mW, which is over 6.6 times
faster and a hundred times less power-consuming than
the widely-used data reading approach through the USB interface. The overall
sensing and processing power consumption is below 50 mW, with latency in the
millisecond range, making the platform suitable for low-latency autonomous
nano-drones as well.
Related papers
- A 96pJ/Frame/Pixel and 61pJ/Event Anti-UAV System with Hybrid Object Tracking Modes [5.593237736175593]
We present an energy-efficient anti-UAV system that integrates frame-based and event-driven object tracking. The 2 mm² chip achieves 96 pJ per frame per pixel and 61 pJ per event at 0.8 V, and reaches 98.2% recognition accuracy on public UAV datasets.
arXiv Detail & Related papers (2025-12-12T13:53:38Z)
- Event-Based Visual Teach-and-Repeat via Fast Fourier-Domain Cross-Correlation [52.46888249268445]
We present the first event-camera-based visual teach-and-repeat system. We develop a frequency-domain cross-correlation framework that transforms the event stream matching problem into computationally efficient multiplications in frequency space. Experiments using a Prophesee EVK4 HD event camera mounted on an AgileX Scout Mini robot demonstrate successful autonomous navigation.
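The frequency-domain matching idea above rests on the standard correlation theorem: a sliding spatial comparison becomes an element-wise multiplication after an FFT. The following is a minimal NumPy sketch of generic FFT-based cross-correlation, not the authors' pipeline; the function name and the synthetic event images are illustrative only.

```python
import numpy as np

def fft_cross_correlate(query, reference):
    """Circular cross-correlation of two 2-D images via the frequency domain.

    By the correlation theorem, spatial cross-correlation becomes an
    element-wise product after a 2-D FFT, turning an O(N^2) sliding
    comparison into O(N log N) work.
    """
    corr = np.fft.ifft2(np.fft.fft2(query) * np.conj(np.fft.fft2(reference)))
    return corr.real

# Example: recover the (circular) shift between two sparse binary event images.
rng = np.random.default_rng(0)
ref = (rng.random((64, 64)) > 0.95).astype(float)  # synthetic sparse event image
shifted = np.roll(ref, shift=(5, 9), axis=(0, 1))  # simulated camera offset
corr = fft_cross_correlate(shifted, ref)
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)  # correlation peak
```

The correlation peak location directly gives the displacement between the two views, which is the quantity a teach-and-repeat controller would steer on.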
arXiv Detail & Related papers (2025-09-21T23:53:31Z)
- EA: An Event Autoencoder for High-Speed Vision Sensing [0.9401004127785267]
Event cameras offer a promising alternative but pose challenges for object detection due to sparse and noisy event streams. We propose an event autoencoder architecture that efficiently compresses and reconstructs event data. We show that our approach achieves accuracy comparable to the YOLO-v4 model while using up to 35.5× fewer parameters.
arXiv Detail & Related papers (2025-07-09T00:21:15Z)
- Low Latency Visual Inertial Odometry with On-Sensor Accelerated Optical Flow for Resource-Constrained UAVs [13.037162115493393]
On-sensor hardware acceleration is a promising approach to enable low-latency Visual Inertial Odometry (VIO).
This paper assesses the speed-up in a VIO sensor system exploiting a compact optical flow (OF) sensor consisting of a global shutter camera and an Application Specific Integrated Circuit (ASIC).
By replacing the feature tracking logic of the VINS-Mono pipeline with data from this OF camera, we demonstrate a 49.4% reduction in latency and a 53.7% reduction in the compute load of the VIO pipeline over the original VINS-Mono implementation.
arXiv Detail & Related papers (2024-06-19T08:51:19Z)
- Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors [6.03212980984729]
Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications.
Prevention and detection of UAVs are pivotal to guarantee confidentiality and safety.
This paper presents F-UAV-D (Fast Unmanned Aerial Vehicle Detector), an embedded system that enables fast-moving drone detection.
arXiv Detail & Related papers (2024-03-18T15:27:58Z)
- EventTransAct: A video transformer-based framework for event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities compared to standard action recognition in RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z) - EV-Catcher: High-Speed Object Catching Using Low-latency Event-based
Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system is capable of achieving a success rate of 81% in catching balls targeted at different locations, with a velocity of up to 13 m/s even on compute-constrained embedded platforms.
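The BEHI representation is described only at a high level in this summary. A minimal sketch of one plausible reading (a pixel is set if any event fired there during the window, discarding polarity and timestamps) could look like the following; the function name and event layout are assumptions, not the paper's code.

```python
import numpy as np

def binary_event_history_image(events, height, width):
    """Collapse a window of DVS events into one binary image.

    A pixel is 1 if at least one event fired there during the window,
    regardless of polarity or timestamp; duplicate events at the same
    pixel collapse to a single bit, keeping the encoding low-latency
    and cheap to transmit. `events` is an (N, 2) array of (x, y) pairs.
    """
    behi = np.zeros((height, width), dtype=np.uint8)
    xs, ys = events[:, 0], events[:, 1]
    behi[ys, xs] = 1  # vectorized scatter; repeats are idempotent
    return behi

# Two events at the same pixel plus one elsewhere -> two set pixels.
events = np.array([[3, 1], [3, 1], [7, 4]])
img = binary_event_history_image(events, height=8, width=8)
```

A binary image like this can feed a small convolutional network directly, which is what makes the encoding attractive on compute-constrained platforms.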
arXiv Detail & Related papers (2023-04-14T15:23:28Z) - Recurrent Vision Transformers for Object Detection with Event Cameras [62.27246562304705]
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.
RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection.
Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
arXiv Detail & Related papers (2022-12-11T20:28:59Z)
- Sparse Compressed Spiking Neural Network Accelerator for Object Detection [0.1246030133914898]
Spiking neural networks (SNNs) are inspired by the human brain and transmit binary spikes and highly sparse activation maps.
This paper proposes a sparse compressed spiking neural network accelerator that takes advantage of the high sparsity of activation maps and weights.
The experimental result of the neural network shows 71.5% mAP with mixed (1,3) time steps on the IVS 3cls dataset.
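The accelerator itself is hardware, but the payoff of exploiting spike sparsity can be sketched in software. The index-list scheme below is a generic illustration of compressing a highly sparse binary activation map, not the paper's actual on-chip compression format.

```python
import numpy as np

def compress_spike_map(spikes):
    """Store a binary spike map as the flat indices of its nonzero entries.

    With ~90% sparsity, only ~10% of positions need to be stored and
    moved, which is the kind of memory-traffic saving a sparse SNN
    accelerator exploits.
    """
    indices = np.flatnonzero(spikes).astype(np.uint32)
    return indices, spikes.shape

def decompress_spike_map(indices, shape):
    """Reconstruct the dense binary map from the index list."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.uint8)
    flat[indices] = 1
    return flat.reshape(shape)

# Round-trip a synthetic ~90%-sparse spike map.
rng = np.random.default_rng(1)
spikes = (rng.random((32, 32)) > 0.9).astype(np.uint8)
idx, shape = compress_spike_map(spikes)
restored = decompress_spike_map(idx, shape)
```

Real accelerators typically use richer formats (run-length or compressed sparse rows) that also skip the corresponding multiply-accumulate work, but the index list conveys the core idea.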
arXiv Detail & Related papers (2022-05-02T09:56:55Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Fast Motion Understanding with Spatiotemporal Neural Networks and Dynamic Vision Sensors [99.94079901071163]
This paper presents a Dynamic Vision Sensor (DVS) based system for reasoning about high speed motion.
We consider the case of a robot at rest reacting to a small, fast-approaching object at speeds higher than 15 m/s.
We highlight the results of our system on a toy dart moving at 23.4 m/s, with a 24.73° error in $\theta$, an 18.4 mm average discretized radius prediction error, and a 25.03% median time-to-collision prediction error.
arXiv Detail & Related papers (2020-11-18T17:55:07Z)
- RGB-D-E: Event Camera Calibration for Fast 6-DOF Object Tracking [16.06615504110132]
We propose to use an event-based camera to increase the speed of 3D object tracking in 6 degrees of freedom.
This application requires handling very high object speed to convey compelling AR experiences.
We develop a deep learning approach, which combines an existing RGB-D network along with a novel event-based network in a cascade fashion.
arXiv Detail & Related papers (2020-06-09T01:55:48Z)
- EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors [5.674895233111088]
This paper presents a hybrid event-frame approach for detecting and tracking objects recorded by a stationary neuromorphic sensor.
To exploit the background removal property of a static DVS, we propose an event-based binary image creation that signals presence or absence of events in a frame duration.
This is the first time a stationary DVS-based traffic monitoring solution is extensively compared to simultaneously recorded RGB frame-based methods.
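One plausible software analogue of the binary image creation described above is to bin the event stream into fixed frame durations and mark the presence or absence of events per pixel; for a static sensor, moving objects then stand out with the background already removed. The function name and the microsecond timestamp convention below are assumptions for illustration.

```python
import numpy as np

def events_to_binary_frames(ts_us, xs, ys, frame_us, height, width):
    """Bin a DVS event stream into fixed-duration binary frames.

    A pixel in frame f is 1 iff at least one event fired there during
    that frame's duration; polarity and exact timing are discarded,
    which suits hardware-efficient downstream detection and tracking.
    """
    n_frames = int(ts_us.max() // frame_us) + 1
    frames = np.zeros((n_frames, height, width), dtype=np.uint8)
    f = (ts_us // frame_us).astype(int)  # frame index per event
    frames[f, ys, xs] = 1
    return frames

# Four events spanning two 10 ms frame durations.
ts = np.array([100, 250, 10_500, 10_900])  # microsecond timestamps
xs = np.array([2, 2, 5, 6])
ys = np.array([3, 3, 1, 1])
frames = events_to_binary_frames(ts, xs, ys, frame_us=10_000, height=4, width=8)
```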
arXiv Detail & Related papers (2020-05-31T03:01:35Z)
- Near-chip Dynamic Vision Filtering for Low-Bandwidth Pedestrian Detection [99.94079901071163]
This paper presents a novel end-to-end system for pedestrian detection using Dynamic Vision Sensors (DVSs).
We target applications where multiple sensors transmit data to a local processing unit, which executes a detection algorithm.
Our detector is able to perform a detection every 450 ms, with an overall testing F1 score of 83%.
arXiv Detail & Related papers (2020-04-03T17:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.