Event-based Asynchronous Sparse Convolutional Networks
- URL: http://arxiv.org/abs/2003.09148v2
- Date: Fri, 17 Jul 2020 15:52:12 GMT
- Title: Event-based Asynchronous Sparse Convolutional Networks
- Authors: Nico Messikommer, Daniel Gehrig, Antonio Loquercio, Davide Scaramuzza
- Abstract summary: Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
- Score: 54.094244806123235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are bio-inspired sensors that respond to per-pixel brightness
changes in the form of asynchronous and sparse "events". Recently, pattern
recognition algorithms, such as learning-based methods, have made significant
progress with event cameras by converting events into synchronous dense,
image-like representations and applying traditional machine learning methods
developed for standard cameras. However, these approaches discard the spatial
and temporal sparsity inherent in event data at the cost of higher
computational complexity and latency. In this work, we present a general
framework for converting models trained on synchronous image-like event
representations into asynchronous models with identical output, thus directly
leveraging the intrinsic asynchronous and sparse nature of the event data. We
show both theoretically and experimentally that this drastically reduces the
computational complexity and latency of high-capacity, synchronous neural
networks without sacrificing accuracy. In addition, our framework has several
desirable characteristics: (i) it exploits spatio-temporal sparsity of events
explicitly, (ii) it is agnostic to the event representation, network
architecture, and task, and (iii) it does not require any train-time change,
since it is compatible with the standard neural networks' training process. We
thoroughly validate the proposed framework on two computer vision tasks: object
detection and object recognition. In these tasks, we reduce the computational
complexity by up to 20 times with respect to high-latency neural networks. At
the same time, we outperform state-of-the-art asynchronous approaches by up to
24% in prediction accuracy.
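To make the per-event update idea concrete, below is a minimal NumPy sketch, not the authors' implementation: it incrementally updates the output of a single dense convolution when one input pixel changes, touching only the output sites whose receptive field contains that pixel, so the result is identical to a full recomputation at a fraction of the cost. The function names and the single-layer, single-channel setting are illustrative assumptions.

```python
import numpy as np

def conv2d_full(x, w):
    """Dense 'valid' 2D convolution; the synchronous reference computation."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def event_update(x, out, w, r, c, delta):
    """Apply a single-pixel change of size delta at (r, c) and patch only
    the output sites whose receptive field contains that pixel."""
    kh, kw = w.shape
    x[r, c] += delta
    for i in range(max(0, r - kh + 1), min(out.shape[0], r + 1)):
        for j in range(max(0, c - kw + 1), min(out.shape[1], c + 1)):
            out[i, j] += delta * w[r - i, c - j]
    return x, out

# Identical output, but O(k^2) work per event instead of O(H * W * k^2):
rng = np.random.default_rng(0)
x, w = rng.random((8, 8)), rng.random((3, 3))
out = conv2d_full(x, w)
x, out = event_update(x, out, w, r=4, c=5, delta=1.0)
assert np.allclose(out, conv2d_full(x, w))
```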
Related papers
- Event-Stream Super Resolution using Sigma-Delta Neural Network [0.10923877073891444]
Event cameras present unique challenges due to their low resolution and the sparse, asynchronous nature of the data they collect.
Current event super-resolution algorithms are not fully optimized for the distinct data structure produced by event cameras.
This research proposes a method that integrates binary spikes with Sigma-Delta Neural Networks (SDNNs).
arXiv Detail & Related papers (2024-08-13T15:25:18Z)
- A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation [3.355813093377501]
Event cameras operate differently from traditional digital cameras, continuously capturing data and generating binary spikes that encode time, location, and light intensity.
This necessitates the development of innovative, spike-aware algorithms tailored for event cameras.
We propose a purely spike-driven spike transformer network for depth estimation from spiking camera data.
arXiv Detail & Related papers (2024-04-26T11:32:53Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
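To picture the "evolving graph" idea, here is a minimal sketch of building a spatio-temporal graph from an event stream; the helper name, the radius, and the time normalization are illustrative assumptions, not AEGNN's actual construction.

```python
import numpy as np

def build_event_graph(events, radius=3.0, max_neighbors=16):
    """Connect each event to earlier events within a spatio-temporal radius.

    events: (N, 4) array of (x, y, t, polarity) rows, sorted by t (seconds).
    Returns directed edges (i, j), meaning event j aggregates from event i.
    Illustrative only: names, radius, and normalization are assumptions.
    """
    # Normalize time so 1 ms of delay counts like 1 pixel of distance.
    pts = np.stack([events[:, 0], events[:, 1], events[:, 2] * 1e3], axis=1)
    edges = []
    for j in range(len(pts)):
        # Look only backwards in time: the graph evolves as events arrive,
        # so a new event never changes the neighborhoods of older ones.
        d = np.linalg.norm(pts[:j] - pts[j], axis=1)
        nbrs = np.nonzero(d < radius)[0]
        for i in nbrs[np.argsort(d[nbrs])][:max_neighbors]:
            edges.append((int(i), int(j)))
    return edges
```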
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO) and propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from converting deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Event-LSTM: An Unsupervised and Asynchronous Learning-based Representation for Event-based Data [8.931153235278831]
Event cameras are activity-driven bio-inspired vision sensors.
We propose Event-LSTM, an unsupervised Auto-Encoder architecture made up of LSTM layers.
We also push state-of-the-art event de-noising forward by introducing memory into the de-noising process.
arXiv Detail & Related papers (2021-05-10T09:18:52Z)
- Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction [51.072733683919246]
We introduce Recurrent Asynchronous Multimodal (RAM) networks to handle asynchronous and irregular data from multiple sensors.
Inspired by traditional RNNs, RAM networks maintain a hidden state that is updated asynchronously and can be queried at any time to generate a prediction.
We show an improvement over state-of-the-art methods by up to 30% in terms of mean depth absolute error.
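The hidden state that is "updated asynchronously and queried at any time" can be pictured with a toy sketch; the AsyncState class below and its exponential decay are hypothetical illustrations, not the learned recurrent updates of the actual RAM networks.

```python
import math

class AsyncState:
    """Toy queryable asynchronous state (illustrative, not the RAM model)."""

    def __init__(self, size, tau=0.1):
        self.h = [0.0] * size  # hidden state
        self.t = 0.0           # timestamp of the last update
        self.tau = tau         # assumed decay time constant (seconds)

    def _decay(self, t):
        # Fade the state over the elapsed time since the last update.
        a = math.exp(-(t - self.t) / self.tau)
        self.h = [a * v for v in self.h]
        self.t = t

    def update(self, t, x):
        # Fold in a measurement from any sensor, whenever it arrives.
        self._decay(t)
        self.h = [v + u for v, u in zip(self.h, x)]

    def query(self, t):
        # Read out a prediction at an arbitrary time; a real model
        # would pass the state through a learned decoder here.
        self._decay(t)
        return list(self.h)
```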
arXiv Detail & Related papers (2021-02-18T13:24:35Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)