Graph-based Asynchronous Event Processing for Rapid Object Recognition
- URL: http://arxiv.org/abs/2308.14419v1
- Date: Mon, 28 Aug 2023 08:59:57 GMT
- Title: Graph-based Asynchronous Event Processing for Rapid Object Recognition
- Authors: Yijin Li, Han Zhou, Bangbang Yang, Ye Zhang, Zhaopeng Cui, Hujun Bao,
Guofeng Zhang
- Abstract summary: Event cameras capture an asynchronous event stream in which each event encodes pixel location, trigger time, and the polarity of the brightness change.
We introduce a novel graph-based framework for event cameras, namely SlideGCN.
Our approach can efficiently process data event by event, unlocking the low-latency nature of event data while still maintaining the graph's structure internally.
- Score: 59.112755601918074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Different from traditional video cameras, event cameras capture an
asynchronous event stream in which each event encodes pixel location, trigger
time, and the polarity of the brightness change. In this paper, we introduce a
novel graph-based framework for event cameras, namely SlideGCN. Unlike some
recent graph-based methods that use groups of events as input, our approach
can efficiently process data event by event, unlocking the low-latency nature
of event data while still maintaining the graph's structure internally. For
fast graph construction, we develop a radius search algorithm that better
exploits the partially regular structure of the event cloud than generic
k-d tree based methods. Experiments show that our method reduces computational
complexity by up to 100 times relative to current graph-based methods while
keeping state-of-the-art performance on object recognition. Moreover, we
verify the superiority of event-wise processing with our method: when the
state becomes stable, we can give a prediction with high confidence, thus
enabling early recognition. Project page: \url{https://zju3dv.github.io/slide_gcn/}.
Related papers
- Representation Learning on Event Stream via an Elastic Net-incorporated
Tensor Network [1.9515859963221267]
We present a novel representation method that can capture the global correlations of all events in the event stream simultaneously.
Our method achieves effective results in applications such as noise filtering, compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-01-16T02:51:47Z)
- Neuromorphic Imaging and Classification with Graph Learning [11.882239213276392]
Bio-inspired neuromorphic cameras asynchronously record pixel brightness changes and generate sparse event streams.
Due to the multidimensional address-event structure, most existing vision algorithms cannot properly handle asynchronous event streams.
We propose a new graph representation of the event data and couple it with a Graph Transformer to perform accurate neuromorphic classification.
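The exact graph construction is specific to that paper; as a rough illustration, the hedged sketch below connects each event to its nearest neighbours in rescaled (x, y, t) space, the kind of graph a Graph Transformer could consume. The function name, the time-scale factor beta, and the brute-force distance computation are all illustrative assumptions.

```python
import numpy as np

def build_event_graph(events, k=2, beta=1e4):
    """Connect each event to its k nearest neighbours in space-time.
    events: (N, 3) array of (x, y, t). Requires k < N; the brute-force
    distance matrix is fine for small N but a real system would use a
    spatial index."""
    pts = events.astype(np.float64)
    pts[:, 2] *= beta  # put seconds on a pixel-like scale
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]
    src = np.repeat(np.arange(len(pts)), k)
    return np.stack([src, nbrs.ravel()])  # (2, N*k) edge index

events = np.array([[10, 12, 0.001], [11, 12, 0.002],
                   [30, 40, 0.003], [10, 13, 0.004]])
print(build_event_graph(events, k=2))
```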
arXiv Detail & Related papers (2023-09-27T12:58:18Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
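The observation behind this synchronous-to-asynchronous conversion is that a new event changes only the nodes whose receptive field contains it, and the affected set grows by one neighborhood hop per layer. A toy sketch of that idea, assuming cached per-layer activations and a mean aggregator (both illustrative, not AEGNN's actual architecture):

```python
def async_update(feats, adj, new_node, num_layers):
    """feats: {layer: {node: activation}} cached from earlier events.
    adj: {node: set(neighbors)}. Recompute only the affected nodes."""
    affected = {new_node}
    for layer in range(1, num_layers + 1):
        # The affected set grows by one neighborhood hop per layer.
        affected |= {n for a in affected for n in adj[a]}
        for node in affected:
            nbrs = adj[node] | {node}
            feats[layer][node] = sum(feats[layer - 1][n] for n in nbrs) / len(nbrs)
    return feats

# Tiny chain graph: node 3 is the newly inserted event. Zeros stand in
# for activations cached before the event arrived.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
feats = {0: {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0},
         1: {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0},
         2: {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0}}
print(async_update(feats, adj, new_node=3, num_layers=2)[2])
```

Only nodes 2 and 3 are touched at layer 1 and nodes 1-3 at layer 2, while the rest of the graph reuses its cached activations.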
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
- Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
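For a plain convolution, the conversion rests on the fact that a single changed input pixel affects only the output sites whose receptive field covers it. A minimal single-channel sketch, assuming "same" padding and stride 1; conv_at and event_update are hypothetical helpers, not the authors' code:

```python
import numpy as np

def conv_at(inp, kernel, oy, ox):
    """Cross-correlation at one output site ("same" padding, stride 1)."""
    kh, kw = kernel.shape
    H, W = inp.shape
    acc = 0.0
    for ky in range(kh):
        for kx in range(kw):
            iy, ix = oy + ky - kh // 2, ox + kx - kw // 2
            if 0 <= iy < H and 0 <= ix < W:
                acc += kernel[ky, kx] * inp[iy, ix]
    return acc

def event_update(inp, out, kernel, x, y, delta):
    """One event changes pixel (x, y); recompute only the output sites
    whose receptive field covers it, instead of the whole frame."""
    inp[y, x] += delta
    kh, kw = kernel.shape
    H, W = inp.shape
    for oy in range(max(0, y - kh // 2), min(H, y + kh // 2 + 1)):
        for ox in range(max(0, x - kw // 2), min(W, x + kw // 2 + 1)):
            out[oy, ox] = conv_at(inp, kernel, oy, ox)

rng = np.random.default_rng(0)
inp = rng.random((8, 8)); kernel = rng.random((3, 3))
out = np.array([[conv_at(inp, kernel, r, c) for c in range(8)] for r in range(8)])
event_update(inp, out, kernel, x=4, y=4, delta=1.0)  # touches 9 sites, not 64
full = np.array([[conv_at(inp, kernel, r, c) for c in range(8)] for r in range(8)])
assert np.allclose(out, full)  # identical output, far fewer operations
```

The identical-output property in the summary above corresponds to the assertion at the end: the sparse update and the full re-convolution agree exactly, while the per-event cost scales with the kernel footprint rather than the image size.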
arXiv Detail & Related papers (2020-03-20T08:39:49Z)