AEGNN: Asynchronous Event-based Graph Neural Networks
- URL: http://arxiv.org/abs/2203.17149v1
- Date: Thu, 31 Mar 2022 16:21:12 GMT
- Title: AEGNN: Asynchronous Event-based Graph Neural Networks
- Authors: Simon Schaefer, Daniel Gehrig and Davide Scaramuzza
- Abstract summary: Asynchronous Event-based Graph Neural Networks (AEGNNs) generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
- Score: 54.528926463775946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The best performing learning algorithms devised for event cameras work by
first converting events into dense representations that are then processed
using standard CNNs. However, these steps discard both the sparsity and high
temporal resolution of events, leading to high computational burden and
latency. For this reason, recent works have adopted Graph Neural Networks
(GNNs), which process events as "static" spatio-temporal graphs, which are
inherently "sparse". We take this trend one step further by introducing
Asynchronous, Event-based Graph Neural Networks (AEGNNs), a novel
event-processing paradigm that generalizes standard GNNs to process events as
"evolving" spatio-temporal graphs. AEGNNs follow efficient update rules that
restrict recomputation of network activations only to the nodes affected by
each new event, thereby significantly reducing both computation and latency for
event-by-event processing. AEGNNs are easily trained on synchronous inputs and
can be converted to efficient, "asynchronous" networks at test time. We
thoroughly validate our method on object classification and detection tasks,
where we show up to a 200-fold reduction in computational complexity
(FLOPs), with similar or even better performance than state-of-the-art
asynchronous methods. This reduction in computation directly translates to an
8-fold reduction in computational latency when compared to standard GNNs, which
opens the door to low-latency event-based processing.
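To make the update rule concrete, the following is a minimal, hedged Python sketch of the idea described in the abstract: events are inserted into a spatio-temporal graph one at a time, and only the activations of nodes inside the new event's neighbourhood are recomputed, while all other cached activations are reused. The class and variable names are illustrative assumptions, and a single mean-aggregation layer stands in for the full network; this is not the authors' released implementation.

```python
# Minimal sketch (illustrative names, not the authors' code): one graph-conv
# layer whose activations are updated event by event. Only nodes within the
# new event's spatio-temporal neighbourhood are recomputed; everything else
# keeps its cached activation.
import numpy as np

class AsyncGraphConvLayer:
    def __init__(self, weight, radius):
        self.weight = weight      # (d_in, d_out) layer weight
        self.radius = radius      # spatio-temporal radius used to connect events
        self.pos = []             # node positions (x, y, t), one per event
        self.feat = []            # node input features
        self.out = []             # cached node activations

    def _neighbours(self, i):
        # Indices of all nodes within `radius` of node i (including i itself).
        p = np.asarray(self.pos)
        return np.nonzero(np.linalg.norm(p - p[i], axis=1) <= self.radius)[0]

    def insert_event(self, position, feature):
        """Insert one event and recompute activations only where they change."""
        self.pos.append(np.asarray(position, dtype=float))
        self.feat.append(np.asarray(feature, dtype=float))
        self.out.append(np.zeros(self.weight.shape[1]))
        new_idx = len(self.pos) - 1
        # Only the new node and its neighbours see a changed aggregation result.
        affected = self._neighbours(new_idx)
        for i in affected:
            nbrs = self._neighbours(i)
            agg = np.mean([self.feat[j] for j in nbrs], axis=0)  # mean aggregation
            self.out[i] = np.maximum(agg @ self.weight, 0.0)     # linear + ReLU
        return affected

# Usage: stream events one by one; untouched activations are simply reused.
layer = AsyncGraphConvLayer(weight=np.random.randn(1, 8), radius=3.0)
for k in range(5):
    touched = layer.insert_event(position=(k, k, 0.1 * k), feature=[1.0])
    print(f"event {k}: recomputed {len(touched)} of {k + 1} node activations")
```

In a deeper network the affected set grows with each layer's receptive field, but it remains local to the new event, which is where the reported reduction in FLOPs and latency comes from.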
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Cached Operator Reordering: A Unified View for Fast GNN Training [24.917363701638607]
Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering.
However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks.
We address these challenges by providing a unified view of GNN computation, I/O, and memory.
arXiv Detail & Related papers (2023-08-23T12:27:55Z) - Asynchronous Algorithmic Alignment with Cocycles [22.993659485873245]
State-of-the-art neural algorithmic reasoners make use of message passing in graph neural networks (GNNs).
Typical GNNs blur the distinction between the definition and invocation of the message function, forcing a node to send messages to its neighbours at every layer, synchronously.
In this work, we explicitly separate the concepts of node state update and message function invocation.
arXiv Detail & Related papers (2023-06-27T17:13:20Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization [9.503080586294406]
Graph Neural Networks (GNNs) are prevalent in various applications such as social network, recommender systems, and knowledge graphs.
Traditional sampling-based methods accelerate GNNs by dropping edges and nodes, which impairs the graph integrity and model performance.
This paper proposes DIstributed Graph Embedding SynchronizaTion (DIGEST), a novel distributed GNN training framework.
arXiv Detail & Related papers (2022-05-31T18:44:53Z) - Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z)
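The final entry above (Event-based Asynchronous Sparse Convolutional Networks) is the closest precursor to AEGNN, so a small, hedged Python sketch of its central idea may be useful: a layer trained on dense, synchronous frames can be evaluated asynchronously by caching its output map and, for each new event, recomputing only the output pixels covered by the kernel footprint. The single-channel, single-layer setting and all names are simplifying assumptions for illustration, not the paper's published code.

```python
# Illustrative sketch (assumed names, single channel, single layer): a cached
# "same"-padded correlation whose output is patched locally for each new event.
import numpy as np

class AsyncConv2d:
    def __init__(self, kernel, input_shape):
        self.kernel = kernel                   # (k, k) filter
        self.k = kernel.shape[0]
        self.pad = self.k // 2
        self.input = np.zeros(input_shape)     # accumulated event frame
        self.output = np.zeros(input_shape)    # cached layer output

    def _recompute_pixel(self, y, x):
        # Recompute one output pixel from the zero-padded patch around (y, x).
        padded = np.pad(self.input, self.pad)
        patch = padded[y:y + self.k, x:x + self.k]
        return float(np.sum(patch * self.kernel))

    def insert_event(self, y, x, polarity=1.0):
        """Add one event; only outputs inside the kernel footprint can change."""
        self.input[y, x] += polarity
        h, w = self.input.shape
        for yy in range(max(0, y - self.pad), min(h, y + self.pad + 1)):
            for xx in range(max(0, x - self.pad), min(w, x + self.pad + 1)):
                self.output[yy, xx] = self._recompute_pixel(yy, xx)
        return self.output

# The sparse, event-by-event updates match a full dense recomputation.
layer = AsyncConv2d(kernel=np.ones((3, 3)), input_shape=(8, 8))
layer.insert_event(2, 3)
out_async = layer.insert_event(4, 4)
out_dense = np.array([[layer._recompute_pixel(y, x) for x in range(8)]
                      for y in range(8)])
assert np.allclose(out_async, out_dense)
```

Because every changed input pixel triggers only a local patch of output updates, the cached map stays identical to a full dense pass, as the final assertion checks.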
This list is automatically generated from the titles and abstracts of the papers on this site.