COIN: Communication-Aware In-Memory Acceleration for Graph Convolutional
Networks
- URL: http://arxiv.org/abs/2205.07311v1
- Date: Sun, 15 May 2022 15:29:42 GMT
- Title: COIN: Communication-Aware In-Memory Acceleration for Graph Convolutional
Networks
- Authors: Sumit K. Mandal, Gokul Krishnan, A. Alper Goksoy, Gopikrishnan
Ravindran Nair, Yu Cao, Umit Y. Ogras
- Abstract summary: Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing graph-structured data.
This paper presents a communication-aware in-memory computing architecture (COIN) for GCN hardware acceleration.
- Score: 2.620532065450903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph convolutional networks (GCNs) have shown remarkable learning
capabilities when processing graph-structured data found inherently in many
application areas. GCNs distribute the outputs of neural networks embedded in
each vertex over multiple iterations to take advantage of the relations
captured by the underlying graphs. Consequently, they incur a significant
amount of computation and irregular communication overheads, which call for
GCN-specific hardware accelerators. To this end, this paper presents a
communication-aware in-memory computing architecture (COIN) for GCN hardware
acceleration. Besides accelerating the computation using custom compute
elements (CE) and in-memory computing, COIN aims at minimizing the intra- and
inter-CE communication in GCN operations to optimize the performance and energy
efficiency. Experimental evaluations with widely used datasets show up to 105x
improvement in energy consumption compared to a state-of-the-art GCN accelerator.
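To make the communication pattern concrete, the sketch below shows one standard GCN propagation step (the Kipf-Welling formulation): a dense per-vertex transform followed by a sparse neighbor aggregation. It illustrates where the irregular, graph-dependent data movement that COIN targets comes from; it is not a model of COIN's compute elements or in-memory datapath.

```python
# Minimal sketch of one GCN propagation step (standard Kipf-Welling
# formulation, not COIN's hardware datapath): each vertex transforms its
# feature vector with a shared weight matrix, then aggregates the results
# of its neighbors. The neighbor gather is the irregular, graph-dependent
# communication that a GCN accelerator has to handle.
import numpy as np

def gcn_layer(adj: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """adj: (N, N) adjacency matrix, features: (N, F_in), weights: (F_in, F_out)."""
    # Add self-loops and symmetrically normalize: A_hat = D^-1/2 (A + I) D^-1/2
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Combination (dense, regular compute) followed by aggregation
    # (sparse, irregular communication across the graph), then ReLU.
    return np.maximum(a_norm @ (features @ weights), 0.0)

# Example: 4-node path graph, 3 input features, 2 output features.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
out = gcn_layer(adj, np.random.rand(4, 3), np.random.rand(3, 2))
```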
Related papers
- Efficient Message Passing Architecture for GCN Training on HBM-based FPGAs with Orthogonal Topology On-Chip Networks [0.0]
Graph Convolutional Networks (GCNs) are state-of-the-art deep learning models for representation learning on graphs.
We propose a message-passing architecture that leverages NUMA-based memory access properties.
We also re-engineer the backpropagation algorithm specifically for GCNs within the proposed accelerator.
arXiv Detail & Related papers (2024-11-06T12:00:51Z)
- MassiveGNN: Efficient Training via Prefetching for Massively Connected Distributed Graphs [11.026326555186333]
This paper develops a parameterized continuous prefetch and eviction scheme on top of the state-of-the-art Amazon DistDGL distributed GNN framework.
It demonstrates about 15-40% improvement in end-to-end training performance on the National Energy Research Scientific Computing Center's (NERSC) Perlmutter supercomputer.
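The summary above names a parameterized continuous prefetch and eviction scheme but not its details. The following sketch only illustrates the generic prefetch-and-evict pattern for caching remote node features in distributed GNN training; the capacity parameter, the LRU policy, and the fetch_remote_features callable are illustrative assumptions, not MassiveGNN's actual algorithm or its DistDGL integration.

```python
# Hedged sketch of a prefetch-and-evict cache for remote node features.
# All names and policies here are illustrative assumptions.
from collections import OrderedDict

class FeaturePrefetcher:
    def __init__(self, capacity: int, fetch_remote_features):
        self.capacity = capacity             # max number of cached node features
        self.fetch = fetch_remote_features   # callable: node_id -> feature vector
        self.cache = OrderedDict()           # node_id -> features, in LRU order

    def prefetch(self, node_ids):
        """Warm the cache with nodes expected in upcoming minibatches."""
        for nid in node_ids:
            self._insert(nid, self.fetch(nid))

    def get(self, nid):
        if nid in self.cache:
            self.cache.move_to_end(nid)      # mark as recently used
            return self.cache[nid]
        feats = self.fetch(nid)              # cache miss -> remote fetch
        self._insert(nid, feats)
        return feats

    def _insert(self, nid, feats):
        self.cache[nid] = feats
        self.cache.move_to_end(nid)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used entry
```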
arXiv Detail & Related papers (2024-10-30T05:10:38Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for Graph Similarity Computation [7.54579279348595]
We propose a flexible architecture called SPA-GCN for accelerating Graph Convolutional Networks (GCN) on graphs.
We show that SPA-GCN can deliver a high speedup compared to a multi-core CPU implementation and a GPU implementation.
arXiv Detail & Related papers (2021-11-10T20:47:57Z)
- GNNIE: GNN Inference Engine with Load-balancing and Graph-Specific Caching [2.654276707313136]
GNNIE is an accelerator designed to run a broad range of Graph Neural Networks (GNNs).
It tackles workload imbalance by (i) splitting node feature operands into blocks, (ii) reordering and redistributing computations, and (iii) using a flexible MAC architecture with low communication overheads among the processing elements.
GNNIE achieves average speedups of over 8890x over a CPU and 295x over a GPU across multiple datasets on graph attention networks (GATs), graph convolutional networks (GCNs), GraphSAGE, GINConv, and DiffPool.
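As a rough illustration of item (i) above, the sketch below splits node feature vectors into fixed-width blocks and hands the resulting work units to processing elements round-robin. The block width and assignment policy are assumptions for illustration only, not GNNIE's actual scheduler or MAC architecture.

```python
# Hedged sketch of feature blocking for load balancing across processing
# elements (PEs). Block width and round-robin assignment are illustrative
# assumptions, not GNNIE's scheduling algorithm.
import numpy as np

def block_and_assign(features: np.ndarray, block_width: int, num_pes: int):
    """Split (N, F) node features into (node, block) work units and assign to PEs."""
    num_nodes, feat_dim = features.shape
    assignments = {pe: [] for pe in range(num_pes)}
    unit = 0
    for node in range(num_nodes):
        for start in range(0, feat_dim, block_width):
            block = features[node, start:start + block_width]
            # Round-robin assignment keeps per-PE work roughly balanced even
            # when nodes differ widely in degree or feature sparsity.
            assignments[unit % num_pes].append((node, start, block))
            unit += 1
    return assignments

pe_work = block_and_assign(np.random.rand(6, 16), block_width=4, num_pes=3)
```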
arXiv Detail & Related papers (2021-05-21T20:07:14Z)
- DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks [58.48833325238537]
Full-batch training on Graph Neural Networks (GNN) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible.
In this paper, we present DistGNN, which optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters.
Our results on four common GNN benchmark datasets show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets.
arXiv Detail & Related papers (2021-04-14T08:46:35Z)
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- ItNet: iterative neural networks with small graphs for accurate and efficient anytime prediction [1.52292571922932]
In this study, we introduce a class of network models that have a small memory footprint in terms of their computational graphs.
We show state-of-the-art results for semantic segmentation on the CamVid and Cityscapes datasets.
arXiv Detail & Related papers (2021-01-21T15:56:29Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
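The sketch below shows the standard effective-resistance (spectral) sparsification recipe that the FastGAT summary refers to: edges are sampled with probability proportional to edge weight times effective resistance, computed here from the Laplacian pseudoinverse. FastGAT's own pruning rule and its fast resistance approximation are not reproduced, and the dense pseudoinverse makes this version practical only for small graphs.

```python
# Hedged sketch of effective-resistance-based edge sampling (the classic
# Spielman-Srivastava recipe), not FastGAT's exact pruning procedure.
import numpy as np

def sparsify_by_effective_resistance(adj: np.ndarray, keep_edges: int, rng=np.random):
    """Keep a sample of edges drawn with probability proportional to w_e * R_e."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    l_pinv = np.linalg.pinv(laplacian)
    rows, cols = np.triu_indices_from(adj, k=1)
    edges = [(u, v) for u, v in zip(rows, cols) if adj[u, v] > 0]
    # Effective resistance of edge (u, v): R_e = L+_uu + L+_vv - 2 * L+_uv.
    scores = np.array([adj[u, v] * (l_pinv[u, u] + l_pinv[v, v] - 2 * l_pinv[u, v])
                       for u, v in edges])
    probs = scores / scores.sum()
    chosen = rng.choice(len(edges), size=min(keep_edges, len(edges)),
                        replace=False, p=probs)
    sparse = np.zeros_like(adj)
    for idx in chosen:
        u, v = edges[idx]
        sparse[u, v] = sparse[v, u] = adj[u, v]
    return sparse
```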
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
- Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
However, they suffer from the notorious over-smoothing problem, in which the learned representations converge to nearly identical vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
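A minimal sketch of the gating idea, assuming a generic highway-style gate: a learned, per-feature gate interpolates between a node's own representation (heterogeneity) and the aggregated neighborhood signal (homogeneity). The exact gate parameterization in GHNet may differ.

```python
# Hedged sketch of a highway-style gate over GCN features; the gate
# parameterization is an assumption, not GHNet's exact formulation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_update(h_self: np.ndarray, h_agg: np.ndarray,
                 w_gate: np.ndarray, b_gate: np.ndarray) -> np.ndarray:
    """h_self, h_agg: (N, F); w_gate: (2F, F); b_gate: (F,)."""
    # The gate decides, per node and per feature, how much smoothed
    # neighborhood information to mix in versus preserving the node's
    # own representation.
    gate = sigmoid(np.concatenate([h_self, h_agg], axis=1) @ w_gate + b_gate)
    return gate * h_agg + (1.0 - gate) * h_self
```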
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
- Cross-GCN: Enhancing Graph Convolutional Network with $k$-Order Feature Interactions [153.6357310444093]
Graph Convolutional Network (GCN) is an emerging technique that performs learning and reasoning on graph data.
We argue that existing designs of GCN forgo modeling cross features, making GCN less effective for tasks or data where cross features are important.
We design a new operator named Cross-feature Graph Convolution, which explicitly models arbitrary-order cross features with complexity linear in the feature dimension and the order size.
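To see why such crosses can stay linear in the feature dimension, the sketch below uses iterated element-wise products (the trick popularized by cross networks): each step multiplies in one more transformed copy of the features, raising the interaction order by one without enumerating feature combinations. This is an illustrative stand-in, not the paper's Cross-feature Graph Convolution operator.

```python
# Hedged sketch of order-k feature crosses at cost linear in the feature
# dimension, via iterated element-wise products. Illustrative only.
import numpy as np

def order_k_crosses(x: np.ndarray, weight_list) -> np.ndarray:
    """x: (N, F) node features; weight_list: k-1 matrices of shape (F, F)."""
    z = x
    for w in weight_list:
        # Each step multiplies in one more (transformed) copy of the features,
        # raising the interaction order by one at O(N * F^2) cost.
        z = z * (x @ w)
    return z

feats = np.random.rand(5, 8)
third_order = order_k_crosses(feats, [np.random.rand(8, 8) for _ in range(2)])
```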
arXiv Detail & Related papers (2020-03-05T13:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.