SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for
Graph Similarity Computation
- URL: http://arxiv.org/abs/2111.05936v1
- Date: Wed, 10 Nov 2021 20:47:57 GMT
- Title: SPA-GCN: Efficient and Flexible GCN Accelerator with an Application for
Graph Similarity Computation
- Authors: Atefeh Sohrabizadeh, Yuze Chi, Jason Cong
- Abstract summary: We propose a flexible architecture called SPA-GCN for accelerating Graph Convolutional Networks (GCN) on graphs.
We show that SPA-GCN can deliver a high speedup compared to a multi-core CPU implementation and a GPU implementation.
- Score: 7.54579279348595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While there have been many studies on hardware acceleration for deep learning
on images, there has been a rather limited focus on accelerating deep learning
applications involving graphs. The unique characteristics of graphs, such as
the irregular memory access and dynamic parallelism, impose several challenges
when the algorithm is mapped to a CPU or GPU. To address these challenges while
exploiting all the available sparsity, we propose a flexible architecture
called SPA-GCN for accelerating Graph Convolutional Networks (GCN), the core
computation unit in deep learning algorithms on graphs. The architecture is
specialized for dealing with many small graphs since the graph size has a
significant impact on design considerations. In this context, we use SimGNN, a
neural-network-based graph matching algorithm, as a case study to demonstrate
the effectiveness of our architecture. The experimental results demonstrate
that SPA-GCN can deliver a high speedup compared to a multi-core CPU
implementation and a GPU implementation, showing the efficiency of our design.
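To make the target computation concrete, below is a minimal sketch of the single GCN layer that accelerators such as SPA-GCN speed up, written with sparse matrices so the aggregation only touches nonzero entries. The graph, feature sizes, and normalization are illustrative assumptions, not the SPA-GCN design itself.
```python
# A minimal sketch of the GCN layer computation H' = act(A_hat @ H @ W).
# All names and sizes here are illustrative, not part of the SPA-GCN design.
import numpy as np
import scipy.sparse as sp

def gcn_layer(a_hat: sp.csr_matrix, h: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One GCN layer: normalized-adjacency aggregation, then a dense
    feature transform and ReLU. a_hat is kept in CSR form so the
    aggregation only touches nonzero entries -- the sparsity that a
    specialized accelerator can exploit."""
    return np.maximum(a_hat @ (h @ w), 0.0)

# Tiny example: a 4-node graph with self-loops and degree normalization.
rows = np.array([0, 0, 1, 1, 2, 2, 3, 3, 0, 1])
cols = np.array([0, 1, 1, 0, 2, 3, 3, 2, 2, 3])
a = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))
deg = np.asarray(a.sum(axis=1)).ravel()
d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
a_hat = (d_inv_sqrt @ a @ d_inv_sqrt).tocsr()

h = np.random.rand(4, 8)   # node features
w = np.random.rand(8, 16)  # layer weights
print(gcn_layer(a_hat, h, w).shape)  # (4, 16)
```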
Related papers
- Graph Retention Networks for Dynamic Graphs [4.4053348026380235]
We propose Graph Retention Network as a unified architecture for deep learning on dynamic graphs.
The GRN extends the core retention computation to dynamic graph data as graph retention.
Experiments conducted on benchmark datasets present the superior performance of the GRN in both edge-level prediction and node-level classification tasks.
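As a rough illustration of the idea, the sketch below applies a retention-style recurrence over dynamic-graph snapshots; the decay factor gamma and the way neighborhood aggregation enters the recurrence are assumptions for illustration, not the GRN paper's exact formulation.
```python
# A rough sketch of a retention-style update over dynamic-graph snapshots.
# The decay gamma and the placement of graph aggregation are assumptions.
import numpy as np
import scipy.sparse as sp

def graph_retention(snapshots, feats, wq, wk, wv, gamma=0.9):
    """Recurrent retention state S_t = gamma * S_{t-1} + K_t^T V_t, where
    Q_t, K_t, V_t come from graph-aggregated features of snapshot t;
    the output at t is Q_t @ S_t."""
    state = np.zeros((wk.shape[1], wv.shape[1]))
    outs = []
    for a_t, x_t in zip(snapshots, feats):
        agg = a_t @ x_t                      # neighborhood aggregation per snapshot
        q, k, v = agg @ wq, agg @ wk, agg @ wv
        state = gamma * state + k.T @ v      # decayed recurrent state
        outs.append(q @ state)
    return outs

n, f = 5, 4
a = sp.random(n, n, density=0.4, format="csr", random_state=0)
snaps = [a, a]                               # two snapshots (could differ over time)
xs = [np.random.rand(n, f)] * 2
wq, wk, wv = (np.random.rand(f, 8) for _ in range(3))
print(graph_retention(snaps, xs, wq, wk, wv)[0].shape)  # (5, 8)
```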
arXiv Detail & Related papers (2024-11-18T03:28:11Z)
- Efficient Message Passing Architecture for GCN Training on HBM-based FPGAs with Orthogonal Topology On-Chip Networks [0.0]
Graph Convolutional Networks (GCNs) are state-of-the-art deep learning models for representation learning on graphs.
We propose a message-passing architecture that leverages NUMA-based memory access properties.
We also re-engineer the backpropagation algorithm specific to GCNs within our proposed accelerator.
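For context, the gather/scatter message-passing pattern that such designs partition across memory channels looks like the following purely software-level sketch; the edge list and shapes are illustrative, and this is not the proposed FPGA architecture.
```python
# The message-passing pattern written out explicitly as gather/scatter
# over an edge list -- an illustration of the computation, not the design.
import numpy as np

def message_passing(edges: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Sum incoming messages for every node: out[dst] += h[src].
    edges is an (E, 2) array of (src, dst) pairs."""
    out = np.zeros_like(h)
    np.add.at(out, edges[:, 1], h[edges[:, 0]])  # scatter-add; irregular access
    return out

edges = np.array([[0, 1], [1, 2], [2, 0], [0, 2]])
h = np.random.rand(3, 4)
print(message_passing(edges, h).shape)  # (3, 4)
```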
arXiv Detail & Related papers (2024-11-06T12:00:51Z)
- SiHGNN: Leveraging Properties of Semantic Graphs for Efficient HGNN Acceleration [9.85638913900595]
Heterogeneous Graph Neural Networks (HGNNs) have expanded graph representation learning to heterogeneous graph fields.
Recent studies have demonstrated their superior performance across various applications, including medical analysis and recommendation systems.
We propose a lightweight hardware accelerator for HGNNs, called SiHGNN. This accelerator incorporates a tree-based Semantic Graph Builder for efficient semantic graph generation and features a novel Graph Restructurer for optimizing semantic graph layouts.
arXiv Detail & Related papers (2024-08-27T14:20:21Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and a 16.8% performance gain on ogbn-products and snap-patents, and we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
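The sketch below conveys the general recipe of sampling a small fixed neighborhood per node and restricting attention to that local set; the sampling rule and scoring are simplified stand-ins, not LargeGT's exact technique.
```python
# Neighborhood sampling plus local attention, in simplified form: each
# node attends only over up to k sampled neighbors. Stand-in scoring rule.
import numpy as np

def local_attention(adj_lists, h, k=4, rng=np.random.default_rng(0)):
    """For each node, sample up to k neighbors and attend over them."""
    out = np.empty_like(h)
    for v, nbrs in enumerate(adj_lists):
        nbrs = list(nbrs) or [v]                       # fall back to self
        sample = rng.choice(nbrs, size=min(k, len(nbrs)), replace=False)
        scores = h[sample] @ h[v]                      # dot-product scores
        weights = np.exp(scores - scores.max())        # stable softmax
        weights /= weights.sum()
        out[v] = weights @ h[sample]                   # local weighted sum
    return out

adj = [[1, 2], [0, 2, 3], [0, 1], [1]]
h = np.random.rand(4, 8)
print(local_attention(adj, h).shape)  # (4, 8)
```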
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
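A hedged sketch of the alignment step: embed both networks with one shared graph encoder and match nodes by solving an assignment problem on embedding similarity. The one-layer random-weight encoder below stands in for T-GAE's trained GNN encoder.
```python
# Network alignment via shared embeddings plus assignment -- an
# illustration of the alignment step, with a stand-in one-layer encoder.
import numpy as np
import scipy.sparse as sp
from scipy.optimize import linear_sum_assignment

def encode(a: sp.csr_matrix, x: np.ndarray, w: np.ndarray) -> np.ndarray:
    z = a @ (x @ w)                            # one shared propagation layer
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-9)

rng = np.random.default_rng(0)
w = rng.standard_normal((6, 8))                # shared weights (trained, in T-GAE)
a1 = sp.random(5, 5, density=0.5, format="csr", random_state=0)
a2 = sp.random(5, 5, density=0.5, format="csr", random_state=1)
x1, x2 = rng.random((5, 6)), rng.random((5, 6))

z1, z2 = encode(a1, x1, w), encode(a2, x2, w)
row, col = linear_sum_assignment(-(z1 @ z2.T))  # maximize total similarity
print(dict(zip(row.tolist(), col.tolist())))    # node correspondence
```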
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Architectural Implications of Embedding Dimension during GCN on CPU and GPU [6.650945912906685]
Graph Convolutional Networks (GCNs) are a widely used type of GNN for transductive graph learning problems.
GCN is a challenging algorithm from an architecture perspective due to inherent sparsity, low data reuse, and massive memory capacity requirements.
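A back-of-envelope cost model makes the embedding-dimension trade-off concrete: sparse aggregation scales with nnz(A) x d, while the dense feature transform scales with n x d^2. The node and edge counts below are assumed purely for illustration.
```python
# Illustrative cost model for one GCN layer as the embedding dimension d
# grows; the graph size is an assumption, not from the paper.
nnz, n = 2_000_000, 100_000          # edges (nonzeros) and nodes, assumed
for d in (32, 128, 512):
    agg_flops = 2 * nnz * d          # sparse A @ H: irregular, low reuse
    xform_flops = 2 * n * d * d      # dense H @ W: regular, reusable
    print(f"d={d:4d}  aggregation={agg_flops/1e9:6.2f} GFLOPs  "
          f"transform={xform_flops/1e9:6.2f} GFLOPs")
```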
arXiv Detail & Related papers (2022-12-01T19:23:12Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
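A minimal sketch of that schedule, assuming a simple example graphon: draw latent points u_i ~ Uniform[0,1], connect i and j with probability W(u_i, u_j), and enlarge the sampled graph between training stages. The graphon and growth schedule are illustrative, and the training step itself is stubbed out.
```python
# Bernoulli sampling from a graphon, with the graph grown between stages.
import numpy as np

def sample_graph(w, n, rng):
    u = rng.random(n)                          # latent points in [0, 1]
    p = w(u[:, None], u[None, :])              # pairwise edge probabilities
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)                          # undirected, no self-loops
    return a + a.T

w = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))  # an example graphon
rng = np.random.default_rng(0)
for n in (50, 100, 200, 400):                  # successively larger graphs
    a = sample_graph(w, n, rng)
    # ... run a few GNN training epochs on `a` here ...
    print(n, int(a.sum() // 2), "edges")
```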
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
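For intuition, the sketch below computes exact effective resistances via the Laplacian pseudoinverse on a small dense graph and keeps edges with probability proportional to resistance. FastGAT's practical pipeline approximates this at scale, and the keep-probability rule here is a simplification.
```python
# Effective-resistance sparsification on a small dense graph:
# R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v); high-resistance edges are kept.
import numpy as np

def sparsify(a, keep_frac=0.5, rng=np.random.default_rng(0)):
    n = a.shape[0]
    lap = np.diag(a.sum(1)) - a
    lp = np.linalg.pinv(lap)                       # Laplacian pseudoinverse
    us, vs = np.triu_indices(n, 1)
    mask = a[us, vs] > 0
    us, vs = us[mask], vs[mask]
    r = lp[us, us] + lp[vs, vs] - 2 * lp[us, vs]   # effective resistances
    p = np.minimum(1.0, keep_frac * r / r.mean())  # keep prob (simplified)
    kept = rng.random(len(p)) < p
    out = np.zeros_like(a)
    out[us[kept], vs[kept]] = out[vs[kept], us[kept]] = a[us[kept], vs[kept]]
    return out

rng = np.random.default_rng(1)
a = (rng.random((8, 8)) < 0.5).astype(float)
a = np.triu(a, 1); a = a + a.T                     # random undirected graph
print(int(a.sum() // 2), "->", int(sparsify(a).sum() // 2), "edges")
```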
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
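A minimal sketch of that building block, assuming a generic normalized-adjacency convolution: the convolution output is summed with a learned affine (fully connected) map of the raw input features, giving each vertex a direct input-to-output path.
```python
# Affine skip connection: graph convolution output plus an affine map of
# the raw inputs. The convolution here is a generic stand-in operator.
import numpy as np
import scipy.sparse as sp

def affine_skip_layer(a_hat, x, w_conv, w_skip, b_skip):
    conv = a_hat @ (x @ w_conv)          # any graph convolution fits here
    return conv + (x @ w_skip + b_skip)  # learned affine skip path

rng = np.random.default_rng(0)
a_hat = sp.random(6, 6, density=0.4, format="csr", random_state=0)
x = rng.random((6, 5))
out = affine_skip_layer(a_hat, x, rng.random((5, 7)), rng.random((5, 7)),
                        rng.random(7))
print(out.shape)  # (6, 7)
```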
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
- GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms [1.2183405753834562]
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art deep learning model for representation learning on graphs.
It is challenging to accelerate training of GCNs due to substantial and irregular data communication.
We design a novel accelerator for training GCNs on CPU-FPGA heterogeneous systems.
arXiv Detail & Related papers (2019-12-31T21:19:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.