G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and
Efficiency
- URL: http://arxiv.org/abs/2109.08983v1
- Date: Sat, 18 Sep 2021 18:36:04 GMT
- Title: G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and
Efficiency
- Authors: Yongan Zhang, Haoran You, Yonggan Fu, Tong Geng, Ang Li, Yingyan Lin
- Abstract summary: Graph Neural Networks (GNNs) have emerged as the state-of-the-art (SOTA) method for graph-based learning tasks.
We propose G-CoS, a GNN and accelerator co-search framework that can automatically search for matched GNN structures and accelerators.
Experiments and ablation studies show that the GNNs and accelerators generated by G-CoS consistently outperform SOTA GNNs and GNN accelerators in terms of both task accuracy and hardware efficiency.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as the state-of-the-art (SOTA)
method for graph-based learning tasks. However, it remains prohibitively
challenging to run GNN inference over large graph datasets, limiting their
application to large-scale real-world tasks. While end-to-end jointly
optimizing GNNs and their accelerators is promising in boosting GNNs' inference
efficiency and expediting the design process, it is still underexplored due to
the vast and distinct design spaces of GNNs and their accelerators. In this
work, we propose G-CoS, a GNN and accelerator co-search framework that can
automatically search for matched GNN structures and accelerators to maximize
both task accuracy and acceleration efficiency. Specifically, G-CoS integrates
two major enabling components: (1) a generic GNN accelerator search space which
is applicable to various GNN structures and (2) a one-shot GNN and accelerator
co-search algorithm that enables simultaneous and efficient search for optimal
GNN structures and their matched accelerators. To the best of our knowledge,
G-CoS is the first co-search framework for GNNs and their accelerators.
Extensive experiments and ablation studies show that the GNNs and accelerators
generated by G-CoS consistently outperform SOTA GNNs and GNN accelerators in
terms of both task accuracy and hardware efficiency, while only requiring a few
hours for the end-to-end generation of the best matched GNNs and their
accelerators.
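The abstract describes the one-shot co-search only at a high level. As a rough illustration of how such a one-shot, differentiable co-search is commonly set up (a minimal sketch under assumed names and a toy cost model, not the authors' implementation), the choice among candidate layer configurations can be relaxed into trainable architecture weights, with a differentiable estimate of accelerator cost penalized alongside the task loss:

```python
# Minimal, hypothetical sketch of one-shot differentiable co-search.
# All names and the toy per-op cost model are illustrative assumptions,
# not G-CoS code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedLayer(nn.Module):
    """Supernet layer: a softmax-weighted mixture of candidate ops."""
    def __init__(self, dim, candidate_dims=(16, 32, 64)):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, h), nn.ReLU(), nn.Linear(h, dim))
            for h in candidate_dims
        )
        # One architecture parameter per candidate op (the "alphas").
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))
        # Toy per-op hardware cost (e.g., a latency estimate); a real
        # framework would query an accelerator performance model instead.
        self.register_buffer("op_cost",
                             torch.tensor([float(h) for h in candidate_dims]))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        out = sum(wi * op(x) for wi, op in zip(w, self.ops))
        # Differentiable expected hardware cost under the current alphas.
        cost = (w * self.op_cost).sum()
        return out, cost

# Joint objective: task loss + lambda * expected accelerator cost.
torch.manual_seed(0)
net = nn.ModuleList([MixedLayer(8) for _ in range(2)])
head = nn.Linear(8, 3)
opt = torch.optim.Adam(list(net.parameters()) + list(head.parameters()),
                       lr=1e-2)
lam = 1e-3  # accuracy/efficiency trade-off weight

x, y = torch.randn(32, 8), torch.randint(0, 3, (32,))
for step in range(100):
    h, total_cost = x, 0.0
    for layer in net:
        h, c = layer(h)
        total_cost = total_cost + c
    loss = F.cross_entropy(head(h), y) + lam * total_cost
    opt.zero_grad()
    loss.backward()
    opt.step()

# After the search: keep the highest-weighted op per layer.
arch = [int(layer.alpha.argmax()) for layer in net]
print("selected candidate per layer:", arch)
```

After the supernet converges, the highest-weighted candidate per layer is taken as the searched network, and the cost model's setting for that choice as the matched accelerator configuration; G-CoS's actual GNN and accelerator search spaces are far richer than this toy.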
Related papers
- Spiking Graph Neural Network on Riemannian Manifolds [51.15400848660023]
Graph neural networks (GNNs) have become the dominant solution for learning on graphs.
Existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry.
We present a Manifold-valued Spiking GNN (MSG).
MSG achieves superior performance to previous spiking GNNs and superior energy efficiency to conventional GNNs.
arXiv Detail & Related papers (2024-10-23T15:09:02Z)
- Unleash Graph Neural Networks from Heavy Tuning [33.948899558876604]
Graph Neural Networks (GNNs) are deep-learning architectures designed for graph-type data.
We propose a graph conditional latent diffusion framework (GNN-Diff) to generate high-performing GNNs directly by learning from checkpoints saved during a light-tuning coarse search.
arXiv Detail & Related papers (2024-05-21T06:23:47Z)
- MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
However, this effectiveness sacrifices the efficiency of GNNs, since all possible subgraphs must be enumerated.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z)
- GHOST: A Graph Neural Network Accelerator using Silicon Photonics [4.226093500082746]
Graph neural networks (GNNs) have emerged as a powerful approach for modelling and learning from graph-structured data.
We present GHOST, the first silicon-photonic hardware accelerator for GNNs.
arXiv Detail & Related papers (2023-07-04T15:37:20Z)
- A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware [30.525912505620685]
Graph neural networks (GNNs) are emerging for machine learning research on graph-structured data.
GNNs achieve state-of-the-art performance on many tasks, but they face scalability challenges when it comes to real-world applications.
We provide a taxonomy of GNN acceleration, review the existing approaches, and suggest future research directions.
arXiv Detail & Related papers (2023-06-24T20:20:45Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning model trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- GNN Transformation Framework for Improving Efficiency and Scalability [5.833671647960204]
We propose a framework that automatically transforms non-scalable GNNs into precomputation-based GNNs which are efficient and scalable for large-scale graphs.
The advantages of our framework are two-fold: (1) it transforms various non-scalable GNNs to scale well to large-scale graphs by separating local feature aggregation from weight learning in their graph convolution; (2) it efficiently executes precomputation on GPU for large-scale graphs by decomposing their edges into small, disjoint, and balanced sets. A generic sketch of this precomputation idea appears after this list.
arXiv Detail & Related papers (2022-07-25T09:19:59Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs, focusing on the effects of skip connections and increased depth.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- DNA: Differentiable Network-Accelerator Co-Search [36.68587348474986]
We propose DNA, a Differentiable Network-Accelerator co-search framework for automatically searching for matched networks and accelerators.
Specifically, DNA integrates two enablers: (1) a generic design space for DNN accelerators that is compatible with DNN frameworks such as PyTorch to enable algorithmic exploration, and (2) a differentiable co-search algorithm for simultaneously searching for matched networks and their accelerators.
Experiments and ablation studies show that the matched networks and accelerators generated by DNA consistently outperform state-of-the-art (SOTA) DNNs and accelerators.
arXiv Detail & Related papers (2020-10-28T05:57:16Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
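As a generic illustration of the precomputation idea mentioned in the GNN Transformation Framework entry above (a minimal SGC-style sketch; the symmetric normalization and two-hop depth are assumptions for illustration, not that paper's exact transform), local feature aggregation can be run once offline so that weight learning reduces to fitting an ordinary, graph-free model on the aggregated features:

```python
# Illustrative sketch of a precomputation-based GNN (SGC-style):
# aggregate features over the graph once, then train a simple classifier.
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def precompute_features(A, X, K=2):
    """Local feature aggregation, done once and independent of any weights."""
    S = normalize_adj(A)
    for _ in range(K):
        X = S @ X
    return X

# Toy graph: 4 nodes on a path, 3 features each.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)

X_pre = precompute_features(A, X)  # offline, reusable across training runs
# Weight learning is now an ordinary model fit on X_pre, e.g. logistic
# regression or an MLP on each node's aggregated feature vector.
print(X_pre.shape)  # (4, 3)
```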