Architectural Implications of Graph Neural Networks
- URL: http://arxiv.org/abs/2009.00804v2
- Date: Fri, 24 Dec 2021 14:59:46 GMT
- Title: Architectural Implications of Graph Neural Networks
- Authors: Zhihui Zhang, Jingwen Leng, Lingxiao Ma, Youshan Miao, Chao Li, Minyi
Guo
- Abstract summary: Graph neural networks (GNNs) represent an emerging line of deep learning models that operate on graph structures.
GNNs are not as well understood in the system and architecture community as counterparts such as multi-layer perceptrons and convolutional neural networks.
- Score: 17.01480604968118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) represent an emerging line of deep
learning models that operate on graph structures. They are becoming
increasingly popular due to the high accuracy they achieve on many
graph-related tasks. However, GNNs are not as well understood in the system and
architecture community as counterparts such as multi-layer perceptrons and
convolutional neural networks. This work introduces GNNs to that community. In
contrast to prior work that only presents characterizations of GCNs, our work
covers a large portion of the variety of GNN workloads based on a general GNN
description framework. By constructing the models on top of two widely used
libraries, we characterize GNN computation at the inference stage with respect
to general-purpose and application-specific architectures, and we hope our work
fosters more system and architecture research for GNNs.
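The paper's general description framework is not reproduced here, but a common way to express the GNN workloads it covers is the two-stage aggregate/combine (gather-apply) decomposition of a layer. The NumPy sketch below is our own minimal illustration of that decomposition, with the per-stage timing an inference characterization typically starts from; the function `gnn_layer` and the toy graph are assumptions, not the paper's code.

```python
# Minimal sketch (our illustration, not the paper's framework) of the
# aggregate/combine decomposition of one GNN layer, timed per stage.
import time
import numpy as np

def gnn_layer(A, H, W):
    """One GNN layer: neighbor aggregation, then feature transformation.

    A: (N, N) adjacency matrix (normalization omitted for brevity)
    H: (N, F_in) node-feature matrix
    W: (F_in, F_out) layer weights
    """
    t0 = time.perf_counter()
    M = A @ H                       # aggregate: graph-dependent, usually sparse and memory-bound
    t1 = time.perf_counter()
    H_out = np.maximum(M @ W, 0.0)  # combine: dense GEMM + ReLU, usually compute-bound
    t2 = time.perf_counter()
    print(f"aggregate: {t1 - t0:.4f}s  combine: {t2 - t1:.4f}s")
    return H_out

# Toy run: 1,000 nodes at ~1% edge density, 16 -> 32 features.
rng = np.random.default_rng(0)
N, F_in, F_out = 1000, 16, 32
A = (rng.random((N, N)) < 0.01).astype(np.float64)
H = rng.standard_normal((N, F_in))
W = rng.standard_normal((F_in, F_out))
gnn_layer(A, H, W)
```

The contrast between the two stages (irregular, memory-bound aggregation versus regular, compute-bound transformation) is what makes GNN inference behave differently from MLPs and CNNs on both general-purpose and application-specific architectures.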
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
Graph Neural Networks (GNNs) combine information from adjacent nodes by successive applications of graph convolutions.
We study the generalization gaps of GNNs on both node-level and graph-level tasks.
We show that the generalization gaps decrease with the number of nodes in the training graphs.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- The Evolution of Distributed Systems for Graph Neural Networks and their Origin in Graph Processing and Deep Learning: A Survey [17.746899445454048]
Graph Neural Networks (GNNs) are an emerging research field.
GNNs can be applied to various domains including recommendation systems, computer vision, natural language processing, biology and chemistry.
We aim to fill this gap by summarizing and categorizing important methods and techniques for large-scale GNN solutions.
arXiv Detail & Related papers (2023-05-23T09:22:33Z)
- Characterizing the Efficiency of Graph Neural Network Frameworks with a Magnifying Glass [10.839902229218577]
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks.
Recent GNN frameworks have been developed with different graph-sampling techniques for mini-batch training on large graphs.
It remains unknown how 'eco-friendly' these frameworks are from a green-computing perspective.
arXiv Detail & Related papers (2022-11-06T04:22:19Z)
- Deep Ensembles for Graphs with Higher-order Dependencies [13.164412455321907]
Graph neural networks (GNNs) continue to achieve state-of-the-art performance on many graph learning tasks.
We show that the tendency of traditional graph representations to underfit each node's neighborhood causes existing GNNs to generalize poorly.
We propose a novel Deep Graph Ensemble (DGE) which captures neighborhood variance by training an ensemble of GNNs on different neighborhood subspaces of the same node (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-05-27T14:01:08Z)
- IGNNITION: Bridging the Gap Between Graph Neural Networks and Networking Systems [4.1591055164123665]
We present IGNNITION, a novel open-source framework that enables fast prototyping of Graph Neural Networks (GNNs) for networking systems.
IGNNITION is based on an intuitive high-level abstraction that hides the complexity behind GNNs.
Our results show that the GNN models produced by IGNNITION are equivalent in terms of accuracy and performance to their native implementations.
arXiv Detail & Related papers (2021-09-14T14:28:21Z)
- Design Space for Graph Neural Networks [81.88707703106232]
We study the architectural design space for Graph Neural Networks (GNNs) which consists of 315,000 different designs over 32 different predictive tasks.
Our key results include: (1) a comprehensive set of guidelines for designing well-performing GNNs; (2) the finding that, while the best GNN designs for different tasks vary significantly, the GNN task space allows the best designs to be transferred across tasks; and (3) models discovered using our design space that achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-11-17T18:59:27Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
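As noted in the Deep Graph Ensemble (DGE) entry above, the ensemble idea can be sketched compactly: several GNNs each see a differently subsampled neighborhood of the same graph, and their predictions are averaged. The snippet below is our own hedged illustration under assumed details (independent edge dropping, a single row-normalized layer, mean averaging); it is not the authors' implementation.

```python
# Hedged sketch of a DGE-style ensemble: k GNNs, each run on a random
# edge-subsampled view of the graph, with predictions averaged.
import numpy as np

def gnn_forward(A_hat, H, W):
    """Single-layer GNN: aggregate neighbors, then transform features."""
    return np.maximum(A_hat @ H @ W, 0.0)

def dge_predict(A, H, weights, keep_prob=0.7, seed=0):
    """Average one prediction per weight matrix, each made on a random
    edge subsample of A (a crude stand-in for 'neighborhood subspaces')."""
    rng = np.random.default_rng(seed)
    preds = []
    for W in weights:
        mask = rng.random(A.shape) < keep_prob        # drop edges independently
        A_sub = A * mask
        deg = A_sub.sum(axis=1, keepdims=True) + 1e-9
        preds.append(gnn_forward(A_sub / deg, H, W))  # row-normalized aggregation
    return np.mean(preds, axis=0)

# Toy run: 500 nodes, 8 input features, 4 outputs, an ensemble of 5.
rng = np.random.default_rng(1)
N, F_in, F_out, k = 500, 8, 4, 5
A = (rng.random((N, N)) < 0.02).astype(np.float64)
H = rng.standard_normal((N, F_in))
weights = [rng.standard_normal((F_in, F_out)) for _ in range(k)]
print(dge_predict(A, H, weights).shape)  # (500, 4)
```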
This list is automatically generated from the titles and abstracts of the papers in this site.