MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale
- URL: http://arxiv.org/abs/2310.12457v1
- Date: Thu, 19 Oct 2023 04:30:14 GMT
- Title: MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale
- Authors: Haitian Jiang, Renjie Liu, Xiao Yan, Zhenkun Cai, Minjie Wang, David Wipf
- Abstract summary: We propose a sampling-based energy function and scalable GNN layers that iteratively reduce it, guided by convergence guarantees in certain settings.
We also instantiate a full GNN architecture based on these designs, and the model achieves competitive accuracy and scalability when applied to the largest publicly-available node classification benchmark exceeding 1TB in size.
- Score: 15.93424606182961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Among the many variants of graph neural network (GNN) architectures capable
of modeling data with cross-instance relations, an important subclass involves
layers designed such that the forward pass iteratively reduces a
graph-regularized energy function of interest. In this way, node embeddings
produced at the output layer dually serve as both predictive features for
solving downstream tasks (e.g., node classification) and energy function
minimizers that inherit desirable inductive biases and interpretability.
However, scaling GNN architectures constructed in this way remains challenging,
in part because the convergence of the forward pass may involve models with
considerable depth. To tackle this limitation, we propose a sampling-based
energy function and scalable GNN layers that iteratively reduce it, guided by
convergence guarantees in certain settings. We also instantiate a full GNN
architecture based on these designs, and the model achieves competitive
accuracy and scalability when applied to the largest publicly-available node
classification benchmark exceeding 1TB in size.
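For readers new to this line of work, the abstract's core idea, layers that double as descent steps on a graph-regularized energy, follows a well-known unfolded-optimization template. The sketch below shows only that generic template; the energy form, the function name, the step size `alpha`, and the penalty weight `lam` are illustrative assumptions, while MuseGNN's actual sampling-based energy and layer updates are defined in the paper.

```python
import numpy as np

def unfolded_gnn_forward(X, A, W, num_layers=16, lam=1.0, alpha=0.1):
    """One layer = one gradient-descent step on the graph-regularized energy
        E(Y) = ||Y - X @ W||_F^2 + lam * tr(Y^T L Y),
    where L = D - A is the graph Laplacian. (A generic unfolded-GNN sketch,
    not MuseGNN's exact sampling-based update.)"""
    D = np.diag(A.sum(axis=1))
    L = D - A                      # combinatorial graph Laplacian
    f_X = X @ W                    # input-dependent anchor term
    Y = f_X.copy()                 # initialize embeddings at the anchor
    for _ in range(num_layers):
        grad = 2.0 * (Y - f_X) + 2.0 * lam * (L @ Y)
        Y = Y - alpha * grad       # one descent step == one GNN layer
    return Y                       # embeddings approximately minimizing E
```

Because each layer is literally a descent step, a deeper stack drives E lower for a suitably small step size, which is where the interpretability and convergence framing in the abstract comes from; the scaling difficulty is that approaching convergence may require many such layers.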
Related papers
- Graph as a feature: improving node classification with non-neural graph-aware logistic regression [2.952177779219163]
Graph-aware Logistic Regression (GLR) is a non-neural model designed for node classification tasks.
Unlike traditional graph algorithms that use only a fraction of the information accessible to GNNs, our proposed model simultaneously leverages both node features and the relationships between entities.
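As a rough illustration of what leveraging both node features and entity relationships can look like in a non-neural model, here is a minimal sketch; the helper name `graph_aware_features` and the neighbor-mean feature map are assumptions for illustration, not necessarily GLR's published design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def graph_aware_features(X, A):
    """Concatenate each node's own features with the mean of its
    neighbors' features, so a plain linear classifier sees both node
    attributes and graph structure. (Hypothetical feature map.)"""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # guard isolated nodes
    neighbor_mean = (A @ X) / deg
    return np.hstack([X, neighbor_mean])

# Usage sketch (X: node features, A: adjacency, y: labels, train_idx given):
# clf = LogisticRegression(max_iter=1000)
# clf.fit(graph_aware_features(X, A)[train_idx], y[train_idx])
```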
arXiv Detail & Related papers (2024-11-19T08:32:14Z)
- Rethinking Graph Transformer Architecture Design for Node Classification [4.497245600377944]
The Graph Transformer (GT) is a special type of Graph Neural Network (GNN) that utilizes multi-head attention to facilitate high-order message passing.
In this work, we conduct observational experiments to explore the adaptability of the GT architecture in node classification tasks.
Our proposed GT architecture adapts effectively to node classification tasks without suffering from global noise or computational-efficiency limitations.
arXiv Detail & Related papers (2024-10-15T02:08:16Z)
- Binary Graph Convolutional Network with Capacity Exploration [58.99478502486377]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node attributes.
Our Bi-GCN can reduce the memory consumption by an average of 31x for both the network parameters and input data, and accelerate the inference speed by an average of 51x.
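For intuition on where such savings come from, below is a minimal sketch of 1-bit quantization in a GCN layer using the common sign-plus-scaling recipe; Bi-GCN's exact binarization scheme and the bitwise kernels behind its reported speedups are detailed in the paper, so treat these names and choices as assumptions.

```python
import numpy as np

def binarize(W):
    """Approximate W by alpha * sign(W) with alpha = mean |W| (an
    XNOR-Net-style recipe; the paper's exact scheme may differ). Storing
    signs plus one scalar is what yields the large memory reduction."""
    return np.abs(W).mean() * np.sign(W)

def bi_gcn_layer(A_hat, H, W):
    # Binarize both the layer input and its weights before propagating
    # over the normalized adjacency A_hat, then apply a ReLU.
    return np.maximum(A_hat @ binarize(H) @ binarize(W), 0.0)
```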
arXiv Detail & Related papers (2022-10-24T12:05:17Z)
- MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We identify and justify two weaknesses of implicit GNNs: constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their inability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies.
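For readers unfamiliar with implicit layers: instead of stacking K explicit propagation steps, an implicit GNN defines its output as the fixed point of a single equation. Below is a minimal sketch of that general mechanism; MGNNI's multiscale parameterization is more elaborate, and `gamma`, `tol`, and the tanh update here are assumptions.

```python
import numpy as np

def implicit_gnn_layer(A_hat, X, W, U, gamma=0.8, tol=1e-5, max_iter=100):
    """Solve for the fixed point Z* of Z = tanh(gamma * A_hat @ Z @ W + X @ U)
    by simple iteration (a generic implicit-GNN sketch, not MGNNI's exact
    formulation). A_hat is a normalized adjacency matrix."""
    Z = np.zeros((X.shape[0], W.shape[0]))
    for _ in range(max_iter):
        Z_next = np.tanh(gamma * A_hat @ Z @ W + X @ U)
        if np.linalg.norm(Z_next - Z) < tol:
            break
        Z = Z_next
    return Z
```

Intuitively, the contraction strength that guarantees the iteration converges also damps the influence of distant nodes, which is one way to read the "limited effective range" weakness noted above.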
arXiv Detail & Related papers (2022-10-15T18:18:55Z)
- ASGNN: Graph Neural Networks with Adaptive Structure [41.83813812538167]
We propose a novel interpretable message passing scheme with adaptive structure (ASMP) to defend against adversarial attacks on graph structure.
ASMP is adaptive in the sense that the message passing in different layers can be carried out over dynamically adjusted graphs.
arXiv Detail & Related papers (2022-10-03T15:10:40Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
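The key contrast with lottery-ticket (LTH) pipelines is that sparsity is increased gradually during the original training run, with no separate re-training phase. A generic sketch of that pattern follows; the cubic schedule and magnitude criterion are standard gradual-pruning choices, not necessarily CGP's exact criterion.

```python
import numpy as np

def cubic_sparsity_schedule(step, total_steps, final_sparsity=0.9):
    """Ramp target sparsity from 0 to final_sparsity over training
    (a common Zhu-and-Gupta-style schedule; illustrative only)."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction of weights (returns a copy)."""
    k = int(sparsity * W.size)
    if k == 0:
        return W
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    return np.where(np.abs(W) > threshold, W, 0.0)

# Inside an (assumed) training loop: every few steps, re-prune with the
# scheduled sparsity, e.g. W = magnitude_prune(W, cubic_sparsity_schedule(t, T)).
```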
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Superiority of GNN over NN in generalizing bandlimited functions [6.3151583550712065]
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications.
In this study, we investigate the proficiency of GNNs for such classifications, which can also be cast as a function approximation problem.
Our findings highlight a pronounced efficiency in utilizing GNNs to generalize a bandlimited function within an $\varepsilon$-error margin.
arXiv Detail & Related papers (2022-06-13T05:15:12Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- NDGGNET-A Node Independent Gate based Graph Neural Networks [6.155450481110693]
For nodes with sparse connectivity, it is difficult to obtain enough information through a single GNN layer.
In this work, we define a novel framework that allows a normal GNN model to accommodate more layers.
Experimental results show that our proposed model can effectively increase the model depth and perform well on several datasets.
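One common way to let a plain GNN accommodate more layers is to gate each layer's fresh aggregation against an identity path; the sketch below uses a generic residual gate, which is an assumption for illustration rather than NDGGNET's exact node-independent gate design.

```python
import numpy as np

def gated_gnn_layer(A_hat, H, W, g=0.5):
    """Mix the newly aggregated signal with the identity path using a
    gate g in [0, 1]; keeping part of H unchanged at every layer is one
    standard remedy for over-smoothing in deep GNN stacks. W must be
    square so the two paths have matching shapes. (Illustrative only.)"""
    return g * np.maximum(A_hat @ H @ W, 0.0) + (1.0 - g) * H
```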
arXiv Detail & Related papers (2022-05-11T08:51:04Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.