GMLP: Building Scalable and Flexible Graph Neural Networks with
Feature-Message Passing
- URL: http://arxiv.org/abs/2104.09880v1
- Date: Tue, 20 Apr 2021 10:19:21 GMT
- Title: GMLP: Building Scalable and Flexible Graph Neural Networks with
Feature-Message Passing
- Authors: Wentao Zhang, Yu Shen, Zheyu Lin, Yang Li, Xiaosen Li, Wen Ouyang,
Yangyu Tao, Zhi Yang, and Bin Cui
- Abstract summary: Graph Multi-layer Perceptron (GMLP) separates the neural update from the message passing.
We conduct extensive evaluations on 11 benchmark datasets.
- Score: 16.683813354137254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent studies, neural message passing has proved to be an effective way
to design graph neural networks (GNNs), which have achieved state-of-the-art
performance in many graph-based tasks. However, current neural message passing
architectures typically need to perform an expensive recursive neighborhood
expansion in multiple rounds and consequently suffer from a scalability issue.
Moreover, most existing neural message passing schemes are inflexible, since
they are restricted to fixed-hop neighborhoods and insensitive to the actual
demands of different nodes. We circumvent these limitations by a novel
feature-message passing framework, called Graph Multi-layer Perceptron (GMLP),
which separates the neural update from the message passing. With such
separation, GMLP significantly improves scalability and efficiency by
performing the message passing procedure in a pre-computed manner, and is
flexible and adaptive in leveraging node feature messages over various levels
of localities. We further derive novel variants of scalable GNNs under this
framework to achieve the best of both worlds in terms of performance and
efficiency. We conduct extensive evaluations on 11 benchmark datasets,
including large-scale datasets like ogbn-products and an industrial dataset,
demonstrating that GMLP achieves not only state-of-the-art performance but
also high training scalability and efficiency.
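To make the decoupling concrete, here is a minimal sketch of pre-computed feature-message passing: propagation over the graph happens once, up front, and only an MLP would be trained afterwards. The normalization, hop count, and function names are illustrative assumptions, not GMLP's actual implementation.

```python
# Minimal sketch of decoupled feature-message passing in the spirit of GMLP:
# message passing is pre-computed once, so training never touches the graph.
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def precompute_messages(A, X, num_hops=3):
    """One-time message passing: returns [X, SX, S^2 X, ...]."""
    S = normalize_adj(A)
    msgs, H = [X], X
    for _ in range(num_hops):
        H = S @ H          # one more hop of propagation
        msgs.append(H)
    return msgs            # list of (n, d) feature-message matrices

# toy graph: 4 nodes in a path, 3 features per node
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)

# Training would now fit an MLP on the concatenated hop features,
# never touching the graph structure again.
features = np.concatenate(precompute_messages(A, X), axis=1)
print(features.shape)  # (4, 12): original features plus 3 hops of messages
```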
Related papers
- Graph as a feature: improving node classification with non-neural graph-aware logistic regression [2.952177779219163]
Graph-aware Logistic Regression (GLR) is a non-neural model designed for node classification tasks.
Unlike traditional graph algorithms that use only a fraction of the information accessible to GNNs, our proposed model simultaneously leverages both node features and the relationships between entities (a toy construction is sketched after this entry).
arXiv Detail & Related papers (2024-11-19T08:32:14Z)
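As a rough illustration of a non-neural, graph-aware classifier in the spirit of GLR, the sketch below augments raw node features with degree-normalized neighbor averages and fits ordinary logistic regression; the feature construction is an assumption, since the paper's exact design may differ.

```python
# Hedged sketch: make logistic regression "graph-aware" by concatenating
# each node's features with its neighborhood average.
import numpy as np
from sklearn.linear_model import LogisticRegression

def graph_aware_features(A, X):
    """Concatenate raw features with degree-normalized neighbor averages."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    return np.concatenate([X, (A @ X) / deg], axis=1)

A = np.array([[0, 1, 1, 0],   # toy 4-node graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 5)
y = np.array([0, 1, 1, 0])

clf = LogisticRegression(max_iter=1000).fit(graph_aware_features(A, X), y)
print(clf.predict(graph_aware_features(A, X)))
```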
- Spatiotemporal Learning on Cell-embedded Graphs [6.8090864965073274]
We introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features.
Experiments on various PDE systems and one real-world dataset demonstrate that CeGNN achieves superior performance compared with other baseline models.
arXiv Detail & Related papers (2024-09-26T16:22:08Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs), with their pretrained knowledge and powerful semantic comprehension, have recently been shown to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message passing procedure of graph learning by enhancing a limited fraction of the graph's nodes, as sketched after this entry.
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
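A hypothetical sketch of the budgeted-enrichment idea: only a limited fraction of nodes is routed to an LLM for feature enhancement. The degree-based selection heuristic and the enrich_with_llm stub are invented stand-ins, not the paper's API.

```python
# Hypothetical "all against some" budgeting: only high-degree nodes pay
# the LLM cost; everything here is an illustrative stand-in.
import numpy as np

def select_nodes_for_llm(A, budget_frac=0.1):
    """Pick the highest-degree nodes up to a budget (one plausible heuristic)."""
    degrees = A.sum(axis=1)
    k = max(1, int(budget_frac * len(degrees)))
    return np.argsort(-degrees)[:k]

def enrich_with_llm(node_text):
    """Placeholder for an on-demand LLM call returning an enriched embedding."""
    rng = np.random.default_rng(abs(hash(node_text)) % (2**32))
    return rng.standard_normal(16)  # stand-in for an LLM-derived vector

A = (np.random.rand(20, 20) > 0.8).astype(float)
texts = [f"node {i} description" for i in range(20)]
# only the selected fraction of nodes gets LLM-enhanced features
enriched = {i: enrich_with_llm(texts[i]) for i in select_nodes_for_llm(A)}
```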
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN); the random-projection idea is sketched below.
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
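The sketch below illustrates how random projection can keep one-time message passing on a heterogeneous graph at a fixed width; the per-relation aggregation and projection dimensions are illustrative assumptions rather than RpHGNN's exact recipe.

```python
# Random-projection compression of pre-computed messages: per-relation
# neighbor aggregates are squashed to a fixed width so repeated propagation
# does not blow up feature dimensionality.
import numpy as np

rng = np.random.default_rng(0)

def project(H, out_dim=8):
    """Johnson-Lindenstrauss style random projection to out_dim columns."""
    R = rng.standard_normal((H.shape[1], out_dim)) / np.sqrt(out_dim)
    return H @ R

# two relation types on a toy heterogeneous graph with 5 nodes
A_rel1 = (rng.random((5, 5)) > 0.6).astype(float)
A_rel2 = (rng.random((5, 5)) > 0.6).astype(float)
X = rng.standard_normal((5, 32))

# one-time message passing per relation, compressed by random projection
H1 = project(A_rel1 @ X)
H2 = project(A_rel2 @ X)
inputs = np.concatenate([H1, H2], axis=1)  # fixed-width tensor for the learner
print(inputs.shape)  # (5, 16)
```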
- MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies; the basic implicit mechanism is sketched after this entry.
arXiv Detail & Related papers (2022-10-15T18:18:55Z)
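To show the implicit mechanism underlying such models, the following toy layer finds node states as a fixed point of a contractive update by simple iteration; MGNNI's multiscale propagation and training procedure are not reproduced here, and all parameters are made up.

```python
# Toy implicit-GNN layer: node states solve Z = tanh(alpha * S @ Z @ W + X),
# found by fixed-point iteration; the effective receptive field spans the
# whole graph rather than a fixed number of layers.
import numpy as np

def implicit_layer(S, X, W, alpha=0.5, tol=1e-6, max_iter=500):
    """Iterate the update map until the node states stop changing."""
    Z = np.zeros_like(X)
    for _ in range(max_iter):
        Z_new = np.tanh(alpha * S @ Z @ W + X)
        if np.linalg.norm(Z_new - Z) < tol:
            break
        Z = Z_new
    return Z

rng = np.random.default_rng(1)
S = np.eye(6) * 0.5 + 0.1               # stand-in for a normalized propagation matrix
W = rng.standard_normal((4, 4)) * 0.1   # small weights keep the map contractive
X = rng.standard_normal((6, 4))
Z = implicit_layer(S, X, W)             # (6, 4) equilibrium node states
```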
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Efficient and effective training of language and graph neural network models [36.00479096375565]
We put forth an efficient and effective framework termed language model GNN (LM-GNN) to jointly train large-scale language models and graph neural networks.
Effectiveness is achieved by stage-wise fine-tuning of the BERT model, first with heterogeneous graph information and then with a GNN model.
We evaluate the LM-GNN framework on several datasets and showcase the effectiveness of the proposed approach; a toy two-stage training skeleton follows this entry.
arXiv Detail & Related papers (2022-06-22T00:23:37Z)
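A hedged skeleton of stage-wise training in the spirit of LM-GNN: a text encoder is fine-tuned first, then held fixed while a GNN head is trained on its embeddings. TextEncoder, GNNHead, and the probe objective are toy stand-ins, not the paper's BERT or GNN components.

```python
# Two-stage training skeleton: stage 1 tunes the encoder, stage 2 freezes it
# and trains a GNN head on its output embeddings.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Toy stand-in for a BERT-like language model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 8)
    def forward(self, x):
        return torch.relu(self.fc(x))

class GNNHead(nn.Module):
    """Toy stand-in for one propagation layer over embeddings H."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)
    def forward(self, S, H):
        return self.fc(S @ H)

S = torch.eye(10) * 0.5 + 0.05          # stand-in normalized adjacency
tokens = torch.randn(10, 16)            # stand-in per-node token features
labels = torch.randint(0, 2, (10,))
enc, head, probe = TextEncoder(), GNNHead(), nn.Linear(8, 2)
loss_fn = nn.CrossEntropyLoss()

# Stage 1: fine-tune the encoder alone with a simple node-level objective.
opt1 = torch.optim.Adam(list(enc.parameters()) + list(probe.parameters()), lr=1e-2)
for _ in range(50):
    opt1.zero_grad()
    loss_fn(probe(enc(tokens)), labels).backward()
    opt1.step()

# Stage 2: keep the encoder fixed; train only the GNN head on its embeddings.
for p in enc.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(head.parameters(), lr=1e-2)
for _ in range(50):
    opt2.zero_grad()
    loss_fn(head(S, enc(tokens)), labels).backward()
    opt2.step()
```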
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications, but their finite depth limits how far information can propagate.
Motivated by this limitation, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance; a toy closed-form illustration follows this entry.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
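The toy example below conveys the infinite-depth intuition: when propagation is linear and contractive, the limit of infinitely many message-passing steps has the closed form Z = (I - alpha*S)^{-1} X, so one linear solve captures arbitrarily long-range dependencies. EIGNN's actual parameterization differs; this only illustrates the underlying idea.

```python
# Infinite-depth limit of linear contractive propagation: a single linear
# solve replaces an unbounded stack of message-passing layers.
import numpy as np

rng = np.random.default_rng(2)
n, d, alpha = 8, 3, 0.4
S = rng.random((n, n))
S /= S.sum(axis=1, keepdims=True)       # row-normalized propagation matrix
X = rng.standard_normal((n, d))

Z_closed = np.linalg.solve(np.eye(n) - alpha * S, X)  # infinite-depth limit

# sanity check: the truncated power series sum_k (alpha*S)^k X converges
# to the same answer
Z_iter, term = X.copy(), X.copy()
for _ in range(100):
    term = alpha * S @ term
    Z_iter += term
print(np.allclose(Z_closed, Z_iter))    # True
```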
- Simplifying approach to Node Classification in Graph Neural Networks [7.057970273958933]
We decouple the node feature aggregation step from the depth of the graph neural network, and empirically analyze how different aggregated features contribute to prediction performance.
We show that not all features generated via aggregation steps are useful, and often using these less informative features can be detrimental to the performance of the GNN model.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that it achieves comparable or even higher accuracy than state-of-the-art GNN models; a minimal hop-selection sketch follows this entry.
arXiv Detail & Related papers (2021-11-12T14:53:22Z)
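A minimal sketch of soft feature selection over pre-computed hop features, in the spirit of FSGNN: each hop receives a learnable softmax weight so uninformative hops can be suppressed. Dimensions and the mixing rule are illustrative assumptions.

```python
# Soft hop selection: learnable softmax weights decide how much each
# pre-computed hop feature contributes to the prediction.
import torch
import torch.nn as nn

class HopSelector(nn.Module):
    def __init__(self, num_hops, in_dim, out_dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_hops))  # one weight per hop
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, hop_feats):           # hop_feats: (num_hops, n, in_dim)
        w = torch.softmax(self.logits, dim=0)
        mixed = (w[:, None, None] * hop_feats).sum(dim=0)  # weighted hop mix
        return self.fc(mixed)

hop_feats = torch.randn(4, 10, 6)           # 4 hops, 10 nodes, 6 features each
model = HopSelector(num_hops=4, in_dim=6, out_dim=3)
out = model(hop_feats)                      # (10, 3) node logits
```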
- Graph Attention Multi-Layer Perceptron [12.129233487384965]
Graph neural networks (GNNs) have recently achieved state-of-the-art performance in many graph-based applications.
We introduce a scalable and flexible Graph Attention Multi-Layer Perceptron (GAMLP).
With three principled receptive-field attention mechanisms, each node in GAMLP flexibly and adaptively leverages the propagated features over receptive fields of different sizes, as sketched after this entry.
arXiv Detail & Related papers (2021-08-23T11:56:20Z)
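The sketch below shows per-node attention over multi-hop propagated features, the flavor of mechanism GAMLP describes: each node computes its own weighting over receptive fields of different sizes. The scoring function here is one simple choice, not the paper's exact attention.

```python
# Per-node attention over hops: unlike one global weight per hop, every node
# gets its own softmax over receptive-field sizes.
import torch
import torch.nn as nn

class NodeHopAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)      # scores one hop-feature per node

    def forward(self, hop_feats):           # hop_feats: (num_hops, n, dim)
        scores = self.score(hop_feats).squeeze(-1)        # (num_hops, n)
        attn = torch.softmax(scores, dim=0)               # per-node over hops
        return (attn.unsqueeze(-1) * hop_feats).sum(0)    # (n, dim)

hop_feats = torch.randn(5, 12, 8)           # 5 hops, 12 nodes, 8-dim features
combined = NodeHopAttention(dim=8)(hop_feats)  # (12, 8) per-node mixture
```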
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space; the core binarization trick is sketched after this entry.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
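To illustrate the core trick behind binarized networks such as BGN, the snippet below applies sign binarization with a straight-through estimator: forward passes use {-1, +1} values while gradients flow through as if binarization were the identity. This shows the mechanism only, not BGN's full architecture.

```python
# Sign binarization with a straight-through estimator (STE), the standard
# trick that keeps binarized representations trainable by gradient descent.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)                # {-1, +1} (0 is rare in practice)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                     # straight-through: pass gradient as-is

x = torch.randn(4, 8, requires_grad=True)
b = BinarizeSTE.apply(x)                    # binary node representation
target = torch.randn(4, 8)
loss = ((b - target) ** 2).mean()
loss.backward()                             # gradients reach x despite sign()
# Binary codes can be bit-packed, cutting embedding storage roughly 32x.
```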