Graph-MLP: Node Classification without Message Passing in Graph
- URL: http://arxiv.org/abs/2106.04051v1
- Date: Tue, 8 Jun 2021 02:07:21 GMT
- Title: Graph-MLP: Node Classification without Message Passing in Graph
- Authors: Yang Hu, Haoxuan You, Zhecan Wang, Zhicheng Wang, Erjin Zhou, Yue Gao
- Abstract summary: Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing with non-Euclidean structural data.
Recent works have mainly focused on powerful message passing modules; in this paper, however, we show that no message passing module is necessary.
We propose a pure multilayer-perceptron-based framework, Graph-MLP, whose supervision signal leverages the graph structure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have demonstrated their effectiveness in dealing
with non-Euclidean structural data. Both spatial-based and spectral-based GNNs
rely on the adjacency matrix to guide message passing among neighbors during
feature aggregation. Recent works have mainly focused on powerful message
passing modules; in this paper, however, we show that no message passing
module is necessary. Instead, we propose a pure
multilayer-perceptron-based framework, Graph-MLP, whose supervision signal
leverages the graph structure and is sufficient for learning discriminative
node representations. At the model level, Graph-MLP includes only multi-layer
perceptrons, an activation function, and layer normalization. At the loss level,
we design a neighboring contrastive (NContrast) loss that bridges the gap between
GNNs and MLPs by utilizing the adjacency information implicitly. This design
makes our model lighter and more robust when facing large-scale graph
data and corrupted adjacency information. Extensive experiments show that even
without adjacency information in the testing phase, our framework still reaches
comparable or even superior performance to state-of-the-art models on
the graph node classification task.
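The abstract's two ingredients, an MLP-only encoder with layer normalization and an NContrast loss that contrasts each node against its r-hop neighbors, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: layer sizes, the temperature `tau`, the loss weight, and the random stand-in data are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphMLP(nn.Module):
    """MLP-only encoder: no adjacency matrix enters the forward pass."""
    def __init__(self, in_dim, hid_dim, n_classes, dropout=0.6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hid_dim),
            nn.GELU(),
            nn.LayerNorm(hid_dim),
            nn.Dropout(dropout),
            nn.Linear(hid_dim, hid_dim),
        )
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        z = self.mlp(x)                # representation fed to the NContrast loss
        return z, self.classifier(z)   # logits fed to the cross-entropy loss

def ncontrast_loss(z, adj_r, tau=1.0):
    """Neighboring contrastive loss: nodes with adj_r[i, j] > 0 (r-hop
    neighbors) act as positives; all other batch nodes act as negatives."""
    sim = F.cosine_similarity(z.unsqueeze(1), z.unsqueeze(0), dim=-1)
    exp_sim = torch.exp(sim / tau)
    exp_sim = exp_sim * (1.0 - torch.eye(z.size(0), device=z.device))  # mask self-pairs
    pos = (exp_sim * adj_r).sum(dim=1)
    total = exp_sim.sum(dim=1)
    return -torch.log((pos + 1e-8) / (total + 1e-8)).mean()

# Toy training step on random data (shapes and values are placeholders).
x = torch.randn(32, 128)                     # node features for a sampled batch
adj_r = (torch.rand(32, 32) > 0.9).float()   # stand-in r-hop adjacency weights
labels = torch.randint(0, 7, (32,))
model = GraphMLP(in_dim=128, hid_dim=256, n_classes=7)
z, logits = model(x)
loss = F.cross_entropy(logits, labels) + 1.0 * ncontrast_loss(z, adj_r)
loss.backward()
```

Because the adjacency matrix appears only inside the loss, inference reduces to a plain MLP forward pass, which is what lets the model classify nodes without adjacency information at test time.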
Related papers
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit Graph Neural Networks (GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem.
arXiv Detail & Related papers (2023-08-07T05:22:33Z)
- Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Multi-Level Attention Pooling for Graph Neural Networks: Unifying Graph Representations with Multiple Localities [4.142375560633827]
Graph neural networks (GNNs) have been widely used to learn vector representations of graph-structured data.
A potential cause is that deep GNN models tend to lose the nodes' local information through many message passing steps.
We propose a multi-level attention pooling architecture to solve this so-called oversmoothing problem.
arXiv Detail & Related papers (2021-03-02T05:58:12Z)
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- SAIL: Self-Augmented Graph Contrastive Learning [40.76236706250037]
This paper studies learning node representations with graph neural networks (GNNs) in the unsupervised scenario.
We derive a theoretical analysis and provide an empirical demonstration of the unstable performance of GNNs across different graph datasets.
arXiv Detail & Related papers (2020-09-02T10:27:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.