Rethinking Graph Regularization for Graph Neural Networks
- URL: http://arxiv.org/abs/2009.02027v2
- Date: Sun, 20 Dec 2020 15:52:58 GMT
- Title: Rethinking Graph Regularization for Graph Neural Networks
- Authors: Han Yang and Kaili Ma and James Cheng
- Abstract summary: We show that graph Laplacian regularization brings little-to-no benefit to existing graph neural networks (GNNs).
We propose a simple but non-trivial variant of graph Laplacian regularization, called propagation-regularization (P-reg).
We demonstrate that P-reg can effectively boost the performance of existing GNN models on both node-level and graph-level tasks.
- Score: 21.32758655943999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The graph Laplacian regularization term is usually used in semi-supervised
representation learning to provide graph structure information for a model
$f(X)$. However, with the recent popularity of graph neural networks (GNNs),
directly encoding graph structure $A$ into a model, i.e., $f(A, X)$, has become
the more common approach. We show that graph Laplacian regularization
brings little-to-no benefit to existing GNNs, and propose a simple but
non-trivial variant of graph Laplacian regularization, called
Propagation-regularization (P-reg), to boost the performance of existing GNN
models. We provide formal analyses to show that P-reg not only infuses extra
information (that is not captured by the traditional graph Laplacian
regularization) into GNNs, but also has the capacity equivalent to an
infinite-depth graph convolutional network. We demonstrate that P-reg can
effectively boost the performance of existing GNN models on both node-level and
graph-level tasks across many different datasets.
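To make the idea concrete, below is a minimal PyTorch sketch of P-reg as the abstract describes it: the GNN output $Z = f(A, X)$ is propagated one extra step through a normalized adjacency matrix, and the discrepancy between $Z$ and the propagated output is penalized alongside the supervised loss. The row normalization, the two discrepancy measures, and the weighting factor `mu` are common conventions assumed here, not necessarily the authors' exact choices.

```python
import torch
import torch.nn.functional as F

def p_reg_loss(Z, A, phi="ce"):
    """P-reg sketch: penalize the discrepancy between the GNN output
    Z (n x c logits) and its one-step propagation over the graph."""
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    Z_prop = (A / deg) @ Z                  # A_hat = D^{-1} A, one extra hop
    if phi == "se":                         # squared-error variant
        return 0.5 * (Z_prop - Z).pow(2).sum() / Z.size(0)
    P = Z.softmax(dim=1)                    # soft predictions as targets
    return -(P * Z_prop.log_softmax(dim=1)).sum(dim=1).mean()

# Assumed usage, with `mu` balancing the supervised loss and the penalty:
# loss = F.cross_entropy(Z[train_mask], y[train_mask]) + mu * p_reg_loss(Z, A)
```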
Related papers
- SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks [5.008597638379227]
Graph neural networks (GNNs) have become the de facto model of choice for graph classification.
We propose a regularization strategy that can be applied to any GNN to improve its generalization capabilities without requiring access to the test data.
Our regularization is based on the idea of simulating a shift in the size of the training graphs using coarsening techniques.
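A loose sketch of the coarsening idea, under explicit assumptions: random node clustering stands in for a real graph-coarsening algorithm, mean-pooled features and a binarized cluster adjacency define the coarsened graph, and a simple consistency penalty compares graph-level predictions. The paper's actual coarsening and loss are more principled than this.

```python
import torch

def size_shift_penalty(model, X, A, ratio=0.5):
    """Sketch: coarsen the graph to simulate a size shift, then penalize the
    gap between graph-level predictions on the original and coarsened graph.
    Random clustering below is only a stand-in for real coarsening."""
    n = X.size(0)
    k = max(1, int(n * ratio))
    assign = torch.randint(k, (n,))                    # random cluster ids
    B = torch.zeros(k, n)
    B[assign, torch.arange(n)] = 1.0                   # cluster membership
    C = B / B.sum(dim=1, keepdim=True).clamp(min=1.0)  # mean-pool matrix
    Xc = C @ X                                         # coarse node features
    Ac = ((B @ A @ B.T) > 0).float()                   # coarse adjacency
    p = model(X, A).softmax(-1).mean(0)                # graph-level prediction
    pc = model(Xc, Ac).softmax(-1).mean(0)
    return (p - pc).pow(2).sum()                       # consistency penalty
```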
arXiv Detail & Related papers (2022-07-16T09:50:45Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on established benchmark datasets demonstrate the strong performance of the proposed AKGNN.
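One plausible reading of "adapting the graph kernel", sketched under assumptions: each layer learns a scalar that interpolates between an all-pass filter (the identity) and a low-pass filter (the normalized adjacency). The exact parameterization in AKGNN may differ from this.

```python
import torch
import torch.nn as nn

class AdaptiveKernelLayer(nn.Module):
    """Sketch of an adaptive-kernel layer: a learnable scalar interpolates
    between an all-pass filter (identity) and a low-pass filter (normalized
    adjacency). AKGNN's exact parameterization may differ."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.lam = nn.Parameter(torch.tensor(1.0))  # learnable kernel knob

    def forward(self, X, A_norm):
        lam = 1.0 + torch.relu(self.lam)            # keep lambda >= 1
        mixed = (2 * lam / (1 + lam)) * X + (2 / (1 + lam)) * (A_norm @ X)
        return torch.relu(self.lin(mixed))
```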
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
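A hypothetical sketch of the "supervision from neighboring graphs" idea: build a kNN graph over pooled graph embeddings and smooth each graph's representation with its neighbors'. G$2$GNN's actual construction (kernel-based neighbor search plus local augmentations) is richer than this.

```python
import torch

def smooth_over_graph_of_graphs(G_emb, k=3):
    """Sketch: treat each pooled graph embedding as a node, connect it to its
    k nearest neighbors, and average embeddings over that neighborhood so
    related graphs share supervision signal."""
    dist = torch.cdist(G_emb, G_emb)                     # pairwise distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:] # drop self-match
    return (G_emb + G_emb[knn].mean(dim=1)) / 2          # neighbor smoothing
```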
arXiv Detail & Related papers (2021-12-01T02:25:47Z)
- IV-GNN: Interval Valued Data Handling Using Graph Neural Network [12.651341660194534]
Graph Neural Network (GNN) is a powerful tool to perform standard machine learning on graphs.
This article proposes an Interval-Valued Graph Neural Network (IV-GNN), a novel GNN model where, for the first time, we relax the restriction that the feature space be countable.
Our model is much more general than existing models, as any countable set is always a subset of the universal set $\mathbb{R}^n$, which is uncountable.
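A minimal sketch of interval-valued message passing, assuming mean aggregation: each node feature is an interval $[lo, hi]$, and aggregating both endpoints with the same nonnegative weights keeps the result a valid interval. IV-GNN's aggregators are defined over a proper interval algebra and may be more involved.

```python
import torch

def interval_mean_aggregate(X_lo, X_hi, A):
    """Sketch: node features are intervals [X_lo, X_hi]; mean-aggregating both
    endpoints with the same nonnegative weights yields a valid interval."""
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (A @ X_lo) / deg, (A @ X_hi) / deg
```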
arXiv Detail & Related papers (2021-11-17T15:37:09Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
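The "graph as feature smoothing" observation suggests a scalable recipe in the spirit of SGC/SIGN-style models (assumed here as background, not taken from this paper): precompute multi-hop smoothed features once, offline, then train a graph-free MLP; for heterogeneous graphs this can be done per relation subgraph and the results combined.

```python
import torch

def precompute_smoothed_features(X, A_norm, hops=2):
    """Push features through the graph once, offline; training then needs no
    graph at all (a plain MLP over the concatenated hop features suffices).
    For heterogeneous graphs, run this per relation subgraph and combine."""
    feats = [X]
    for _ in range(hops):
        feats.append(A_norm @ feats[-1])   # one smoothing hop
    return torch.cat(feats, dim=1)         # [n, d * (hops + 1)]
```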
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from the unified framework (UGNN), to handle graphs with adaptive smoothness across nodes.
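The denoising view can be checked in a few lines: one gradient-descent step on the objective $\min_F \|F - X\|_F^2 + c\,\mathrm{tr}(F^\top L F)$, started at $F = X$, reproduces a GCN-style smoothing step. The specific objective and step size below are illustrative; the paper's unified framework is parameterized more generally.

```python
import torch

# One gradient step on  min_F ||F - X||_F^2 + c * tr(F^T L F),  started at
# F = X with step size 1, equals the smoothing step (1 - 2c) X + 2c A_norm X.
n, d, c = 5, 3, 0.25
A = ((torch.rand(n, n) + torch.rand(n, n).T) > 1.0).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
A = A + torch.eye(n)                                # self-loops, min degree 1
D_inv_sqrt = torch.diag(A.sum(1).rsqrt())
A_norm = D_inv_sqrt @ A @ D_inv_sqrt                # sym-normalized adjacency
L = torch.eye(n) - A_norm                           # normalized Laplacian
X = torch.randn(n, d)

grad_at_X = 2 * (X - X) + 2 * c * (L @ X)           # gradient of objective at F = X
one_step = X - grad_at_X                            # single descent step
gcn_step = (1 - 2 * c) * X + 2 * c * (A_norm @ X)   # GCN-style aggregation
assert torch.allclose(one_step, gcn_step, atol=1e-6)
```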
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training [101.3819906739515]
We study which normalization is effective for Graph Neural Networks (GNNs).
Faster convergence is achieved with InstanceNorm compared to BatchNorm and LayerNorm.
GraphNorm also improves the generalization of GNNs, achieving better performance on graph classification benchmarks.
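A sketch of the proposed normalization for a single graph, assuming the published formula is followed: per-graph, per-feature statistics (InstanceNorm over nodes) with a learnable scale on the mean shift, so the network can keep part of the mean signal instead of always removing it.

```python
import torch
import torch.nn as nn

class GraphNormSketch(nn.Module):
    """Per-graph, per-feature normalization (InstanceNorm over nodes) with a
    learnable scale alpha on the mean shift."""
    def __init__(self, d, eps=1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(d))   # learnable mean scaling
        self.gamma = nn.Parameter(torch.ones(d))   # affine scale
        self.beta = nn.Parameter(torch.zeros(d))   # affine shift
        self.eps = eps

    def forward(self, H):                          # H: [num_nodes, d], one graph
        shifted = H - self.alpha * H.mean(dim=0)
        sigma = shifted.pow(2).mean(dim=0).add(self.eps).sqrt()
        return self.gamma * shifted / sigma + self.beta
```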
arXiv Detail & Related papers (2020-09-07T17:55:21Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
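A minimal sketch of one concrete distance encoding: shortest-path distance to a target node set, computed by multi-source BFS, capped and one-hot encoded as extra node features. DE in the paper covers a broader family of distance measures (e.g., random-walk landing probabilities).

```python
import torch
from collections import deque

def spd_encoding(adj_list, targets, num_nodes, max_dist=4):
    """Multi-source BFS from the target node set; distances are capped at
    max_dist and one-hot encoded as extra node features."""
    dist = [max_dist] * num_nodes
    q = deque(targets)
    for t in targets:
        dist[t] = 0
    while q:
        u = q.popleft()
        for v in adj_list[u]:
            if dist[u] + 1 < dist[v]:
                dist[v] = dist[u] + 1
                q.append(v)
    onehot = torch.zeros(num_nodes, max_dist + 1)
    onehot[torch.arange(num_nodes), torch.tensor(dist)] = 1.0
    return onehot   # concatenate with X before running the GNN
```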
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
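A hedged sketch in the spirit of generative pre-training: mask some node attributes, encode the graph, and train the encoder to regenerate the masked attributes and to score observed edges against random negatives. GPT-GNN's factorized attribute/edge generation and its sampling scheme are more elaborate; the `encoder` and `decoder` modules here are assumptions, not from the paper.

```python
import torch
import torch.nn.functional as F

def generative_pretrain_loss(encoder, decoder, X, A, mask_rate=0.15):
    """Mask node attributes, encode, then (i) regenerate masked attributes and
    (ii) score observed edges against random negatives via dot products."""
    n = X.size(0)
    masked = torch.rand(n) < mask_rate
    X_in = torch.where(masked[:, None], torch.zeros_like(X), X)
    H = encoder(X_in, A)                              # node embeddings
    attr_loss = F.mse_loss(decoder(H[masked]), X[masked])
    src, dst = A.nonzero(as_tuple=True)               # observed edges
    neg = torch.randint(n, dst.shape)                 # random negatives
    logits = torch.cat([(H[src] * H[dst]).sum(-1), (H[src] * H[neg]).sum(-1)])
    labels = torch.cat([torch.ones(len(src)), torch.zeros(len(src))])
    return attr_loss + F.binary_cross_entropy_with_logits(logits, labels)
```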
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
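A greedy stand-in for the model-level objective: grow a small graph edge by edge, keeping an edge only if it raises the model's predicted probability for a target class. XGNN itself trains a reinforcement-learning graph generator with validity constraints; the greedy search and one-hot dummy features below only illustrate the same objective.

```python
import torch

def explain_class_greedy(gnn, num_nodes, target_class, sweeps=3):
    """Grow a small graph edge by edge, keeping an edge only if it raises the
    model's predicted probability for the target class."""
    A = torch.eye(num_nodes)              # start from isolated (self-loop) nodes
    X = torch.eye(num_nodes)              # dummy one-hot node features (assumed)
    def score(adj):
        return gnn(X, adj).softmax(-1).mean(0)[target_class].item()
    best = score(A)
    for _ in range(sweeps):
        for i in range(num_nodes):
            for j in range(i + 1, num_nodes):
                if A[i, j] == 0:
                    A[i, j] = A[j, i] = 1.0
                    s = score(A)
                    if s > best:
                        best = s             # keep the helpful edge
                    else:
                        A[i, j] = A[j, i] = 0.0
    return A, best
```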
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.