Training Graph Neural Networks by Graphon Estimation
- URL: http://arxiv.org/abs/2109.01918v1
- Date: Sat, 4 Sep 2021 19:21:48 GMT
- Title: Training Graph Neural Networks by Graphon Estimation
- Authors: Ziqing Hu, Yihao Fang, Lizhen Lin
- Abstract summary: We propose to train a graph neural network via resampling from a graphon estimate obtained from the underlying network data.
We show that our approach is competitive with, and in many cases outperforms, other over-smoothing-reducing GNN training methods.
- Score: 2.5997274006052544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose to train a graph neural network via resampling
from a graphon estimate obtained from the underlying network data. More specifically,
the graphon, or link probability matrix, of the underlying network is first estimated;
a new network is then resampled from this estimate and used during the training
process at each layer. The uncertainty induced by the resampling helps mitigate the
well-known issue of over-smoothing in graph neural network (GNN) models. Our framework
is general, computationally efficient, and conceptually simple. Another appealing
feature of our method is that it requires minimal additional tuning during the
training process. Extensive numerical results show that our approach is competitive
with, and in many cases outperforms, other over-smoothing-reducing GNN training
methods.
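The abstract does not commit to a particular graphon estimator or GNN architecture, so the following is only a minimal, hypothetical sketch of the training idea it describes: estimate a link-probability matrix from the observed adjacency matrix (here via universal singular value thresholding, an assumed stand-in estimator), then resample a Bernoulli graph from that estimate at every layer of every forward pass and propagate messages over the resampled graph. All names below (usvt_link_probabilities, ResampledGCN, and so on) are illustrative, not the authors' code.

```python
import numpy as np
import torch
import torch.nn as nn


def usvt_link_probabilities(adj: np.ndarray, tau_scale: float = 2.0) -> np.ndarray:
    """Estimate the link-probability matrix P from an observed adjacency matrix
    via universal singular value thresholding (a stand-in estimator)."""
    n = adj.shape[0]
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    s[s < tau_scale * np.sqrt(n)] = 0.0      # keep only the large singular values
    p_hat = (u * s) @ vt
    return np.clip(p_hat, 0.0, 1.0)          # probabilities must lie in [0, 1]


def resample_adjacency(p_hat: np.ndarray, rng: np.random.Generator) -> torch.Tensor:
    """Draw a symmetric graph with edge (i, j) present with probability P_ij."""
    upper = np.triu(rng.random(p_hat.shape) < p_hat, k=1)
    adj = (upper | upper.T).astype(np.float32)
    return torch.from_numpy(adj)


def gcn_propagation(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as used by a GCN layer."""
    a_hat = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class ResampledGCN(nn.Module):
    """Two-layer GCN in which each layer propagates over a freshly resampled graph."""

    def __init__(self, in_dim, hidden_dim, out_dim, p_hat, seed=0):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, out_dim)
        self.p_hat = p_hat
        self.rng = np.random.default_rng(seed)

    def forward(self, x):
        s1 = gcn_propagation(resample_adjacency(self.p_hat, self.rng))
        x = torch.relu(self.lin1(s1 @ x))
        s2 = gcn_propagation(resample_adjacency(self.p_hat, self.rng))
        return self.lin2(s2 @ x)
```

Because a fresh graph is drawn independently at each layer and each training step, the propagation operator never settles into a single fixed smoothing operator; this injected uncertainty is what the abstract credits with mitigating over-smoothing.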
Related papers
- Stealing Training Graphs from Graph Neural Networks [54.52392250297907]
Graph Neural Networks (GNNs) have shown promising results in modeling graphs in various tasks.
As neural networks can memorize the training samples, the model parameters of GNNs have a high risk of leaking private training data.
We investigate a novel problem of stealing graphs from trained GNNs.
arXiv Detail & Related papers (2024-11-17T23:15:36Z) - Finding Hamiltonian cycles with graph neural networks [0.0]
We train a small message-passing graph neural network to predict Hamiltonian cycles on Erdős-Rényi random graphs.
The model generalizes well to larger graph sizes and retains reasonable performance even on graphs eight times the original size.
arXiv Detail & Related papers (2023-06-10T21:18:31Z) - Graph Neural Networks Go Forward-Forward [0.0]
We present the Graph Forward-Forward (GFF) algorithm, an extension of the Forward-Forward procedure to graphs.
Our method is agnostic to the message-passing scheme, and provides a more biologically plausible learning scheme than backpropagation.
We run experiments on 11 standard graph property prediction tasks, showing how GFF provides an effective alternative to backpropagation.
arXiv Detail & Related papers (2023-02-10T14:45:36Z) - Unlearning Graph Classifiers with Limited Data Resources [39.29148804411811]
Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs)
Our main contribution is the first known nonlinear approximate graph unlearning method based on GSTs.
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution is extensive simulation results which show that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up
arXiv Detail & Related papers (2022-11-06T20:46:50Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Neural Capacitance: A New Perspective of Neural Network Selection via
Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z) - Scalable Consistency Training for Graph Neural Networks via
Self-Ensemble Self-Distillation [13.815063206114713]
We introduce a novel consistency training method to improve accuracy of graph neural networks (GNNs)
For a target node, we generate different neighborhood expansions and distill the average of their predictions back into the GNN.
Our method approximates the expected prediction of the possible neighborhood samples and practically only requires a few samples.
arXiv Detail & Related papers (2021-10-12T19:24:42Z) - Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (a minimal sketch of this sampling step is given after this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - Variational models for signal processing with Graph Neural Networks [3.5939555573102853]
This paper is devoted to signal processing on point-clouds by means of neural networks.
In this work, we investigate the use of variational models for such Graph Neural Networks to process signals on graphs for unsupervised learning.
arXiv Detail & Related papers (2021-03-30T13:31:11Z) - Local Critic Training for Model-Parallel Learning of Deep Neural
Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs)
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
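The "Increase and Conquer" entry above mentions Bernoulli sampling of graphs from a graphon; the sketch below illustrates that sampling step under the standard graphon model. The toy graphon and the size schedule are illustrative assumptions, not taken from that paper.

```python
import numpy as np


def sample_graph_from_graphon(w, n, rng):
    """Bernoulli-sample an n-node graph from graphon w: draw latent positions
    u_i ~ Uniform(0, 1) and connect i and j with probability w(u_i, u_j)."""
    u = rng.random(n)
    upper = np.triu(rng.random((n, n)) < w(u[:, None], u[None, :]), k=1)
    return (upper | upper.T).astype(np.float32)


def toy_graphon(x, y):
    # Assumed smooth graphon, for illustration only.
    return 0.8 * np.exp(-3.0 * (x - y) ** 2)


# Growing size schedule, echoing the idea of training a GNN on successively
# larger graphs sampled from the same graphon.
rng = np.random.default_rng(0)
growing_graphs = [sample_graph_from_graphon(toy_graphon, n, rng) for n in (64, 128, 256)]
```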