Data Augmentation View on Graph Convolutional Network and the Proposal of Monte Carlo Graph Learning
- URL: http://arxiv.org/abs/2006.13090v1
- Date: Tue, 23 Jun 2020 15:25:05 GMT
- Title: Data Augmentation View on Graph Convolutional Network and the Proposal of Monte Carlo Graph Learning
- Authors: Hande Dong, Zhaolin Ding, Xiangnan He, Fuli Feng and Shuxian Bi
- Abstract summary: We introduce a new understanding of graph convolutional networks -- data augmentation -- which is more transparent than the previous spectral and spatial understandings.
Inspired by it, we propose a new graph learning paradigm -- Monte Carlo Graph Learning (MCGL).
We show that MCGL's tolerance to graph structure noise is weaker than that of GCN on noisy graphs.
- Score: 51.03995934179918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today, there are two major understandings of graph convolutional
networks, i.e., in the spectral and spatial domains, but both lack transparency.
In this work, we introduce a new understanding of GCN -- data augmentation --
which is more transparent than the previous ones. Inspired by it, we propose a
new graph learning paradigm -- Monte Carlo Graph Learning (MCGL). The core idea
of MCGL consists of two steps: (1) Data augmentation: propagate the labels of
the training set through the graph structure and thereby expand the training
set; (2) Model training: use the expanded training set to train traditional
classifiers. We use synthetic datasets to compare the strengths of MCGL and the
graph convolutional operation on clean graphs. In addition, we show that MCGL's
tolerance to graph structure noise is weaker than that of GCN on noisy graphs
(four real-world datasets). Moreover, inspired by MCGL, we re-analyze why the
performance of GCN degrades when the model is deepened too much: rather than
the mainstream view of over-smoothing, we argue that the main cause is graph
structure noise, and we verify this view experimentally. The code is available
at https://github.com/DongHande/MCGL.
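To make the two-step paradigm concrete, here is a minimal sketch of MCGL as
described in the abstract. It is an illustrative reconstruction, not the
authors' released code: the function names, the breadth-first label
propagation, the num_hops parameter, and the use of scikit-learn's
LogisticRegression as the "traditional classifier" are all assumptions.

```python
# Minimal MCGL sketch (illustrative reconstruction, not the code at
# https://github.com/DongHande/MCGL; names and classifier choice are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression


def expand_training_set(adj, labels, train_idx, num_hops=2):
    """Step 1 -- data augmentation: push training labels along graph edges.

    adj       : (n, n) dense 0/1 adjacency matrix
    labels    : (n,) integer labels, trusted only on train_idx
    train_idx : indices of the originally labeled nodes
    num_hops  : how far to propagate pseudo-labels through the graph
    """
    expanded_idx = list(train_idx)
    expanded_lbl = [labels[i] for i in train_idx]
    frontier = {i: labels[i] for i in train_idx}
    visited = set(train_idx)
    for _ in range(num_hops):
        next_frontier = {}
        for node, lbl in frontier.items():
            for nbr in np.flatnonzero(adj[node]):
                if nbr not in visited:  # give unvisited neighbors the same pseudo-label
                    visited.add(nbr)
                    next_frontier[nbr] = lbl
                    expanded_idx.append(nbr)
                    expanded_lbl.append(lbl)
        frontier = next_frontier
    return np.array(expanded_idx), np.array(expanded_lbl)


def mcgl_train(features, adj, labels, train_idx, num_hops=2):
    """Step 2 -- model training: fit a traditional classifier on the expanded set."""
    idx, lbl = expand_training_set(adj, labels, train_idx, num_hops)
    clf = LogisticRegression(max_iter=1000)  # any standard classifier could be used
    clf.fit(features[idx], lbl)
    return clf
```

On a clean graph, neighbors usually share a training node's label, so the
pseudo-labels added in step 1 are mostly correct; on a noisy graph, wrong edges
propagate wrong labels, which is consistent with the abstract's finding that
MCGL tolerates structure noise less well than GCN.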
Related papers
- Community-Invariant Graph Contrastive Learning [21.72222875193335]
This research investigates the role of the graph community in graph augmentation.
We propose a community-invariant GCL framework to maintain graph community structure during learnable graph augmentation.
arXiv Detail & Related papers (2024-05-02T14:59:58Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN)
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
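As a rough illustration of the "unrolled and truncated proximal gradient" idea
mentioned in this entry, the hypothetical sketch below unrolls a few
proximal-gradient steps on a toy deconvolution objective with learnable step
sizes and thresholds; the layer count, the quadratic mixture model, and the
soft-thresholding proximal step are assumptions for illustration, not the GDN
architecture from the cited paper.

```python
# Hypothetical unrolled proximal-gradient sketch of graph deconvolution
# (toy objective; not the GDN architecture from the cited paper).
import torch
import torch.nn as nn


class UnrolledGraphDeconv(nn.Module):
    def __init__(self, num_layers=10):
        super().__init__()
        # one learnable step size and soft-threshold per unrolled iteration
        self.step = nn.Parameter(torch.full((num_layers,), 0.05))
        self.lam = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, S):
        """S: observed (n, n) graph, assumed here to mix the latent graph A as A @ A."""
        A = S.clone()
        for alpha, lam in zip(self.step, self.lam):
            # gradient step on 0.5 * ||A @ A - S||_F^2 with respect to A
            resid = A @ A - S
            A = A - alpha * (resid @ A.T + A.T @ resid)
            # proximal step: soft-threshold and keep edge weights nonnegative
            A = torch.clamp(A - lam, min=0.0)
        return A
```

Training such an unrolled network in a supervised fashion against known latent
graphs (e.g., with a regression loss on edge weights) corresponds to the link
prediction and edge-weight regression uses described in the snippet above.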
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Scaling R-GCN Training with Graph Summarization [71.06855946732296]
Training of Relational Graph Convolutional Networks (R-GCN) does not scale well with the size of the graph.
In this work, we experiment with the use of graph summarization techniques to compress the graph.
We obtain reasonable results on the AIFB, MUTAG and AM datasets.
arXiv Detail & Related papers (2022-03-05T00:28:43Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervision has recently been surging at its new frontier, graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.