When Does Self-Supervision Help Graph Convolutional Networks?
- URL: http://arxiv.org/abs/2006.09136v4
- Date: Sat, 18 Jul 2020 00:24:26 GMT
- Title: When Does Self-Supervision Help Graph Convolutional Networks?
- Authors: Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen
- Abstract summary: Self-supervision as an emerging technique has been employed to train convolutional neural networks (CNNs) for more transferrable, generalizable, and robust representation learning of images.
In this study, we report the first systematic exploration of incorporating self-supervision into graph convolutional networks (GCNs).
Our results show that, with properly designed task forms and incorporation mechanisms, self-supervision benefits GCNs in gaining more generalizability and robustness.
- Score: 118.37805042816784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervision as an emerging technique has been employed to train
convolutional neural networks (CNNs) for more transferrable, generalizable, and
robust representation learning of images. Its introduction to graph
convolutional networks (GCNs) operating on graph data is however rarely
explored. In this study, we report the first systematic exploration and
assessment of incorporating self-supervision into GCNs. We first elaborate
three mechanisms to incorporate self-supervision into GCNs, analyze the
limitations of pretraining & finetuning and self-training, and proceed to focus
on multi-task learning. Moreover, we propose to investigate three novel
self-supervised learning tasks for GCNs with theoretical rationales and
numerical comparisons. Lastly, we further integrate multi-task self-supervision
into graph adversarial training. Our results show that, with properly designed
task forms and incorporation mechanisms, self-supervision benefits GCNs in
gaining more generalizability and robustness. Our codes are available at
https://github.com/Shen-Lab/SS-GCNs.
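Below is a minimal sketch of the multi-task incorporation mechanism the abstract focuses on: a shared GCN encoder is trained jointly on the target classification loss and a self-supervised auxiliary loss. It assumes a PyTorch Geometric-style Data object (x, edge_index, y, train_mask); the head names, the pseudo-label source, and the weight lam are illustrative assumptions, not the paper's actual code (see the linked repository for that).

    import torch
    import torch.nn.functional as F

    def multitask_step(encoder, cls_head, ss_head, data, ss_labels,
                       optimizer, lam=0.5):
        """One step of joint training: supervised + weighted self-supervised loss."""
        optimizer.zero_grad()
        h = encoder(data.x, data.edge_index)  # shared GCN representations
        # Target task: node classification on the labeled (training) nodes.
        sup_loss = F.cross_entropy(cls_head(h)[data.train_mask],
                                   data.y[data.train_mask])
        # Self-supervised task: predict pseudo-labels derived from the graph
        # itself (e.g., node clustering or graph partitioning assignments).
        ss_loss = F.cross_entropy(ss_head(h), ss_labels)
        loss = sup_loss + lam * ss_loss  # weighted multi-task objective
        loss.backward()
        optimizer.step()
        return loss.item()

Because both tasks share the encoder, the self-supervised gradients act as a data-driven regularizer on the learned representations, which is the source of the generalizability and robustness gains the abstract reports.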
Related papers
- Applying Self-supervised Learning to Network Intrusion Detection for Network Flows with Graph Neural Network [8.318363497010969]
This paper studies the application of GNNs to identify the specific types of network flows in an unsupervised manner.
To the best of our knowledge, it is the first GNN-based self-supervised method for the multiclass classification of network flows in NIDS.
arXiv Detail & Related papers (2024-03-03T12:34:13Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Convolutional Neural Network Dynamics: A Graph Perspective [39.81881710355496]
We take a graph perspective and investigate the relationship between the graph structure of NNs and their performance.
For the dynamic graph representation of NNs, we explore structural representations for fully-connected and convolutional layers.
Our analysis shows that a simple summary of graph statistics can be used to accurately predict the performance of NNs.
arXiv Detail & Related papers (2021-11-09T20:38:48Z)
- Self-supervised Auxiliary Learning for Graph Neural Networks via Meta-Learning [16.847149163314462]
We propose a novel self-supervised auxiliary learning framework to effectively train graph neural networks.
Our method learns to learn a primary task with various auxiliary tasks to improve generalization performance.
It can be applied to any graph neural network in a plug-in manner, without manual labeling or additional data.
arXiv Detail & Related papers (2021-03-01T05:52:57Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework; see the contrastive-loss sketch after this list.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Self-supervised Training of Graph Convolutional Networks [39.80867112204255]
Graph Convolutional Networks (GCNs) have been successfully applied to analyze non-grid data.
In this paper, we propose two types of self-supervised learning strategies to exploit available information from the input graph structure data itself.
arXiv Detail & Related papers (2020-06-03T16:53:37Z)
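Several entries above, GCC in particular, pre-train GNNs contrastively. Below is a minimal sketch of an InfoNCE-style objective, assuming z1 and z2 are [B, d] embeddings of two views of the same B (sub)graphs; the temperature value and function name are illustrative assumptions, not GCC's actual implementation.

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.07):
        """Row i of z1 should match row i of z2; all other rows act as negatives."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature  # [B, B] cosine-similarity matrix
        targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
        return F.cross_entropy(logits, targets)

In GCC, the two views would come from random-walk-based subgraph augmentations of the same node's neighborhood, so the encoder learns transferable structural representations without labels.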
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.