Graph-Based Neural Network Models with Multiple Self-Supervised
Auxiliary Tasks
- URL: http://arxiv.org/abs/2011.07267v2
- Date: Fri, 4 Dec 2020 14:37:52 GMT
- Title: Graph-Based Neural Network Models with Multiple Self-Supervised
Auxiliary Tasks
- Authors: Franco Manessi, Alessandro Rozza
- Abstract summary: Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
- Score: 79.28094304325116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning is currently gaining a lot of attention, as it
allows neural networks to learn robust representations from large quantities of
unlabeled data. Additionally, multi-task learning can further improve
representation learning by training networks simultaneously on related tasks,
leading to significant performance improvements. In this paper, we propose
three novel self-supervised auxiliary tasks to train graph-based neural network
models in a multi-task fashion. Since Graph Convolutional Networks are among
the most promising approaches for capturing relationships among structured data
points, we use them as a building block to achieve competitive results on
standard semi-supervised graph classification tasks.
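
For intuition only, here is a minimal sketch (not the authors' implementation) of what multi-task training of a GCN with self-supervised auxiliary heads can look like. The auxiliary heads, their loss functions, and the weights `lambdas` are illustrative placeholders; the paper's three specific auxiliary tasks are not detailed in the abstract above.

```python
# Minimal sketch, assuming PyTorch; NOT the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), where A_hat is the
    (precomputed) symmetrically normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(a_hat @ self.lin(h))

class MultiTaskGCN(nn.Module):
    """Shared GCN encoder with one supervised head and several auxiliary heads."""
    def __init__(self, in_dim, hid_dim, n_classes, n_aux_tasks=3):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc2 = GCNLayer(hid_dim, hid_dim)
        self.primary_head = nn.Linear(hid_dim, n_classes)
        # Placeholder auxiliary heads; the paper's three self-supervised
        # tasks would define their own output shapes and losses.
        self.aux_heads = nn.ModuleList(
            nn.Linear(hid_dim, hid_dim) for _ in range(n_aux_tasks)
        )

    def forward(self, a_hat, x):
        h = self.gc2(a_hat, self.gc1(a_hat, x))
        return self.primary_head(h), [head(h) for head in self.aux_heads]

def multitask_step(model, opt, a_hat, x, y, labeled_mask, aux_losses, lambdas):
    """One step: cross-entropy on labeled nodes plus a weighted sum of
    self-supervised auxiliary losses computed on all nodes."""
    opt.zero_grad()
    logits, aux_outs = model(a_hat, x)
    loss = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    for lam, loss_fn, out in zip(lambdas, aux_losses, aux_outs):
        loss = loss + lam * loss_fn(out)
    loss.backward()
    opt.step()
    return loss.item()
```

The point of the sketch is only that the shared encoder receives gradients from both the supervised loss on the labeled nodes and the self-supervised auxiliary losses on all nodes, which is the multi-task setup the abstract describes.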
Related papers
- Deep Dependency Networks for Multi-Label Classification [24.24496964886951]
We show that the performance of previous approaches that combine Markov Random Fields with neural networks can be modestly improved.
We propose a new modeling framework called deep dependency networks, which augments the output layer of a neural network with a dependency network.
Despite its simplicity, jointly learning this new architecture yields significant improvements in performance.
arXiv Detail & Related papers (2023-02-01T17:52:40Z) - Multi network InfoMax: A pre-training method involving graph
convolutional networks [0.0]
This paper presents a pre-training method involving graph convolutional/neural networks (GCNs/GNNs).
The learned high-level graph latent representations help increase performance for downstream graph classification tasks.
We apply our method to a neuroimaging dataset for classifying subjects into healthy control (HC) and schizophrenia (SZ) groups.
arXiv Detail & Related papers (2021-11-01T21:53:20Z) - Temporal Graph Network Embedding with Causal Anonymous Walks
Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
For evaluation, we provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model on a real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z) - Self-supervised Auxiliary Learning for Graph Neural Networks via
Meta-Learning [16.847149163314462]
We propose a novel self-supervised auxiliary learning framework to effectively learn graph neural networks.
Our method learns to learn a primary task together with various auxiliary tasks in order to improve generalization performance.
It can be applied to any graph neural network in a plug-in manner, without manual labeling or additional data.
arXiv Detail & Related papers (2021-03-01T05:52:57Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Progressive Graph Convolutional Networks for Semi-Supervised Node
Classification [97.14064057840089]
Graph convolutional networks have been successful in addressing graph-based tasks such as semi-supervised node classification.
We propose a method to automatically build compact and task-specific graph convolutional networks.
arXiv Detail & Related papers (2020-03-27T08:32:16Z) - Affinity Graph Supervision for Visual Recognition [35.35959846458965]
We propose a principled method to supervise the learning of weights in affinity graphs.
Our affinity supervision improves relationship recovery between objects, even without manually annotated relationship labels.
We show that affinity learning can also be applied to graphs built from mini-batches, for neural network training.
arXiv Detail & Related papers (2020-03-19T23:52:51Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing (low-pass) filters; a rough sketch of this idea appears after this list.
As the amount of information in the feature maps increases during training, the network progressively learns better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.