Automated Self-Supervised Learning for Graphs
- URL: http://arxiv.org/abs/2106.05470v1
- Date: Thu, 10 Jun 2021 03:09:20 GMT
- Title: Automated Self-Supervised Learning for Graphs
- Authors: Wei Jin, Xiaorui Liu, Xiangyu Zhao, Yao Ma, Neil Shah, Jiliang Tang
- Abstract summary: This work aims to investigate how to automatically leverage multiple pretext tasks effectively.
We make use of a key principle of many real-world graphs, i.e., homophily, as the guidance to effectively search various self-supervised pretext tasks.
We propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks.
- Score: 37.14382990139527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph self-supervised learning has gained increasing attention due to its
capacity to learn expressive node representations. Many pretext tasks, or loss
functions, have been designed from distinct perspectives. However, we observe
that different pretext tasks affect downstream tasks differently across
datasets, which suggests that searching pretext tasks is crucial for graph
self-supervised learning. Different from existing works focusing on designing
single pretext tasks, this work aims to investigate how to automatically
leverage multiple pretext tasks effectively. Nevertheless, evaluating
representations derived from multiple pretext tasks without direct access to
ground truth labels makes this problem challenging. To address this obstacle,
we make use of a key principle of many real-world graphs, i.e., homophily, or
the principle that "like attracts like," as the guidance to effectively
search various self-supervised pretext tasks. We provide theoretical
understanding and empirical evidence to justify the flexibility of homophily in
this search task. Then we propose the AutoSSL framework which can automatically
search over combinations of various self-supervised tasks. By evaluating the
framework on 7 real-world datasets, our experimental results show that AutoSSL
can significantly boost the performance on downstream tasks including node
clustering and node classification compared with training under individual
tasks. Code will be released at https://github.com/ChandlerBang/AutoSSL.
Related papers
- Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach [28.194940062243003]
Class-incremental learning (CIL) aims to continually learn a sequence of tasks, with each task consisting of a set of unique classes.
The key characteristic of CIL lies in the absence of task identifiers (IDs) during inference.
We show theoretically that accurate task ID prediction on graph data can be achieved by a Laplacian smoothing-based graph task profiling approach.
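As a rough illustration of the Laplacian smoothing ingredient mentioned above, the sketch below applies symmetric-normalized smoothing to node features; turning the smoothed features into a task profile (mean pooling here) is our own simplification, not the paper's procedure.
```python
import numpy as np

def laplacian_smoothing(adj, features, alpha=0.5, k=2):
    """Apply X <- ((1 - alpha) I + alpha D^-1/2 A D^-1/2) X for k rounds."""
    deg = adj.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    norm_adj = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    smoothed = features.astype(float).copy()
    for _ in range(k):
        smoothed = (1 - alpha) * smoothed + alpha * norm_adj @ smoothed
    return smoothed

def task_profile(adj, features):
    """Illustrative task profile: mean of the Laplacian-smoothed node features."""
    return laplacian_smoothing(adj, features).mean(axis=0)
```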
arXiv Detail & Related papers (2024-10-14T09:54:20Z)
- Exploring Correlations of Self-Supervised Tasks for Graphs [6.977921096191354]
This paper aims to provide a fresh understanding of graph self-supervised learning based on task correlations.
We evaluate the performance of the representations trained by one specific task on other tasks and define correlation values to quantify task correlations.
We propose Graph Task Correlation Modeling (GraphTCM) to illustrate the task correlations and utilize it to enhance graph self-supervised training.
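The sketch below shows one simple way such correlation values could be formed from a cross-task evaluation grid; the normalization is an assumption of ours, and the exact definition used by GraphTCM may differ.
```python
import numpy as np

def task_correlation(perf):
    """perf[i, j]: score of the representation trained with pretext task i when
    evaluated with task j's objective. Normalizing each column by its best score
    yields values in (0, 1], with 1 marking the best-matched training task."""
    best_per_eval_task = perf.max(axis=0, keepdims=True)
    return perf / np.clip(best_per_eval_task, 1e-12, None)

scores = np.array([[0.82, 0.40, 0.55],    # hypothetical cross-task scores
                   [0.35, 0.78, 0.60],
                   [0.50, 0.58, 0.81]])
print(task_correlation(scores).round(2))
```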
arXiv Detail & Related papers (2024-05-07T12:02:23Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful for classification tasks with little or even non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
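A hedged sketch of one way "knowledge exchange via distribution matching" can be realized: softened predictions of one task head are pulled toward a detached reference distribution from another head on shared inputs. The temperature and the KL formulation are assumptions, and this presumes the two heads share a label space; the paper's actual coupling may differ.
```python
import torch
import torch.nn.functional as F

def distribution_matching_loss(logits_a, logits_b, temperature=2.0):
    """KL divergence pulling head A's softened predictions toward head B's
    (head B is detached and acts as the reference distribution)."""
    log_p_a = F.log_softmax(logits_a / temperature, dim=-1)
    p_b = F.softmax(logits_b / temperature, dim=-1).detach()
    return F.kl_div(log_p_a, p_b, reduction="batchmean") * temperature ** 2

# Usage sketch: total loss = supervised losses of both tasks + lam * matching term.
logits_a, logits_b = torch.randn(8, 7), torch.randn(8, 7)
print(distribution_matching_loss(logits_a, logits_b).item())
```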
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of k-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- Mixture of Self-Supervised Learning [2.191505742658975]
Self-supervised learning works by training the model on a pretext task before applying it to a specific downstream task.
Previous studies have only used one type of transformation as a pretext task.
This raises the question of how performance is affected when more than one pretext task is used and whether a gating network can combine all the pretext tasks.
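A minimal sketch of the gating idea raised above, with an assumed gate input (a pooled feature vector) and architecture: a softmax gate weights one scalar loss per pretext task before they are summed.
```python
import torch
import torch.nn as nn

class PretextGate(nn.Module):
    """Softmax gate that weights one scalar loss per pretext task."""
    def __init__(self, feature_dim, n_tasks):
        super().__init__()
        self.score = nn.Linear(feature_dim, n_tasks)

    def forward(self, pooled_features, task_losses):
        weights = torch.softmax(self.score(pooled_features), dim=-1)
        return (weights * task_losses).sum(), weights

gate = PretextGate(feature_dim=128, n_tasks=3)
task_losses = torch.stack([torch.tensor(0.9), torch.tensor(1.2), torch.tensor(0.4)])
combined, weights = gate(torch.randn(128), task_losses)
print(combined.item(), weights.detach().numpy().round(2))
```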
arXiv Detail & Related papers (2023-07-27T14:38:32Z)
- Unsupervised Task Graph Generation from Instructional Video Transcripts [53.54435048879365]
We consider a setting where text transcripts of instructional videos performing a real-world activity are provided.
The goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps.
We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models along with clustering and ranking components.
arXiv Detail & Related papers (2023-02-17T22:50:08Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
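As a rough illustration of prefix-guided multi-task pre-training, the sketch below tags each example with its task name before it reaches a shared model; the tag format is illustrative, not the paper's exact prompt.
```python
def add_task_prefix(task_name: str, text: str) -> str:
    """Prepend a task tag so a shared model can tell tasks apart during pre-training."""
    return f"[{task_name}] {text}"

batch = [
    add_task_prefix("nli", "A man plays guitar. </s> Someone makes music."),
    add_task_prefix("qa", "Who wrote Hamlet? </s> Shakespeare wrote Hamlet."),
]
print(batch)
```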
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization [40.265515914447924]
Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted increasing attention from the machine learning community in recent years.
One weakness of conventional SSL frameworks for GNNs is that they learn through a single philosophy.
arXiv Detail & Related papers (2022-10-05T04:09:38Z)
- X-Learner: Learning Cross Sources and Tasks for Universal Visual Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities and computational costs.
arXiv Detail & Related papers (2022-03-16T17:23:26Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
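A hedged sketch of a heterogeneous multi-task layout in the spirit of the summary: one shared backbone feeding a multi-label detection head, a classification head, and a regression head, with the losses simply summed. Head sizes and the plain sum are assumptions, not the FaceBehaviorNet recipe.
```python
import torch
import torch.nn as nn

class HeterogeneousMTL(nn.Module):
    def __init__(self, in_dim=512, n_units=12, n_classes=7, n_regress=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.detect = nn.Linear(256, n_units)      # multi-label detection logits
        self.classify = nn.Linear(256, n_classes)  # single-label classification logits
        self.regress = nn.Linear(256, n_regress)   # continuous outputs

    def forward(self, x):
        h = self.backbone(x)
        return self.detect(h), self.classify(h), self.regress(h)

model = HeterogeneousMTL()
x = torch.randn(8, 512)                            # toy pooled face features
det_logits, cls_logits, reg_out = model(x)
loss = (nn.BCEWithLogitsLoss()(det_logits, torch.randint(0, 2, (8, 12)).float())
        + nn.CrossEntropyLoss()(cls_logits, torch.randint(0, 7, (8,)))
        + nn.MSELoss()(reg_out, torch.rand(8, 2) * 2 - 1))
print(loss.item())
```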
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks [0.0]
Recent work on few-shot learning shows that quality of learned representations plays an important role in few-shot classification performance.
On the other hand, the goal of self-supervised learning is to recover useful semantic information of the data without the use of class labels.
We exploit the complementarity of both paradigms via a multi-task framework where we leverage recent self-supervised methods as auxiliary tasks.
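A hedged sketch of attaching an auxiliary self-supervised objective to few-shot training: the same backbone receives the episode's classification loss plus a rotation-prediction loss on rotated copies of the images. Rotation prediction as the auxiliary task and the weight `lam` are our assumptions, not necessarily the methods the paper uses.
```python
import torch
import torch.nn as nn

def rotation_batch(images):
    """Return 0/90/180/270-degree rotated copies of `images` plus rotation labels."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return torch.cat(rotations), labels

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
fewshot_head = nn.Linear(16, 5)    # 5-way episode classifier
rotation_head = nn.Linear(16, 4)   # auxiliary 4-way rotation classifier

images = torch.randn(10, 3, 32, 32)           # toy support images
labels = torch.randint(0, 5, (10,))
rot_images, rot_labels = rotation_batch(images)

lam = 0.5                                     # assumed auxiliary-loss weight
loss = (nn.CrossEntropyLoss()(fewshot_head(backbone(images)), labels)
        + lam * nn.CrossEntropyLoss()(rotation_head(backbone(rot_images)), rot_labels))
print(loss.item())
```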
arXiv Detail & Related papers (2021-01-24T23:21:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.