Self-supervised Learning: Generative or Contrastive
- URL: http://arxiv.org/abs/2006.08218v5
- Date: Sat, 20 Mar 2021 05:07:03 GMT
- Title: Self-supervised Learning: Generative or Contrastive
- Authors: Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang,
Jie Tang
- Abstract summary: Self-supervised learning has shown soaring performance on representation learning in the last several years.
We examine recent self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning.
- Score: 16.326494162366973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep supervised learning has achieved great success in the last decade.
However, its dependence on manual labels and its vulnerability to attacks
have driven researchers to explore better solutions. As an alternative,
self-supervised learning has attracted many researchers with its soaring
performance on representation learning in the last several years.
Self-supervised representation learning leverages the input data itself as
supervision and benefits almost all types of downstream tasks. In this survey,
we examine recent self-supervised learning methods for representation learning
in computer vision, natural
language processing, and graph learning. We comprehensively review the existing
empirical methods and summarize them into three main categories according to
their objectives: generative, contrastive, and generative-contrastive
(adversarial). We further investigate related theoretical analyses to provide
deeper insight into how self-supervised learning works. Finally, we
briefly discuss open problems and future directions for self-supervised
learning. An outline slide for the survey is provided.
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Unleash Model Potential: Bootstrapped Meta Self-supervised Learning [12.57396771974944]
A long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision.
Self-supervised learning and meta-learning are two promising techniques toward this goal, but each captures its benefits only partially.
We propose a novel Bootstrapped Meta Self-Supervised Learning framework that aims to simulate the human learning process.
arXiv Detail & Related papers (2023-08-28T02:49:07Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Recent Advancements in Self-Supervised Paradigms for Visual Feature Representation [0.41436032949434404]
Supervised learning requires a large amount of labeled data to reach state-of-the-art performance.
To avoid the cost of labeling data, self-supervised methods were proposed to make use of widely available unlabeled data.
This study conducts a comprehensive and insightful survey and analysis of recent developments in the self-supervised paradigm for feature representation.
arXiv Detail & Related papers (2021-11-03T07:02:34Z)
- Progressive Stage-wise Learning for Unsupervised Feature Representation Enhancement [83.49553735348577]
We propose the Progressive Stage-wise Learning (PSL) framework for unsupervised learning.
Our experiments show that PSL consistently improves results for the leading unsupervised learning methods.
arXiv Detail & Related papers (2021-06-10T07:33:19Z)
- Understand and Improve Contrastive Learning Methods for Visual Representation: A Review [1.4650545418986058]
A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of the efforts of researchers to understand the key components and the limitations of self-supervised learning.
arXiv Detail & Related papers (2021-06-06T21:59:49Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Combining Self-Training and Self-Supervised Learning for Unsupervised Disfluency Detection [80.68446022994492]
In this work, we explore the unsupervised learning paradigm which can potentially work with unlabeled text corpora.
Our model builds upon the recent work on Noisy Student Training, a semi-supervised learning approach that extends the idea of self-training.
arXiv Detail & Related papers (2020-10-29T05:29:26Z)
- Self-Supervised Learning Across Domains [33.86614301708017]
We propose to apply a similar approach to the problem of object recognition across domains.
Our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals on the same images.
This secondary task helps the network to focus on object shapes, learning concepts like spatial orientation and part correlation, while acting as a regularizer for the classification task.
arXiv Detail & Related papers (2020-07-24T06:19:53Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)