Understand and Improve Contrastive Learning Methods for Visual
Representation: A Review
- URL: http://arxiv.org/abs/2106.03259v1
- Date: Sun, 6 Jun 2021 21:59:49 GMT
- Title: Understand and Improve Contrastive Learning Methods for Visual
Representation: A Review
- Authors: Ran Liu
- Abstract summary: A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of the efforts of researchers to understand the key components and the limitations of self-supervised learning.
- Score: 1.4650545418986058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional supervised learning methods are hitting a bottleneck because of
their dependency on expensive manually labeled data and their weaknesses such
as limited generalization ability and vulnerability to adversarial attacks. A
promising alternative, self-supervised learning, as a type of unsupervised
learning, has gained popularity because of its potential to learn effective
data representations without manual labeling. Among self-supervised learning
algorithms, contrastive learning has achieved state-of-the-art performance in
several fields of research. This literature review aims to provide an
up-to-date analysis of the efforts of researchers to understand the key
components and the limitations of self-supervised learning.
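For orientation, below is a minimal sketch of the NT-Xent / InfoNCE objective that contrastive methods such as SimCLR optimize, included only to illustrate what "contrastive learning" refers to in the abstract. The PyTorch implementation and the function name `info_nce_loss` are illustrative assumptions, not code from the reviewed paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Sketch of an NT-Xent / InfoNCE contrastive loss (illustrative, not from the paper).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Matching rows of z1 and z2 are positives; the other 2N - 2 embeddings
    in the batch act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    n = z1.size(0)
    idx = torch.arange(n, device=z.device)
    targets = torch.cat([idx + n, idx])                 # row i's positive is row i +/- N
    return F.cross_entropy(sim, targets)
```

In use, z1 and z2 would be encoder outputs for two random augmentations of the same image batch; minimizing the loss pulls matched views together while pushing all other samples in the batch apart.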
Related papers
- A review on discriminative self-supervised learning methods [6.24302896438145]
Self-supervised learning has emerged as a method to extract robust features from unlabeled data.
This paper provides a review of discriminative approaches of self-supervised learning within the domain of computer vision.
arXiv Detail & Related papers (2024-05-08T11:15:20Z) - A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Balancing Continual Learning and Fine-tuning for Human Activity
Recognition [21.361301806478643]
Wearable-based Human Activity Recognition (HAR) is a key task in human-centric machine learning.
This work explores the adoption and adaptation of CaSSLe, a continual self-supervised learning model.
We also investigate the importance of different loss terms and explore the trade-off between knowledge retention and learning from new tasks.
arXiv Detail & Related papers (2024-01-04T13:11:43Z) - RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z) - Unleash Model Potential: Bootstrapped Meta Self-supervised Learning [12.57396771974944]
The long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision.
Self-supervised learning and meta-learning are two promising techniques for achieving this goal, but each captures these advantages only partially.
We propose a novel Bootstrapped Meta Self-Supervised Learning framework that aims to simulate the human learning process.
arXiv Detail & Related papers (2023-08-28T02:49:07Z) - Can Self-Supervised Representation Learning Methods Withstand
Distribution Shifts and Corruptions? [5.706184197639971]
Self-supervised learning in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations.
This work investigates the robustness of representations learned by self-supervised approaches, focusing on distribution shifts and image corruptions.
arXiv Detail & Related papers (2023-07-31T13:07:56Z) - Accelerating Self-Supervised Learning via Efficient Training Strategies [98.26556609110992]
The time needed to train self-supervised deep networks remains an order of magnitude larger than that of their supervised counterparts.
Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods.
arXiv Detail & Related papers (2022-12-11T21:49:39Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Can Semantic Labels Assist Self-Supervised Visual Representation
Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - Combining Self-Training and Self-Supervised Learning for Unsupervised
Disfluency Detection [80.68446022994492]
In this work, we explore the unsupervised learning paradigm which can potentially work with unlabeled text corpora.
Our model builds upon the recent work on Noisy Student Training, a semi-supervised learning approach that extends the idea of self-training; a minimal sketch of that self-training loop appears after this list.
arXiv Detail & Related papers (2020-10-29T05:29:26Z) - Self-supervised Learning: Generative or Contrastive [16.326494162366973]
Self-supervised learning has achieved soaring performance on representation learning over the last several years.
We examine new self-supervised learning methods for representation in computer vision, natural language processing, and graph learning.
arXiv Detail & Related papers (2020-06-15T08:40:03Z)