Recent Advancements in Self-Supervised Paradigms for Visual Feature
Representation
- URL: http://arxiv.org/abs/2111.02042v1
- Date: Wed, 3 Nov 2021 07:02:34 GMT
- Title: Recent Advancements in Self-Supervised Paradigms for Visual Feature
Representation
- Authors: Mrinal Anand, Aditya Garg
- Abstract summary: Supervised learning requires a large amount of labeled data to reach state-of-the-art performance.
To avoid the cost of labeling data, self-supervised methods were proposed to make use of widely available unlabeled data.
This study conducts a comprehensive and insightful survey and analysis of recent developments in the self-supervised paradigm for feature representation.
- Score: 0.41436032949434404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We have witnessed massive growth in the supervised learning paradigm over the
past decade. Supervised learning requires a large amount of labeled data to reach
state-of-the-art performance, but labeling samples demands substantial human
annotation effort. To avoid this labeling cost, self-supervised methods
were proposed to make use of widely available unlabeled data. This study
conducts a comprehensive and insightful survey and analysis of recent
developments in the self-supervised paradigm for feature representation. In
this paper, we investigate the factors affecting the usefulness of
self-supervision under different settings. We present some of the key insights
concerning two different approaches in self-supervision, generative and
contrastive methods. We also investigate the limitations of supervised
adversarial training and how self-supervision can help overcome those
limitations. We then move on to discuss the limitations and challenges in
effectively using self-supervision for visual tasks. Finally, we highlight some
open problems and point out future research directions.
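The abstract distinguishes generative from contrastive approaches to self-supervision. As a concrete illustration of the contrastive side, below is a minimal sketch of an InfoNCE / NT-Xent style objective of the kind used in SimCLR-like methods; the function name, temperature, and batch shapes are illustrative assumptions and not anything prescribed by the survey.
```python
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive (InfoNCE / NT-Xent) loss over two augmented views of a batch.

    z1, z2: [batch, dim] embeddings produced by an encoder from two random
    augmentations of the same images (the encoder itself is assumed here).
    """
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2B, dim], unit-norm rows
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # an embedding may not match itself
    # The positive for row i is its other view: i+B for the first half, i-B for the second.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(sim.device)
    return F.cross_entropy(sim, targets)


# Toy usage with random "embeddings"; in practice z_a and z_b come from the encoder.
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z_a, z_b)
```
Generative methods, by contrast, score themselves on reconstructing the input (for example, masked or corrupted pixels) rather than on discriminating positive from negative pairs.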
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z) - Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models [7.742594744641462]
Machine unlearning aims to remove information derived from forgotten data while preserving that of the remaining dataset in a well-trained model.
We propose a supervision-free unlearning approach that operates without the need for labels during the unlearning process.
arXiv Detail & Related papers (2024-03-31T00:29:00Z) - Semi-Supervised and Unsupervised Deep Visual Learning: A Survey [76.2650734930974]
Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data.
We review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective.
arXiv Detail & Related papers (2022-08-24T04:26:21Z) - Causal Reasoning Meets Visual Representation Learning: A Prospective
Study [117.08431221482638]
The lack of interpretability, robustness, and out-of-distribution generalization is becoming a major challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, researchers have devoted great effort in recent years to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z) - Attention-based Contrastive Learning for Winograd Schemas [27.11678023496321]
This paper investigates whether contrastive learning can be extended to Transformer attention to tackle the Winograd Schema Challenge.
We propose a novel self-supervised framework, leveraging a contrastive loss directly at the level of self-attention.
arXiv Detail & Related papers (2021-09-10T21:10:22Z) - Can Semantic Labels Assist Self-Supervised Visual Representation
Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - A Simple and Effective Self-Supervised Contrastive Learning Framework
for Aspect Detection [15.36713547251997]
We propose a self-supervised contrastive learning framework and an attention-based model equipped with a novel smooth self-attention (SSA) module for the unsupervised aspect detection (UAD) task.
Our methods outperform several recent unsupervised and weakly supervised approaches on publicly available benchmark user review datasets.
arXiv Detail & Related papers (2020-09-18T22:13:49Z) - Self-supervised Learning: Generative or Contrastive [16.326494162366973]
Self-supervised learning has achieved soaring performance on representation learning over the last several years.
We review new self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning.
arXiv Detail & Related papers (2020-06-15T08:40:03Z) - Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.