Semi-Supervised and Unsupervised Deep Visual Learning: A Survey
- URL: http://arxiv.org/abs/2208.11296v1
- Date: Wed, 24 Aug 2022 04:26:21 GMT
- Title: Semi-Supervised and Unsupervised Deep Visual Learning: A Survey
- Authors: Yanbei Chen, Massimiliano Mancini, Xiatian Zhu, and Zeynep Akata
- Abstract summary: Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data.
We review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art deep learning models are often trained with a large amount
of costly labeled training data. However, requiring exhaustive manual
annotations may degrade the model's generalizability in the limited-label
regime. Semi-supervised learning and unsupervised learning offer promising
paradigms to learn from an abundance of unlabeled visual data. Recent progress
in these paradigms has indicated the strong benefits of leveraging unlabeled
data to improve model generalization and provide better model initialization.
In this survey, we review the recent advanced deep learning algorithms on
semi-supervised learning (SSL) and unsupervised learning (UL) for visual
recognition from a unified perspective. To offer a holistic understanding of
the state-of-the-art in these areas, we propose a unified taxonomy. We
categorize existing representative SSL and UL methods with comprehensive and
insightful analysis to highlight their design rationales in different learning
scenarios and their applications across computer vision tasks. Lastly, we discuss the
emerging trends and open challenges in SSL and UL to shed light on future
critical research directions.
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies the SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z) - Self-Supervised Skeleton-Based Action Representation Learning: A Benchmark and Beyond [19.074841631219233]
Self-supervised learning (SSL) has been proven effective for skeleton-based action understanding.
In this paper, we conduct a comprehensive survey on self-supervised skeleton-based action representation learning.
arXiv Detail & Related papers (2024-06-05T06:21:54Z) - Heterogeneous Contrastive Learning for Foundation Models and Beyond [73.74745053250619]
In the era of big data and Artificial Intelligence, an emerging paradigm is to utilize contrastive self-supervised learning to model large-scale heterogeneous data.
This survey critically evaluates the current landscape of heterogeneous contrastive learning for foundation models.
arXiv Detail & Related papers (2024-03-30T02:55:49Z) - One-Shot Open Affordance Learning with Foundation Models [54.15857111929812]
We introduce One-shot Open Affordance Learning (OOAL), where a model is trained with just one example per base object category.
We propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings.
Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data.
arXiv Detail & Related papers (2023-11-29T16:23:06Z) - Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities [50.231837687221685]
Self-supervised learning (SSL) has transformed machine learning and its many real world applications.
Unsupervised anomaly detection (AD) has also capitalized on SSL, by self-generating pseudo-anomalies.
arXiv Detail & Related papers (2023-08-28T07:55:01Z) - Unsupervised Representation Learning for Time Series: A Review [20.00853543048447]
Unsupervised representation learning approaches aim to learn discriminative feature representations from unlabeled data, without the requirement of annotating every sample.
We conduct a literature review of existing rapidly evolving unsupervised representation learning approaches for time series.
We empirically evaluate state-of-the-art approaches, especially the rapidly evolving contrastive learning methods, on 9 diverse real-world datasets.
arXiv Detail & Related papers (2023-08-03T07:28:06Z) - Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are among the key challenges of existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z) - A Survey on Programmatic Weak Supervision [74.13976343129966]
We give a brief introduction to the PWS learning paradigm and review representative approaches for each stage of the PWS learning workflow.
We identify several critical challenges that remain underexplored in the area to hopefully inspire future directions in the field.
arXiv Detail & Related papers (2022-02-11T04:05:38Z) - Self-supervised on Graphs: Contrastive, Generative, or Predictive [25.679620842010422]
Self-supervised learning (SSL) is emerging as a new paradigm for extracting informative knowledge through well-designed pretext tasks.
We divide existing graph SSL methods into three categories: contrastive, generative, and predictive.
We also summarize the commonly used datasets, evaluation metrics, downstream tasks, and open-source implementations of various algorithms.
arXiv Detail & Related papers (2021-05-16T03:30:03Z) - Insights from the Future for Continual Learning [45.58831178202245]
We propose prescient continual learning, a novel experimental setting, to incorporate existing information about the classes, prior to any training data.
Our setting adds future classes, with no training samples at all.
A generative model of the representation space, in concert with a careful adjustment of the losses, allows us to exploit insights from future classes to constrain the spatial arrangement of the past and current classes.
arXiv Detail & Related papers (2020-06-24T14:05:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.