How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot
Learning?
- URL: http://arxiv.org/abs/2202.09014v1
- Date: Fri, 18 Feb 2022 04:03:53 GMT
- Title: How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot
Learning?
- Authors: Yiyi Zhang, Ying Zheng, Xiaogang Xu, Jun Wang
- Abstract summary: Cross-domain few-shot learning (CDFSL) remains a largely unsolved problem in the area of computer vision.
We investigate the role of self-supervised representation learning in the context of CDFSL via a thorough evaluation of existing methods.
We find that representations extracted from self-supervised methods exhibit stronger robustness than those from the supervised method.
- Score: 17.56019071385342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain few-shot learning (CDFSL) remains a largely unsolved problem in
the area of computer vision, while self-supervised learning presents a
promising solution. Both learning methods attempt to alleviate the dependency
of deep networks on the requirement of large-scale labeled data. Although
self-supervised methods have recently advanced dramatically, their utility on
CDFSL is relatively unexplored. In this paper, we investigate the role of
self-supervised representation learning in the context of CDFSL via a thorough
evaluation of existing methods. It comes as a surprise that even with shallow
architectures or small training datasets, self-supervised methods can perform
favorably compared to the existing SOTA methods. Nevertheless, no single
self-supervised approach dominates all datasets, indicating that existing
self-supervised methods are not universally applicable. In addition, we find
that representations extracted from self-supervised methods exhibit stronger
robustness than those from the supervised method. Intriguingly, whether self-supervised
representations perform well on the source domain has little correlation with
their applicability on the target domain. As part of our study, we conduct an
objective measurement of the performance of six representative classifiers.
The results suggest the Prototypical Classifier as the standard evaluation
recipe for CDFSL.
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z) - Unsupervised Embedding Quality Evaluation [6.72542623686684]
It is often unclear whether SSL models will perform well when transferred to another domain.
Can we quantify how easy it is to linearly separate the data in a stable way?
We introduce a novel method based on recent advances in understanding the high-dimensional geometric structure of self-supervised learning.
arXiv Detail & Related papers (2023-05-26T01:06:44Z) - Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
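The summary describes SCARF's core mechanism: forming contrastive views by corrupting a random subset of features, which are then paired with the uncorrupted examples under a contrastive loss. Below is a hedged sketch of that corruption step only (assuming tabular input and PyTorch; the `corruption_rate` value and the use of in-batch marginals for replacement values are assumptions drawn from the summary, not the paper's exact procedure).

```python
import torch

def scarf_corrupt(x, corruption_rate=0.6):
    """Form a corrupted view of a batch of tabular features (SCARF-style).

    x: [batch, n_features] tensor. For a random subset of features in each
    row, the value is replaced by one sampled from that feature's empirical
    marginal, i.e. the same column of a randomly chosen other row.
    """
    batch, n_features = x.shape
    # Boolean mask marking which entries to corrupt.
    mask = torch.rand(batch, n_features) < corruption_rate
    # Replacement values: for each entry, pick a random row, keep the column.
    rand_rows = torch.randint(0, batch, (batch, n_features))
    marginals = x[rand_rows, torch.arange(n_features)]
    return torch.where(mask, marginals, x)
```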
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Multi-Domain Adversarial Feature Generalization for Person
Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z) - Self-Supervised Learning for Large-Scale Unsupervised Image Clustering [8.142434527938535]
We propose a simple scheme for unsupervised classification based on self-supervised representations.
We evaluate the proposed approach with several recent self-supervised methods showing that it achieves competitive results for ImageNet classification.
We suggest adding the unsupervised evaluation to a set of standard benchmarks for self-supervised learning.
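The general idea summarized here, unsupervised classification on top of self-supervised representations, can be illustrated with a small hypothetical pipeline: extract frozen embeddings with a pretrained self-supervised encoder, then cluster them (for example with k-means) and treat cluster assignments as pseudo-classes. The encoder, loader, and clustering choices below are illustrative assumptions, not the paper's exact setup.

```python
import torch
from sklearn.cluster import KMeans

def cluster_ssl_embeddings(encoder, loader, n_clusters=1000, device="cuda"):
    """Cluster frozen self-supervised embeddings into pseudo-classes.

    encoder: a pretrained self-supervised backbone (assumed frozen).
    loader:  a DataLoader yielding (images, _) batches.
    """
    encoder.eval()
    feats = []
    with torch.no_grad():
        for images, _ in loader:
            f = encoder(images.to(device))                     # [B, D]
            feats.append(torch.nn.functional.normalize(f, dim=1).cpu())
    feats = torch.cat(feats).numpy()

    # K-means over the embedding space; cluster ids act as unsupervised labels.
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return kmeans.fit_predict(feats)
```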
arXiv Detail & Related papers (2020-08-24T10:39:19Z)