Using Representation Expressiveness and Learnability to Evaluate
Self-Supervised Learning Methods
- URL: http://arxiv.org/abs/2206.01251v2
- Date: Tue, 14 Nov 2023 20:25:21 GMT
- Title: Using Representation Expressiveness and Learnability to Evaluate
Self-Supervised Learning Methods
- Authors: Yuchen Lu, Zhen Liu, Aristide Baratin, Romain Laroche, Aaron
Courville, Alessandro Sordoni
- Abstract summary: We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
- Score: 61.49061000562676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of evaluating the quality of self-supervised learning
(SSL) models without access to supervised labels, while being agnostic to the
architecture, learning algorithm or data manipulation used during training. We
argue that representations can be evaluated through the lens of expressiveness
and learnability. We propose to use the Intrinsic Dimension (ID) to assess
expressiveness and introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN classifier trained to
predict labels obtained by clustering the representations with K-means. We thus
combine CL and ID into a single predictor -- CLID. Through a large-scale
empirical study with a diverse family of SSL algorithms, we find that CLID
better correlates with in-distribution model performance than other competing
recent evaluation schemes. We also benchmark CLID on out-of-domain
generalization, where CLID serves as a predictor of the transfer performance of
SSL models on several visual classification tasks, yielding improvements with
respect to the competing baselines.
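The abstract states how CL is measured (a KNN predicting K-means pseudo-labels) but not the exact ID estimator or the rule that combines the two into CLID. Below is a minimal, label-free sketch of both quantities, assuming the representations are available as a NumPy array; the number of clusters, the KNN size, and the TwoNN intrinsic-dimension estimator are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch: Cluster Learnability (CL) and an intrinsic-dimension (ID) proxy.
# Hyperparameters and the TwoNN estimator are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from sklearn.model_selection import train_test_split


def cluster_learnability(reps, n_clusters=100, k=5, seed=0):
    """CL: accuracy of a KNN trained to predict K-means pseudo-labels."""
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reps)
    x_tr, x_te, y_tr, y_te = train_test_split(reps, pseudo_labels, test_size=0.5, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr)
    return knn.score(x_te, y_te)  # higher = pseudo-labels are easier to learn


def intrinsic_dimension_twonn(reps):
    """Expressiveness proxy: TwoNN ID estimate from nearest-neighbor distance ratios."""
    dists, _ = NearestNeighbors(n_neighbors=3).fit(reps).kneighbors(reps)
    mu = dists[:, 2] / np.clip(dists[:, 1], 1e-12, None)  # 2nd / 1st neighbor distance
    mu = mu[mu > 1.0]                                      # keep well-defined ratios
    return len(mu) / np.sum(np.log(mu))                    # MLE under the TwoNN model


if __name__ == "__main__":
    reps = np.random.randn(2000, 128)  # stand-in for SSL representations
    print(f"CL={cluster_learnability(reps):.3f}  ID={intrinsic_dimension_twonn(reps):.1f}")
    # The paper combines these two measures into the single predictor CLID;
    # the combination rule is not given in the abstract and is omitted here.
```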
Related papers
- Adaptive Self-supervised Robust Clustering for Unstructured Data with Unknown Cluster Number [12.926206811876174]
We introduce a novel self-supervised deep clustering approach tailored for unstructured data, termed Adaptive Self-supervised Robust Clustering (ASRC).
ASRC adaptively learns the graph structure and edge weights to capture both local and global structural information.
ASRC even outperforms methods that rely on prior knowledge of the number of clusters, highlighting its effectiveness in addressing the challenges of clustering unstructured data.
arXiv Detail & Related papers (2024-07-29T15:51:09Z) - What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find that CLIP pre-trained on such data exhibits notable robustness to the data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z) - A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Prototypical Contrastive Learning-based CLIP Fine-tuning for Object
Re-identification [13.090873217313732]
This work aims to adapt large-scale pre-trained vision-language models, such as contrastive language-image pretraining (CLIP), to enhance the performance of object re-identification (Re-ID).
We first analyze the role of prompt learning in CLIP-ReID and identify its limitations.
Our approach directly fine-tunes the image encoder of CLIP using a prototypical contrastive learning (PCL) loss, eliminating the need for prompt learning.
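As a rough illustration of the summary above, here is a hedged sketch of a prototypical contrastive loss: image features are contrasted against one prototype per identity. The momentum prototype update and the temperature value are assumptions for illustration, not the paper's exact recipe.

```python
# Hedged sketch of a prototypical contrastive learning (PCL) style loss.
import torch
import torch.nn.functional as F


def prototypical_contrastive_loss(features, identity_ids, prototypes, temperature=0.07):
    """features: (B, D) encoder outputs; prototypes: (num_ids, D), one vector per identity."""
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / temperature  # cosine similarity to every prototype
    return F.cross_entropy(logits, identity_ids)      # pull toward own prototype, push from others


@torch.no_grad()
def momentum_update(prototypes, features, identity_ids, momentum=0.9):
    """Illustrative choice: keep each prototype as a moving average of its identity's features."""
    for feat, pid in zip(F.normalize(features, dim=1), identity_ids):
        prototypes[pid] = momentum * prototypes[pid] + (1 - momentum) * feat
```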
arXiv Detail & Related papers (2023-10-26T08:12:53Z) - Learning Deep Representations via Contrastive Learning for Instance
Retrieval [11.736450745549792]
This paper makes the first attempt to tackle the problem using instance-discrimination-based contrastive learning (CL).
In this work, we approach this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models.
arXiv Detail & Related papers (2022-09-28T04:36:34Z) - Representation Learning via Consistent Assignment of Views to Clusters [0.7614628596146599]
Consistent Assignment for Representation Learning (CARL) is an unsupervised learning method to learn visual representations.
By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes.
Unlike contemporary work on contrastive learning with deep clustering, CARL proposes to learn the set of general prototypes in an online fashion.
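To make the idea concrete, here is a rough sketch of consistently assigning two augmented views to a shared set of learnable prototypes; the symmetric cross-entropy between soft assignments is an illustrative choice, not necessarily CARL's exact objective.

```python
# Hedged sketch: two views of the same image should agree on prototype assignments.
import torch
import torch.nn.functional as F


def view_consistency_loss(z1, z2, prototypes, temperature=0.1):
    """z1, z2: (B, D) features of two augmented views; prototypes: (K, D), learned online."""
    p = F.normalize(prototypes, dim=1)
    a1 = F.softmax(F.normalize(z1, dim=1) @ p.t() / temperature, dim=1)  # view-1 assignments
    a2 = F.softmax(F.normalize(z2, dim=1) @ p.t() / temperature, dim=1)  # view-2 assignments
    # each view's assignment should predict the other's (stop-gradient on the target)
    loss12 = -(a2.detach() * torch.log(a1 + 1e-8)).sum(dim=1).mean()
    loss21 = -(a1.detach() * torch.log(a2 + 1e-8)).sum(dim=1).mean()
    return 0.5 * (loss12 + loss21)
```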
arXiv Detail & Related papers (2021-12-31T12:59:23Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model can no longer clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, the proposed model, TCC, is trained end-to-end, requiring no alternating steps.
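A common way to reparametrize categorical assignment variables for end-to-end training is the Gumbel-softmax relaxation; the sketch below assumes that choice, which may differ from TCC's actual formulation.

```python
# Hedged sketch: differentiable (relaxed one-hot) cluster assignments via Gumbel-softmax.
import torch
import torch.nn.functional as F


def soft_cluster_assignment(features, cluster_logits_head, tau=0.5, hard=False):
    """Return relaxed one-hot assignments so gradients flow through the sampling step."""
    logits = cluster_logits_head(features)               # (B, K) unnormalized assignment scores
    return F.gumbel_softmax(logits, tau=tau, hard=hard)  # (B, K)


# usage: assignments weight per-cluster pooling, so samples in the same cluster
# contribute to one unified cluster-level representation.
head = torch.nn.Linear(128, 10)
feats = torch.randn(32, 128)
assign = soft_cluster_assignment(feats, head)             # (32, 10)
cluster_repr = assign.t() @ feats                         # (10, 128) pooled per cluster
```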
arXiv Detail & Related papers (2021-06-03T14:59:59Z) - Supporting Clustering with Contrastive Learning [19.71262627336737]
Unsupervised clustering aims at discovering semantic categories of data according to some distance measured in the representation space.
Different categories often overlap with each other in the representation space at the beginning of the learning process.
We propose Supporting Clustering with Contrastive Learning -- a novel framework to leverage contrastive learning to promote better separation.
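For reference, here is a minimal instance-level contrastive (NT-Xent / InfoNCE) loss of the kind such frameworks build on: embeddings of two augmentations of the same example are pulled together, and all other in-batch embeddings act as negatives. The temperature and pairing scheme are illustrative assumptions.

```python
# Hedged sketch of the NT-Xent / InfoNCE instance-discrimination loss.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (B, D) embeddings of two augmented views of the same B examples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2B, D)
    sim = z @ z.t() / temperature                                # (2B, 2B) cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                   # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                         # positive pair = the other view
```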
arXiv Detail & Related papers (2021-03-24T03:05:17Z) - ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for
Semi-supervised Continual Learning [52.831894583501395]
Continual learning assumes the incoming data are fully labeled, which may not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)