Self-Supervised Learning for Large-Scale Unsupervised Image Clustering
- URL: http://arxiv.org/abs/2008.10312v2
- Date: Mon, 9 Nov 2020 16:14:04 GMT
- Title: Self-Supervised Learning for Large-Scale Unsupervised Image Clustering
- Authors: Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, Avi Mendelson
- Abstract summary: We propose a simple scheme for unsupervised classification based on self-supervised representations.
We evaluate the proposed approach with several recent self-supervised methods, showing that it achieves competitive results for ImageNet classification.
We suggest adding the unsupervised evaluation to a set of standard benchmarks for self-supervised learning.
- Score: 8.142434527938535
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unsupervised learning has always been appealing to machine learning
researchers and practitioners, allowing them to avoid an expensive and
complicated process of labeling the data. However, unsupervised learning of
complex data is challenging, and even the best approaches show much weaker
performance than their supervised counterparts. Self-supervised deep learning
has become a strong instrument for representation learning in computer vision.
However, those methods have not been evaluated in a fully unsupervised setting.
In this paper, we propose a simple scheme for unsupervised classification based
on self-supervised representations. We evaluate the proposed approach with
several recent self-supervised methods, showing that it achieves competitive
results for ImageNet classification (39% accuracy on ImageNet with 1000
clusters and 46% with overclustering). We suggest adding the unsupervised
evaluation to a set of standard benchmarks for self-supervised learning. The
code is available at https://github.com/Randl/kmeans_selfsuper
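The scheme lends itself to a compact illustration: cluster frozen self-supervised embeddings with k-means, then score the clustering against ground-truth labels by Hungarian matching of clusters to classes. The sketch below is not the repository's code; the random features and the 10-class setup are placeholder assumptions standing in for embeddings produced by a pretrained self-supervised encoder (e.g. SimCLR or BYOL).

    # Minimal sketch: k-means on (placeholder) self-supervised features,
    # scored by clustering accuracy via Hungarian matching.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    features = rng.normal(size=(5000, 128))   # stand-in for frozen encoder outputs
    labels = rng.integers(0, 10, size=5000)   # stand-in for ground-truth classes

    n_clusters = 10  # e.g. 1000 for ImageNet; larger than n_classes for overclustering
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit_predict(features)

    # Contingency matrix: counts of (cluster, class) co-occurrences.
    n_classes = int(labels.max()) + 1
    contingency = np.zeros((n_clusters, n_classes), dtype=np.int64)
    for c, y in zip(cluster_ids, labels):
        contingency[c, y] += 1

    # Hungarian matching: the one-to-one cluster-to-class assignment that
    # maximizes the number of correctly assigned samples.
    row, col = linear_sum_assignment(contingency, maximize=True)
    accuracy = contingency[row, col].sum() / labels.size
    print(f"clustering accuracy: {accuracy:.3f}")

    # With overclustering (n_clusters > n_classes), a common alternative is to
    # map each cluster to its majority class instead:
    #   accuracy = contingency.max(axis=1).sum() / labels.size

This only illustrates the evaluation protocol described in the abstract; the reported 39%/46% ImageNet numbers come from real encoder features, not from a toy setup like this one.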
Related papers
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach which leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
- How Well Do Self-Supervised Methods Perform in Cross-Domain Few-Shot Learning? [17.56019071385342]
Cross-domain few-shot learning (CDFSL) remains a largely unsolved problem in the area of computer vision.
We investigate the role of self-supervised representation learning in the context of CDFSL via a thorough evaluation of existing methods.
We find that representations extracted by self-supervised methods are more robust than those of their supervised counterparts.
arXiv Detail & Related papers (2022-02-18T04:03:53Z)
- Unsupervised Clustering Active Learning for Person Re-identification [5.705895028045853]
Unsupervised re-id methods rely on unlabeled data to train models.
We present an Unsupervised Clustering Active Learning (UCAL) deep learning approach for re-id.
It incrementally discovers representative centroid pairs.
arXiv Detail & Related papers (2021-12-26T02:54:35Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Hybrid Dynamic Contrast and Probability Distillation for Unsupervised Person Re-Id [109.1730454118532]
Unsupervised person re-identification (Re-Id) has attracted increasing attention due to its practical applications in real-world video surveillance systems.
We present the hybrid dynamic cluster contrast and probability distillation algorithm.
It formulates unsupervised Re-Id as a unified local-to-global dynamic contrastive learning and self-supervised probability distillation framework.
arXiv Detail & Related papers (2021-09-29T02:56:45Z)
- How Well Do Self-Supervised Models Transfer? [92.16372657233394]
We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks.
We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition.
No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved.
arXiv Detail & Related papers (2020-11-26T16:38:39Z)
- Unsupervised Image Classification for Deep Representation Learning [42.09716669386924]
We propose an unsupervised image classification framework without using embedding clustering.
Experiments on the ImageNet dataset demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-06-20T02:57:06Z)
- Self-Supervised Viewpoint Learning From Image Collections [116.56304441362994]
We propose a novel learning framework which incorporates an analysis-by-synthesis paradigm to reconstruct images in a viewpoint-aware manner.
We show that our approach performs competitively with fully supervised approaches for several object categories such as human faces, cars, buses, and trains.
arXiv Detail & Related papers (2020-04-03T22:01:41Z)