Weakly Supervised Contrastive Learning
- URL: http://arxiv.org/abs/2110.04770v1
- Date: Sun, 10 Oct 2021 12:03:52 GMT
- Title: Weakly Supervised Contrastive Learning
- Authors: Mingkai Zheng, Fei Wang, Shan You, Chen Qian, Changshui Zhang,
Xiaogang Wang, Chang Xu
- Abstract summary: We introduce a weakly supervised contrastive learning framework (WCL) to tackle the class collision problem caused by instance discrimination.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
- Score: 68.47096022526927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised visual representation learning has gained much attention from
the computer vision community because of the recent achievement of contrastive
learning. Most existing contrastive learning frameworks adopt instance
discrimination as the pretext task, which treats every single instance as a
different class. However, such a method inevitably causes class collision
problems, which hurt the quality of the learned representation. Motivated by
this observation, we introduce a weakly supervised contrastive learning
framework (WCL) to tackle this issue. Specifically, our proposed
framework is based on two projection heads, one of which will perform the
regular instance discrimination task. The other head will use a graph-based
method to explore similar samples and generate a weak label, then perform a
supervised contrastive learning task based on the weak label to pull the
similar images closer. We further introduce a K-Nearest Neighbor based
multi-crop strategy to expand the number of positive samples. Extensive
experimental results demonstrate WCL improves the quality of self-supervised
representations across different datasets. Notably, we get a new
state-of-the-art result for semi-supervised learning. With only 1% and 10%
labeled examples, WCL achieves 65% and 72% ImageNet Top-1 Accuracy using
ResNet50, which is even higher than SimCLRv2 with ResNet101.
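To make the two-head design concrete, here is a minimal PyTorch sketch of the core losses, based on our reading of the abstract rather than the authors' implementation: the graph construction (k nearest neighbours plus connected components), the helper names (`info_nce`, `weak_labels_from_knn`, `sup_con`), and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the WCL two-head idea (our reading of the abstract, not the
# authors' code). Head 1 performs standard instance discrimination (InfoNCE);
# head 2 derives weak labels from a k-nearest-neighbour graph over the batch
# embeddings and applies a supervised contrastive loss to them.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Instance discrimination: each image's other view is its only positive."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                  # (N, N) view-to-view similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

@torch.no_grad()
def weak_labels_from_knn(z, k=2):
    """Link each sample to its k nearest neighbours and label the connected
    components of the resulting graph as weak classes."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t()
    sim.fill_diagonal_(-1.0)                    # exclude self-matches
    nn_idx = sim.topk(k, dim=1).indices         # (N, k) neighbour indices
    n = z.size(0)
    adj = torch.zeros(n, n, dtype=torch.bool, device=z.device)
    rows = torch.arange(n, device=z.device).repeat_interleave(k)
    adj[rows, nn_idx.reshape(-1)] = True
    adj = adj | adj.t()                         # symmetrise the graph
    labels = torch.full((n,), -1, dtype=torch.long, device=z.device)
    current = 0
    for i in range(n):                          # depth-first component labelling
        if labels[i] >= 0:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            for m in torch.nonzero(adj[j]).flatten().tolist():
                if labels[m] < 0:
                    labels[m] = current
                    stack.append(m)
        current += 1
    return labels

def sup_con(z, labels, tau=0.1):
    """Supervised contrastive loss: pull together samples sharing a weak label."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, -1e9),
                                     dim=1, keepdim=True)
    per_anchor = (pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()
```

In a training step, both heads would share one backbone: the first head's projections feed `info_nce`, the second head's feed `weak_labels_from_knn` and then `sup_con`, and the two losses are summed; the paper's KNN-based multi-crop strategy would further enlarge each weak class with extra positive crops.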
Related papers
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484]
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing.
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
arXiv Detail & Related papers (2024-03-11T15:33:32Z)
- CUCL: Codebook for Unsupervised Continual Learning [129.91731617718781]
The focus of this study is on Unsupervised Continual Learning (UCL), as it presents an alternative to Supervised Continual Learning.
We propose a method named Codebook for Unsupervised Continual Learning (CUCL), which encourages the model to learn discriminative features that complete the class boundary.
Our method significantly boosts the performance of supervised and unsupervised methods.
arXiv Detail & Related papers (2023-11-25T03:08:50Z)
- Semantic Positive Pairs for Enhancing Visual Representation Learning of Instance Discrimination methods [4.680881326162484]
Self-supervised learning (SSL) algorithms based on instance discrimination have shown promising results.
We propose an approach to identify those images with similar semantic content and treat them as positive instances.
We run experiments on three benchmark datasets: ImageNet, STL-10 and CIFAR-10 with different instance discrimination SSL approaches.
arXiv Detail & Related papers (2023-06-28T11:47:08Z)
- Unsupervised Visual Representation Learning by Synchronous Momentum Grouping [47.48803765951601]
A group-level contrastive visual representation learning method that surpasses vanilla supervised learning on ImageNet.
We conduct exhaustive experiments to show that SMoG surpasses the current state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2022-07-13T13:04:15Z)
- Clustering by Maximizing Mutual Information Across Views [62.21716612888669]
We propose a novel framework for image clustering that incorporates joint representation learning and clustering.
Our method significantly outperforms state-of-the-art single-stage clustering methods across a variety of image datasets.
arXiv Detail & Related papers (2021-07-24T15:36:49Z)
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- Train a One-Million-Way Instance Classifier for Unsupervised Visual Representation Learning [45.510042484456854]
This paper presents a simple unsupervised visual representation learning method with a pretext task of discriminating all images in a dataset using a parametric, instance-level classifier.
The overall framework is a replica of a supervised classification model, where semantic classes (e.g., dog, bird, and ship) are replaced by instance IDs.
Scaling up the classification task from thousands of semantic labels to millions of instance labels brings specific challenges, including 1) the large-scale softmax classifier; 2) slow convergence due to infrequent visits to each instance sample; and 3) a massive number of negative classes that can be noisy.
arXiv Detail & Related papers (2021-02-09T14:44:18Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
CLD sets a new state-of-the-art on self-supervision, semi-supervision, and transfer learning benchmarks, beating MoCo v2 and SimCLR on every reported metric.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recently advanced unsupervised learning approaches use a siamese-like framework to compare two "views" of the same image for learning representations.
This work aims to bring the concept of distance in label space into unsupervised learning, making the model aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that with our solution, Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust, and generalized representations from the transformed input and the corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
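Since this entry hinges on soft similarity targets, a short sketch may help. The snippet below is an assumption-laden illustration of the image-mixture idea; the reversed-batch pairing and the function name `unmix_batch` are ours, not the authors'.

```python
# Minimal sketch of the image-mixture idea (our reading of Un-Mix, not the
# authors' code): images in a batch are mixed pairwise, and the mixing
# coefficient lam becomes a soft similarity target for the contrastive loss.
import torch

def unmix_batch(x, alpha=1.0):
    """Mix each image with its reversed-batch partner via mixup."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.arange(x.size(0) - 1, -1, -1, device=x.device)  # pair i with N-1-i
    mixed = lam * x + (1.0 - lam) * x[perm]
    return mixed, perm, lam

# A loss on the mixed view would then be weighted softly, e.g.
#   loss = lam * contrastive(mixed, x) + (1 - lam) * contrastive(mixed, x[perm])
# so similarity targets are no longer strictly 0/1.
```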
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.