Semi-Supervised Learning of Visual Features by Non-Parametrically
Predicting View Assignments with Support Samples
- URL: http://arxiv.org/abs/2104.13963v1
- Date: Wed, 28 Apr 2021 18:44:07 GMT
- Authors: Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Armand
Joulin, Nicolas Ballas, Michael Rabbat
- Score: 45.32502589149226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel method of learning by predicting view assignments
with support samples (PAWS). The method trains a model to minimize a
consistency loss, which ensures that different views of the same unlabeled
instance are assigned similar pseudo-labels. The pseudo-labels are generated
non-parametrically, by comparing the representations of the image views to
those of a set of randomly sampled labeled images. The distance between the
view representations and labeled representations is used to provide a weighting
over class labels, which we interpret as a soft pseudo-label. By
non-parametrically incorporating labeled samples in this way, PAWS extends the
distance-metric loss used in self-supervised methods such as BYOL and SwAV to
the semi-supervised setting. Despite the simplicity of the approach, PAWS
outperforms other semi-supervised methods across architectures, setting a new
state-of-the-art for a ResNet-50 on ImageNet trained with either 10% or 1% of
the labels, reaching 75.5% and 66.5% top-1 respectively. PAWS requires 4x to
12x less training than the previous best methods.
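The pseudo-labeling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are invented, and the cosine-similarity-with-temperature formulation is an assumption about the distance measure; the key idea from the abstract is that a softmax over similarities to labeled support samples weights their class labels into a soft pseudo-label, and a consistency loss ties together the pseudo-labels of two views of the same image.

```python
import numpy as np

def soft_pseudo_labels(view_emb, support_emb, support_labels, num_classes, tau=0.1):
    """Soft pseudo-labels for unlabeled views, computed non-parametrically
    by comparing view representations to a support set of labeled samples.
    Assumes cosine similarity with temperature `tau` (an illustrative choice)."""
    # l2-normalize so the dot product is a cosine similarity
    v = view_emb / np.linalg.norm(view_emb, axis=1, keepdims=True)
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    sims = (v @ s.T) / tau                                # (n_views, n_support)
    # softmax over the support set -> weighting over support samples
    w = np.exp(sims - sims.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    one_hot = np.eye(num_classes)[support_labels]         # (n_support, num_classes)
    return w @ one_hot                                    # (n_views, num_classes)

def consistency_loss(p_anchor, p_target):
    """Cross-entropy between the pseudo-labels of two views of the same image."""
    return -np.mean(np.sum(p_target * np.log(p_anchor + 1e-12), axis=1))
```

A view whose embedding sits close to the class-0 support samples receives a pseudo-label concentrated on class 0; minimizing the consistency loss then pushes the other view of the same image toward that same distribution.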
Related papers
- VLM-CPL: Consensus Pseudo Labels from Vision-Language Models for Human Annotation-Free Pathological Image Classification [23.08368823707528]
We present a novel human annotation-free method for pathology image classification by leveraging pre-trained Vision-Language Models (VLMs).
We introduce VLM-CPL, a novel approach based on consensus pseudo labels that integrates two noisy label filtering techniques with a semi-supervised learning strategy.
Experimental results showed that our method obtained an accuracy of 87.1% and 95.1% on the HPH and LC25K datasets, respectively.
arXiv Detail & Related papers (2024-03-23T13:24:30Z)
- Semi-Supervised Learning for hyperspectral images by non parametrically predicting view assignment [25.198550162904713]
Hyperspectral image (HSI) classification is gaining momentum because of the rich spectral information inherent in the images.
Recently, unlabeled samples have also been leveraged in self-supervised and semi-supervised settings to train deep learning models effectively with minimal labelled samples.
In this work, we leverage the idea of semi-supervised learning to assist the discriminative self-supervised pretraining of the models.
arXiv Detail & Related papers (2023-06-19T14:13:56Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning [116.91819311885166]
We propose a hierarchical semantic alignment strategy that expands the views generated by a single image to cross-samples and multi-level representations.
Our method, termed as CsMl, has the ability to integrate multi-level visual representations across samples in a robust way.
arXiv Detail & Related papers (2020-12-04T17:26:24Z)
- Center-wise Local Image Mixture For Contrastive Representation Learning [37.806687971373954]
Contrastive learning based on instance discrimination trains a model to discriminate different transformations of the anchor sample from other samples.
This paper proposes a new kind of contrastive learning method, named CLIM, which uses positives from other samples in the dataset.
We reach 75.5% top-1 accuracy with linear evaluation over ResNet-50, and 59.3% top-1 accuracy when fine-tuned with only 1% labels.
arXiv Detail & Related papers (2020-11-05T08:20:31Z)
- Unsupervised Representation Learning by Invariance Propagation [34.53866045440319]
In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations.
With a ResNet-50 as the backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear classification and 78.2% top-5 accuracy fine-tuning on only 1% labels.
We also achieve state-of-the-art performance on other downstream tasks, including linear classification on Places205 and Pascal VOC, and transfer learning on small scale datasets.
arXiv Detail & Related papers (2020-10-07T13:00:33Z)
- CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple yet effective method named contrasting shifted instances (CSI).
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
arXiv Detail & Related papers (2020-07-16T08:32:56Z)
- Enhancing Few-Shot Image Classification with Unlabelled Examples [18.03136114355549]
We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance.
Our approach uses a regularized neural adaptive feature extractor to improve test-time classification accuracy with unlabelled data.
arXiv Detail & Related papers (2020-06-17T05:42:47Z)
- MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network [51.84251358009803]
We present a novel self-supervised learning approach for conditional generative adversarial networks (GANs) under a semi-supervised setting.
We perform augmentation by randomly sampling sensible labels from the label space of the few labelled examples available.
Our method surpasses the baseline with only 20% of the labelled examples used to train the baseline.
arXiv Detail & Related papers (2020-06-11T17:14:55Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recently advanced unsupervised learning approaches use a siamese-like framework that compares two "views" of the same image to learn representations.
This work introduces the notion of distance in label space into unsupervised learning, making the model aware of the soft degree of similarity between positive and negative pairs.
Despite its conceptual simplicity, we show empirically that our solution, unsupervised image mixtures (Un-Mix), learns subtler, more robust, and more generalized representations from the transformed inputs and the corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.