Self-Supervised Class-Cognizant Few-Shot Classification
- URL: http://arxiv.org/abs/2202.08149v1
- Date: Tue, 15 Feb 2022 15:28:06 GMT
- Title: Self-Supervised Class-Cognizant Few-Shot Classification
- Authors: Ojas Kishore Shirekar, Hadi Jamali-Rad
- Abstract summary: This paper focuses on unsupervised learning from an abundance of unlabeled data.
We extend a recent study on adopting contrastive learning for self-supervised pre-training by incorporating class-level cognizance.
- Score: 2.538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning is argued to be the dark matter of human intelligence.
To build in this direction, this paper focuses on unsupervised learning from an
abundance of unlabeled data followed by few-shot fine-tuning on a downstream
classification task. To this end, we extend a recent study on adopting
contrastive learning for self-supervised pre-training by incorporating
class-level cognizance through iterative clustering and re-ranking, and by
expanding the contrastive optimization loss to account for it. Our
experiments in both standard and cross-domain scenarios demonstrate that, to
our knowledge, we set a new state-of-the-art (SoTA) in the (5-way, 1- and
5-shot) settings of the standard mini-ImageNet benchmark as well as the
(5-way, 5- and 20-shot) settings of the cross-domain CDFSL benchmark. Our
code and experiments
can be found in our GitHub repository: https://github.com/ojss/c3lr.
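As a rough illustration of the class-cognizant idea, the sketch below clusters embeddings into pseudo-classes and treats same-cluster samples as extra contrastive positives. This is our own simplification, not the authors' code: the cluster count and temperature are assumptions, and the re-ranking step is omitted (see the linked repository for the actual implementation).

```python
# Minimal sketch of class-level cognizance in contrastive pre-training:
# cluster the embeddings into pseudo-classes, then treat same-cluster samples
# as extra positives on top of the usual augmented-view pairs. The authors'
# re-ranking step is omitted; cluster count and temperature are assumptions.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def cluster_pseudo_labels(embeddings: torch.Tensor, n_clusters: int = 10) -> torch.Tensor:
    """Iterative clustering step: assign each embedding a pseudo class label."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        embeddings.detach().cpu().numpy())
    return torch.as_tensor(labels, device=embeddings.device)

def class_cognizant_contrastive_loss(z: torch.Tensor,
                                     pseudo_labels: torch.Tensor,
                                     temperature: float = 0.1) -> torch.Tensor:
    """Supervised-contrastive-style loss over pseudo cluster labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = ((pseudo_labels[:, None] == pseudo_labels[None, :]) & ~self_mask).float()
    # per-anchor log-probability over all non-self pairs
    log_prob = (sim - torch.logsumexp(
        sim.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    ).masked_fill(self_mask, 0.0)   # zero out the (excluded) diagonal
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss[pos.sum(1) > 0].mean()  # skip anchors without positives
```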
Related papers
- BECLR: Batch Enhanced Contrastive Few-Shot Learning [1.450405446885067]
Unsupervised few-shot learning aspires to bridge the gap between machine and human learning by discarding the reliance on annotations at training time.
We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space.
We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage, as sketched below.
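As a loose illustration of what a dynamic clustered memory can look like (our own simplification: slot count, dimensionality, and momentum are assumptions, and the actual DyCE update is more involved):

```python
# Illustrative sketch of a clustered memory in the spirit of DyCE: a fixed
# number of prototype slots, each updated by an exponential moving average of
# the embeddings assigned to it. All hyperparameters here are assumptions.
import torch
import torch.nn.functional as F

class ClusteredMemory:
    def __init__(self, num_slots: int = 128, dim: int = 64, momentum: float = 0.9):
        self.protos = F.normalize(torch.randn(num_slots, dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, z: torch.Tensor) -> torch.Tensor:
        """Assign each embedding to its nearest prototype and EMA-update it.
        Returns the slot assignment for each embedding."""
        z = F.normalize(z, dim=1)
        assign = (z @ self.protos.t()).argmax(dim=1)  # nearest prototype
        for slot in assign.unique():
            mean = z[assign == slot].mean(dim=0)
            self.protos[slot] = F.normalize(
                self.momentum * self.protos[slot] + (1 - self.momentum) * mean,
                dim=0)
        return assign
```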
arXiv Detail & Related papers (2024-02-04T10:52:43Z)
- Self-Attention Message Passing for Contrastive Few-Shot Learning [2.1485350418225244]
Unsupervised few-shot learning is the pursuit of bridging the gap between machines and humans.
We propose a novel self-attention based message passing contrastive learning approach (coined SAMP-CLR) for U-FSL pre-training.
We also propose an optimal transport (OT) based fine-tuning strategy (called OpT-Tune) to efficiently induce task awareness into our novel end-to-end unsupervised few-shot classification framework (SAMPTransfer).
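A hedged sketch of the optimal-transport ingredient: Sinkhorn iterations yield a soft transport plan between query embeddings and support prototypes, which can serve as task-aware soft assignments. The regularization strength and iteration count below are assumptions, not the paper's settings.

```python
# Entropic optimal transport via Sinkhorn iterations, with uniform marginals,
# for an (n_queries, n_prototypes) cost matrix. eps and iters are assumptions.
import torch

def sinkhorn(cost: torch.Tensor, eps: float = 0.05, iters: int = 50) -> torch.Tensor:
    n_q, n_s = cost.shape
    K = torch.exp(-cost / eps)
    u = torch.full((n_q,), 1.0 / n_q)
    v = torch.full((n_s,), 1.0 / n_s)
    r = torch.full((n_q,), 1.0 / n_q)   # uniform query marginal
    c = torch.full((n_s,), 1.0 / n_s)   # uniform prototype marginal
    for _ in range(iters):
        u = r / (K @ v)
        v = c / (K.t() @ u)
    return u[:, None] * K * v[None, :]  # transport plan

# Hypothetical usage: cost = torch.cdist(query_embeds, prototypes) ** 2
# plan = sinkhorn(cost); soft_labels = plan / plan.sum(dim=1, keepdim=True)
```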
arXiv Detail & Related papers (2022-10-12T15:57:44Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach which leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
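A rough sketch of the two supervision signals, as we read them (not the authors' code): a self-training loss on confident pseudo-labels plus a masked-image loss on raw pixels. The confidence threshold is an assumption, and the real method additionally involves an EMA teacher and feature-alignment terms.

```python
# Sketch of MUST-style supervision: (i) cross-entropy on confident
# pseudo-labels, (ii) masked-image regression on raw patches.
import torch
import torch.nn.functional as F

def must_style_loss(logits, teacher_probs, pred_patches, target_patches,
                    mask, conf_threshold: float = 0.7):
    """logits/teacher_probs: (N, C); *_patches: (N, P, D); mask: (N, P) bool."""
    conf, pseudo = teacher_probs.max(dim=1)
    keep = conf > conf_threshold  # self-train only on confident pseudo-labels
    ce = F.cross_entropy(logits[keep], pseudo[keep]) if keep.any() else logits.sum() * 0
    mim = F.mse_loss(pred_patches[mask], target_patches[mask])  # masked patches
    return ce + mim
```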
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
- Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
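The entropy-based split at the heart of EUMS can be sketched as follows (the clean/noisy ratio is an assumption; EUMS further re-labels the held-out noisy portion, e.g. via clustering):

```python
# Split pseudo-labeled samples by prediction entropy: low-entropy samples keep
# their pseudo-labels for self-training, high-entropy ones are held out.
import torch

def entropy_split(probs: torch.Tensor, clean_ratio: float = 0.5):
    """probs: (N, C) softmax outputs for N unlabeled pixels/samples."""
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    k = int(clean_ratio * len(ent))
    order = ent.argsort()                       # ascending entropy
    clean_idx, noisy_idx = order[:k], order[k:]
    return clean_idx, noisy_idx, probs.argmax(dim=1)  # indices + hard labels
```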
arXiv Detail & Related papers (2021-12-03T13:31:59Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT, and WebVision datasets, observing that Prototypical obtains substantial improvements over the state of the art.
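The mechanism is the classic nearest-prototype rule; a minimal sketch (assuming embeddings are precomputed and every class appears in the labeled set):

```python
# Prototypical classification: class prototypes are mean embeddings, and
# predictions go to the nearest prototype, so nothing beyond the embedding
# network is fit.
import torch

def prototypes(embeds: torch.Tensor, labels: torch.Tensor, n_classes: int):
    """Assumes every class 0..n_classes-1 has at least one labeled sample."""
    return torch.stack([embeds[labels == c].mean(dim=0) for c in range(n_classes)])

def predict(query: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    return (-torch.cdist(query, protos)).argmax(dim=1)  # nearest prototype wins
```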
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
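As a hedged stand-in for the part-based scheme (the paper's part discovery is more structured than random cropping; crop sizes and count here are assumptions), same-image local crops can act as contrastive positives:

```python
# Generate small, part-like crops of one image; crops of the same image are
# treated as positives in a downstream contrastive loss.
import torch
import torchvision.transforms as T

part_crop = T.Compose([
    T.RandomResizedCrop(96, scale=(0.2, 0.5)),  # small, part-like crops
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

def make_part_views(pil_image, n_parts: int = 4) -> torch.Tensor:
    """Return n_parts crops of one PIL image, stacked as (n_parts, 3, 96, 96)."""
    return torch.stack([part_crop(pil_image) for _ in range(n_parts)])
```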
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in long-tailed scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
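Shifted batch normalization targets the mismatch of BN statistics when the sampler changes between training stages. A common recalibration recipe in that spirit (our simplification, not the paper's exact formulation) re-estimates the running statistics under the new sampler with all learned weights frozen:

```python
# Re-estimate BatchNorm running statistics under a new data sampler while
# keeping learned weights untouched (forward passes only, no gradients).
import torch

@torch.no_grad()
def recalibrate_bn(model: torch.nn.Module, loader, n_batches: int = 100):
    model.train()  # train mode so BN layers update running stats
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
    for i, (x, _) in enumerate(loader):
        if i >= n_batches:
            break
        model(x)   # forward pass only; statistics accumulate as a side effect
    model.eval()
```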
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Few-Shot Segmentation Without Meta-Learning: A Good Transductive Inference Is All You Need? [34.95314059362982]
We show that the way inference is performed in few-shot segmentation tasks has a substantial effect on performance.
We introduce a transductive inference for a given query image, leveraging the statistics of its unlabeled pixels.
We show that our method brings about 5% and 6% improvements over the state-of-the-art, in the 5- and 10-shot scenarios.
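A sketch of such a transductive objective (loss weights and the foreground-proportion prior below are assumptions): tune a per-task classifier on frozen features with support cross-entropy, the entropy of query-pixel predictions, and a KL term on the predicted foreground share.

```python
# Transductive few-shot segmentation loss combining supervised, entropy, and
# proportion-matching terms over a task's support and query pixels.
import torch
import torch.nn.functional as F

def transductive_loss(logits_s, mask_s, logits_q, prior_fg: float = 0.4,
                      w_ent: float = 1.0, w_kl: float = 1.0):
    """logits_*: (N, 2, H, W) classifier outputs; mask_s: (N, H, W) int labels."""
    ce = F.cross_entropy(logits_s, mask_s)            # fit the support pixels
    p_q = logits_q.softmax(dim=1)
    ent = -(p_q * p_q.clamp_min(1e-8).log()).sum(dim=1).mean()  # confident queries
    fg = p_q[:, 1].mean()                             # predicted foreground share
    kl = fg * (fg / prior_fg).log() + (1 - fg) * ((1 - fg) / (1 - prior_fg)).log()
    return ce + w_ent * ent + w_kl * kl
```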
arXiv Detail & Related papers (2020-12-11T07:11:19Z)
- SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
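The second (clustering) step can be sketched as follows: soft cluster assignments of an image and its mined nearest neighbor should agree, while an entropy term keeps cluster usage balanced. The entropy weight of 5.0 follows the paper's reported default, taken here as an assumption.

```python
# SCAN-style clustering objective: neighbor-consistency plus an entropy
# regularizer that discourages collapsing all images into one cluster.
import torch

def scan_loss(p_anchor: torch.Tensor, p_neighbor: torch.Tensor,
              entropy_weight: float = 5.0) -> torch.Tensor:
    """p_*: (N, C) softmax cluster assignments for anchors and their neighbors."""
    consistency = -(p_anchor * p_neighbor).sum(dim=1).clamp_min(1e-8).log().mean()
    mean_p = p_anchor.mean(dim=0)                   # cluster usage across batch
    neg_entropy = (mean_p * mean_p.clamp_min(1e-8).log()).sum()
    return consistency + entropy_weight * neg_entropy
```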
arXiv Detail & Related papers (2020-05-25T18:12:33Z)