Few-Shot Learning with Part Discovery and Augmentation from Unlabeled
Images
- URL: http://arxiv.org/abs/2105.11874v1
- Date: Tue, 25 May 2021 12:22:11 GMT
- Title: Few-Shot Learning with Part Discovery and Augmentation from Unlabeled
Images
- Authors: Wentao Chen, Chenyang Si, Wei Wang, Liang Wang, Zilei Wang, Tieniu Tan
- Abstract summary: We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24% under the 5-way 1-shot and 5-way 5-shot settings, respectively.
- Score: 79.34600869202373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning is a challenging task since only a few instances are given
for recognizing an unseen class. One way to alleviate this problem is to
acquire a strong inductive bias via meta-learning on similar tasks. In this
paper, we show that such inductive bias can be learned from a flat collection
of unlabeled images, and instantiated as transferable representations among
seen and unseen classes. Specifically, we propose a novel part-based
self-supervised representation learning scheme to learn transferable
representations by maximizing the similarity of an image to its discriminative
part. To mitigate overfitting in few-shot classification caused by data
scarcity, we further propose a part augmentation strategy by retrieving extra
images from a base dataset. We conduct systematic studies on miniImageNet and
tieredImageNet benchmarks. Remarkably, our method yields impressive results,
outperforming the previous best unsupervised methods by 7.74% and 9.24% under
the 5-way 1-shot and 5-way 5-shot settings, respectively, which is comparable
with state-of-the-art supervised methods.
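The abstract describes learning transferable representations by maximizing the similarity of an image to its discriminative part. As a rough illustration only, the sketch below shows what such an objective could look like as an InfoNCE-style loss in PyTorch; the function name, the temperature value, and the assumption that the image and its part crop share one encoder are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of an image-to-part similarity objective (not the authors' code).
import torch
import torch.nn.functional as F

def part_contrastive_loss(image_emb: torch.Tensor,
                          part_emb: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: each image is pulled toward its own discriminative
    part and pushed away from the parts of other images in the batch.

    image_emb, part_emb: (batch, dim) embeddings, assumed to come from the
    same backbone applied to the full image and to a part crop.
    """
    image_emb = F.normalize(image_emb, dim=1)
    part_emb = F.normalize(part_emb, dim=1)
    # Pairwise cosine similarities between every image and every part.
    logits = image_emb @ part_emb.t() / temperature        # (batch, batch)
    # Diagonal entries (an image and its own part) are the positives.
    targets = torch.arange(image_emb.size(0), device=image_emb.device)
    return F.cross_entropy(logits, targets)

# Usage sketch with random embeddings standing in for encoder outputs.
imgs, parts = torch.randn(32, 128), torch.randn(32, 128)
loss = part_contrastive_loss(imgs, parts)
```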
Related papers
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484]
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing.
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
arXiv Detail & Related papers (2024-03-11T15:33:32Z)
- CUCL: Codebook for Unsupervised Continual Learning [129.91731617718781]
The focus of this study is on Unsupervised Continual Learning (UCL), as it presents an alternative to Supervised Continual Learning.
We propose a method named Codebook for Unsupervised Continual Learning (CUCL), which encourages the model to learn discriminative features that complete the class boundary.
Our method significantly boosts the performance of both supervised and unsupervised methods.
arXiv Detail & Related papers (2023-11-25T03:08:50Z)
- PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification [19.93681871684493]
We propose our method called PrototypeFormer, which aims to significantly advance traditional few-shot image classification approaches.
We utilize a transformer architecture to build a prototype extraction module, aiming to extract class representations that are more discriminative for few-shot classification.
Despite its simplicity, the method performs remarkably well, with no bells and whistles.
arXiv Detail & Related papers (2023-10-05T12:56:34Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations [87.72779294717267]
Using the nearest neighbor as the positive in contrastive losses significantly improves performance on ImageNet classification.
We demonstrate empirically that our method is less reliant on complex data augmentations.
arXiv Detail & Related papers (2021-04-29T17:56:08Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases [34.02639091680309]
Recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class.
We demonstrate that approaches like MoCo and PIRL learn occlusion-invariant representations.
We also demonstrate that these approaches obtain further gains from access to a clean, object-centric training dataset like ImageNet.
arXiv Detail & Related papers (2020-07-28T00:11:31Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)