Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning
- URL: http://arxiv.org/abs/2303.00996v1
- Date: Thu, 2 Mar 2023 06:10:13 GMT
- Title: Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning
- Authors: Huiwon Jang, Hankook Lee, Jinwoo Shin
- Abstract summary: We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo) for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
- Score: 72.3506897990639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised meta-learning aims to learn generalizable knowledge across a
distribution of tasks constructed from unlabeled data. Here, the main challenge
is how to construct diverse tasks for meta-learning without label information;
recent works have proposed, e.g., generating pseudo-labels via pretrained
representations or creating synthetic samples via generative models. However,
such task construction strategies are fundamentally limited by their heavy
reliance on immutable pseudo-labels during meta-learning and on the quality of
the pretrained representations or generated samples. To overcome these limitations, we
propose a simple yet effective unsupervised meta-learning framework, coined
Pseudo-supervised Contrast (PsCo), for few-shot classification. We are inspired
by the recent self-supervised learning literature; PsCo utilizes a momentum
network and a queue of previous batches to improve pseudo-labeling and
construct diverse tasks in a progressive manner. Our extensive experiments
demonstrate that PsCo outperforms existing unsupervised meta-learning methods
under various in-domain and cross-domain few-shot classification benchmarks. We
also validate that PsCo is easily scalable to a large-scale benchmark, while
recent prior-art meta-learning schemes are not.
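The abstract names the key ingredients (a momentum network and a queue of previous batches) without spelling out the mechanics. Below is a minimal, hypothetical PyTorch sketch of how those pieces could fit together to build pseudo-supervised few-shot tasks; the class/function names, the top-k matching rule, and the prototype loss are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of PsCo-style pseudo-supervised task construction
# (illustrative names and matching rule; not the authors' released code).
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(online, momentum_net, m=0.99):
    # MoCo-style EMA: the momentum (key) network slowly tracks the online encoder.
    for p, mp in zip(online.parameters(), momentum_net.parameters()):
        mp.data.mul_(m).add_(p.data, alpha=1.0 - m)


class FIFOQueue:
    # Stores L2-normalized momentum embeddings of previous batches.
    def __init__(self, dim=128, size=4096):
        self.feats = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys):
        # Overwrite the oldest entries with the newest momentum embeddings.
        idx = (self.ptr + torch.arange(keys.shape[0])) % self.feats.shape[0]
        self.feats[idx] = keys
        self.ptr = (self.ptr + keys.shape[0]) % self.feats.shape[0]


def build_pseudo_task(queries, queue_feats, shots=4):
    # Each query spawns a pseudo-class: its top-`shots` nearest queue entries
    # become the support set and inherit the query's index as a pseudo-label.
    sims = queries @ queue_feats.t()                       # (B, Q) cosine similarities
    topk = sims.topk(shots, dim=1).indices                 # (B, shots)
    support = queue_feats[topk].reshape(-1, queries.shape[1])
    labels = torch.arange(queries.shape[0]).repeat_interleave(shots)
    return support, labels


def pseudo_task_loss(queries, support, labels, temperature=0.2):
    # Prototype per pseudo-class, then cross-entropy over query-prototype similarities.
    protos = torch.zeros(queries.shape[0], queries.shape[1])
    protos.index_add_(0, labels, support)
    logits = queries @ F.normalize(protos, dim=1).t() / temperature
    return F.cross_entropy(logits, torch.arange(queries.shape[0]))


# Usage with random embeddings standing in for encoder outputs:
queue = FIFOQueue()
queries = F.normalize(torch.randn(32, 128), dim=1)        # online-encoder embeddings
support, labels = build_pseudo_task(queries, queue.feats)
loss = pseudo_task_loss(queries, support, labels)
queue.enqueue(F.normalize(torch.randn(32, 128), dim=1))   # momentum-encoder keys
print(float(loss))
```

The point this sketch tries to capture is that pseudo-labels are re-derived from fresh momentum embeddings at every step, rather than fixed once by a pretrained model, which matches the abstract's claim of improving pseudo-labeling progressively.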
Related papers
- Learning Transferable Adversarial Robust Representations via Multi-view Consistency [57.73073964318167]
We propose a novel meta-adversarial multi-view representation learning framework with dual encoders.
We demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains.
arXiv Detail & Related papers (2022-10-19T11:48:01Z)
- A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels [96.56299163691979]
This paper focuses on a new weakly-supervised salient object detection (SOD) task under hybrid labels.
To address the issues of label noise and quantity imbalance in this task, we design a new pipeline framework with three sophisticated training strategies.
Experiments on five SOD benchmarks show that our method achieves competitive performance against weakly-supervised/unsupervised methods.
arXiv Detail & Related papers (2022-09-07T06:45:39Z)
- Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification [28.38744876121834]
We introduce CAML (Contrastive Knowledge-Augmented Meta Learning), a novel approach for knowledge-enhanced few-shot learning.
We evaluate the performance of CAML in different few-shot learning scenarios.
arXiv Detail & Related papers (2022-07-25T17:01:29Z)
- Active Refinement for Multi-Label Learning: A Pseudo-Label Approach [84.52793080276048]
Multi-label learning (MLL) aims to associate a given instance with its relevant labels from a set of concepts.
Previous work on MLL mainly focused on the setting where the concept set is assumed to be fixed.
Many real-world applications require introducing new concepts into the set to meet new demands.
arXiv Detail & Related papers (2021-09-29T19:17:05Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Self-Supervised Prototypical Transfer Learning for Few-Shot Classification [11.96734018295146]
Our self-supervised transfer learning approach, ProtoTransfer, outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks.
In few-shot experiments with domain shift, our approach even has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
arXiv Detail & Related papers (2020-06-19T19:00:11Z)
- Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models [11.943374020641214]
We describe an approach that generates meta-tasks using generative models.
We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines (a rough sketch of the interpolation idea follows this list).
arXiv Detail & Related papers (2020-06-18T02:10:56Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
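As a loose illustration of the latent-interpolation idea named in the LASIUM entry above, here is a short Python sketch of how a pretrained generator could yield unlabeled meta-tasks; the mixing rule, shapes, and all names are assumptions for illustration, not the paper's actual sampling scheme.

```python
# Hypothetical sketch: meta-tasks from latent-space interpolation
# (stand-in names and mixing rule; see the LASIUM paper for the real method).
import torch

def make_pseudo_class(generator, anchor, num_samples, spread=0.2):
    # Latents near a shared anchor are assumed to decode to samples of the
    # same pseudo-class; mixing toward random latents adds intra-class variation.
    z_rand = torch.randn(num_samples, anchor.shape[-1])
    z = (1 - spread) * anchor + spread * z_rand
    return generator(z)

def sample_meta_task(generator, n_way=5, k_shot=1, q_query=4, latent_dim=128):
    # One anchor latent per pseudo-class; supports and queries are interpolated variants.
    anchors = torch.randn(n_way, latent_dim)
    support = torch.stack([make_pseudo_class(generator, a, k_shot) for a in anchors])
    query = torch.stack([make_pseudo_class(generator, a, q_query) for a in anchors])
    return support, query  # (n_way, k_shot, ...) and (n_way, q_query, ...)

# Usage with a linear layer standing in for a pretrained GAN/VAE decoder:
generator = torch.nn.Linear(128, 128)
support, query = sample_meta_task(generator)
print(support.shape, query.shape)  # torch.Size([5, 1, 128]) torch.Size([5, 4, 128])
```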
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.