Diversity Helps: Unsupervised Few-shot Learning via Distribution
Shift-based Data Augmentation
- URL: http://arxiv.org/abs/2004.05805v2
- Date: Thu, 17 Sep 2020 06:00:36 GMT
- Title: Diversity Helps: Unsupervised Few-shot Learning via Distribution
Shift-based Data Augmentation
- Authors: Tiexin Qin and Wenbin Li and Yinghuan Shi and Yang Gao
- Abstract summary: Few-shot learning aims to learn a new concept when only a few training examples are available.
In this paper, we develop a novel framework called Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation.
In experiments, few-shot models learned by ULDA can achieve superior generalization performance.
- Score: 21.16237189370515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning aims to learn a new concept when only a few
training examples are available, and it has been extensively explored in
recent years. However, most current works rely heavily on a large-scale
labeled auxiliary set to train their models in an episodic-training paradigm.
Such a supervised setting essentially limits the widespread use of few-shot
learning algorithms. Instead, in this paper, we develop a novel framework
called Unsupervised Few-shot Learning via Distribution Shift-based Data
Augmentation (ULDA), which focuses on the distribution diversity inside each
constructed pretext few-shot task when using data augmentation. Importantly,
we highlight the value of distribution diversity in these augmentation-based
pretext few-shot tasks: it effectively alleviates overfitting and makes the
few-shot model learn more robust feature representations. In ULDA, we
systematically investigate the effects of different augmentation techniques
and propose to strengthen the distribution diversity (or difference) between
the query set and the support set in each few-shot task by augmenting the two
sets diversely (i.e., distribution shifting). In this way, even when combined
with simple augmentation techniques (e.g., random crop, color jittering, or
rotation), ULDA yields significant improvements. In experiments, few-shot
models learned with ULDA achieve superior generalization performance and
state-of-the-art results on a variety of established few-shot learning tasks
on Omniglot and miniImageNet. The source code is available at
https://github.com/WonderSeven/ULDA.
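
To make the distribution-shifting idea concrete, below is a minimal sketch of
how a pretext few-shot task could be built from unlabeled images, with the
support and query sets augmented by deliberately different pipelines. This is
not the authors' released implementation; the torchvision transforms, image
size, and task shape are illustrative assumptions.

```python
# Minimal sketch of ULDA-style distribution-shift augmentation (illustrative,
# not the authors' code). Each unlabeled image acts as its own pseudo-class;
# support and query views are drawn from *different* augmentation pipelines,
# so the two sets follow shifted distributions.
import torch
from torchvision import transforms

# Support-set pipeline: mild, geometry-preserving augmentations (assumption).
support_aug = transforms.Compose([
    transforms.RandomResizedCrop(84, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Query-set pipeline: a deliberately different family (stronger crop, color
# jitter, rotation), shifting the query distribution away from the support.
query_aug = transforms.Compose([
    transforms.RandomResizedCrop(84, scale=(0.5, 1.0)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomRotation(30),
    transforms.ToTensor(),
])

def build_pretext_task(images, n_way=5, k_query=3):
    """Build a 1-shot, n_way pretext task from n_way unlabeled PIL images."""
    support, query, query_labels = [], [], []
    for label, img in enumerate(images[:n_way]):
        support.append(support_aug(img))        # one support view per class
        for _ in range(k_query):
            query.append(query_aug(img))        # several shifted query views
            query_labels.append(label)
    return (torch.stack(support),               # [n_way, C, H, W]
            torch.stack(query),                 # [n_way * k_query, C, H, W]
            torch.tensor(query_labels))         # [n_way * k_query]
```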
Related papers
- Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles [95.49699178874683]
We propose DiffDiv, an ensemble diversification framework exploiting Diffusion Probabilistic Models (DPMs).
We show that DPMs can generate images with novel feature combinations, even when trained on samples displaying correlated input features.
We show that DPM-guided diversification is sufficient to remove dependence on shortcut cues, without a need for additional supervised signals.
arXiv Detail & Related papers (2023-11-23T15:47:33Z) - Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts
in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z) - Detail Reinforcement Diffusion Model: Augmentation Fine-Grained Visual Categorization in Few-Shot Conditions [11.121652649243119]
Diffusion models have been widely adopted in data augmentation due to their outstanding diversity in data generation.
We propose a novel approach termed the detail reinforcement diffusion model (DRDM).
It leverages the rich knowledge of large models for fine-grained data augmentation and comprises two key components: discriminative semantic recombination (DSR) and spatial knowledge reference (SKR).
arXiv Detail & Related papers (2023-09-15T01:28:59Z) - An Efficient General-Purpose Modular Vision Model via Multi-Task
Heterogeneous Training [79.78201886156513]
We present a model that can perform multiple vision tasks and can be adapted to other downstream tasks efficiently.
Our approach achieves comparable results to single-task state-of-the-art models and demonstrates strong generalization on downstream tasks.
arXiv Detail & Related papers (2023-06-29T17:59:57Z) - Generalizable Low-Resource Activity Recognition with Diverse and
Discriminative Representation Learning [24.36351102003414]
Human activity recognition (HAR) is a time series classification task that focuses on identifying the motion patterns from human sensor readings.
We propose a novel approach called Diverse and Discriminative representation Learning (DDLearn) for generalizable low-resource HAR.
Our method significantly outperforms state-of-the-art methods by an average accuracy improvement of 9.5%.
arXiv Detail & Related papers (2023-05-25T08:24:22Z) - GDC- Generalized Distribution Calibration for Few-Shot Learning [5.076419064097734]
Few-shot learning is an important problem in machine learning, as large labelled datasets take considerable time and effort to assemble.
Most few-shot learning algorithms suffer from limitations such as requiring the design of sophisticated models and loss functions, which hampers interpretability.
We propose a Generalized sampling method that learns to estimate few-shot distributions for classification as weighted random variables of all large classes.
arXiv Detail & Related papers (2022-04-11T16:22:53Z) - The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z) - Exploring the Diversity and Invariance in Yourself for Visual
Pre-Training Task [192.74445148376037]
Self-supervised learning methods have achieved remarkable success in visual pre-training tasks.
However, these methods either focus only on limited regions, or the features they extract from totally different regions inside each image are nearly the same.
This paper introduces Exploring the Diversity and Invariance in Yourself (E-DIY).
arXiv Detail & Related papers (2021-06-01T14:52:36Z) - MetaKernel: Learning Variational Random Features with Limited Labels [120.90737681252594]
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose meta-learning kernels with random Fourier features for few-shot learning, which we call MetaKernel (a generic sketch of random Fourier features follows this list).
arXiv Detail & Related papers (2021-05-08T21:24:09Z)
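
As general background for the MetaKernel entry above, the classical random
Fourier feature construction of Rahimi and Recht approximates a
shift-invariant (e.g., RBF) kernel with an explicit finite-dimensional
feature map. The sketch below shows that standard construction only; it is
an assumption-level illustration, not the paper's meta-learned, variational
variant.

```python
# Classical random Fourier features approximating the RBF kernel
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)). Background only; MetaKernel
# instead learns the random features variationally rather than fixing them.
import numpy as np

def random_fourier_features(X, D=2048, sigma=1.0, seed=0):
    """Map X of shape (n, d) to z(X) of shape (n, D), with z(x) @ z(y) ~ k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # samples from the kernel's spectrum
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)       # random phase offsets
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Sanity check: feature inner products approximate the exact kernel value.
rng = np.random.default_rng(1)
x = rng.normal(size=16)
y = x + 0.1 * rng.normal(size=16)                   # a nearby point
Z = random_fourier_features(np.stack([x, y]))
approx = Z[0] @ Z[1]
exact = np.exp(-np.sum((x - y) ** 2) / 2.0)         # sigma = 1.0
print(f"approx={approx:.3f}  exact={exact:.3f}")
```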
This list is automatically generated from the titles and abstracts of the papers on this site.