GDC- Generalized Distribution Calibration for Few-Shot Learning
- URL: http://arxiv.org/abs/2204.05230v1
- Date: Mon, 11 Apr 2022 16:22:53 GMT
- Title: GDC- Generalized Distribution Calibration for Few-Shot Learning
- Authors: Shakti Kumar, Hussain Zaidi
- Abstract summary: Few-shot learning is an important problem in machine learning, as large labelled datasets take considerable time and effort to assemble.
Most few-shot learning algorithms suffer from one of two limitations: they either require the design of sophisticated models and loss functions, hampering interpretability, or employ statistical techniques whose assumptions may not hold across different datasets or features.
We propose a Generalized sampling method that learns to estimate few-shot distributions for classification as weighted random variables of all large classes.
- Score: 5.076419064097734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning is an important problem in machine learning, as large
labelled datasets take considerable time and effort to assemble. Most few-shot
learning algorithms suffer from one of two limitations: they either require the
design of sophisticated models and loss functions, thus hampering
interpretability; or employ statistical techniques but make assumptions that
may not hold across different datasets or features. Building on recent work
in extrapolating distributions of small sample classes from the most similar
larger classes, we propose a Generalized sampling method that learns to
estimate few-shot distributions for classification as weighted random variables
of all large classes. We use a form of covariance shrinkage to provide
robustness against singular covariances due to overparameterized features or
small datasets. We show that our sampled points are close to few-shot classes
even in cases when there are no similar large classes in the training set. Our
method works with arbitrary off-the-shelf feature extractors and outperforms
existing state-of-the-art on miniImagenet, CUB and Stanford Dogs datasets by 3%
to 5% on 5-way 1-shot and 5-way 5-shot tasks, and by 1% on challenging cross-domain
tasks.
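
The abstract points to two concrete ingredients: estimating a novel class's distribution as a weighted combination of all base-class statistics, and applying covariance shrinkage so the estimate stays non-singular. The following is a minimal NumPy sketch of that general recipe; the softmax weighting over class distances, the even blend of support and base means, and the shrinkage coefficient are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shrink_covariance(cov, alpha=0.1):
    """Blend the empirical covariance toward a scaled identity so it
    stays well-conditioned even with overparameterized features or
    tiny samples (alpha is an illustrative choice)."""
    d = cov.shape[0]
    target = (np.trace(cov) / d) * np.eye(d)
    return (1.0 - alpha) * cov + alpha * target

def calibrate_distribution(support, base_means, base_covs, tau=1.0):
    """Estimate a few-shot class as a Gaussian whose statistics are a
    weighted combination of *all* base-class statistics.

    support    : (k, d) support features for one novel class
    base_means : (C, d) base-class means
    base_covs  : (C, d, d) base-class covariances
    tau        : softmax temperature for the class weights (assumed form)
    """
    s_mean = support.mean(axis=0)
    # Weight every base class by its proximity to the support mean.
    dists = np.linalg.norm(base_means - s_mean, axis=1)
    weights = np.exp(-dists / tau)
    weights /= weights.sum()
    # Convex combination over all base classes, blended with the support.
    calib_mean = 0.5 * s_mean + 0.5 * (weights @ base_means)
    calib_cov = shrink_covariance(np.einsum('c,cij->ij', weights, base_covs))
    return calib_mean, calib_cov

def sample_virtual_features(mean, cov, n=100, seed=0):
    """Draw virtual features to augment the real support set."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)
```

Virtual features drawn from the calibrated Gaussian can then be pooled with the real support features to train a simple classifier (e.g. logistic regression), consistent with the abstract's claim that the method plugs into arbitrary off-the-shelf feature extractors.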
Related papers
- Calibrating Higher-Order Statistics for Few-Shot Class-Incremental Learning with Pre-trained Vision Transformers [12.590571371294729]
Few-shot class-incremental learning (FSCIL) aims to adapt the model to new classes from very few data (5 samples) without forgetting the previously learned classes.
Recent works in many-shot CIL (MSCIL) exploited pre-trained models to reduce forgetting and achieve better plasticity.
We use ViT models pre-trained on large-scale datasets for few-shot settings, which face the critical issue of low plasticity.
arXiv Detail & Related papers (2024-04-09T21:12:31Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Collaborative Learning with Different Labeling Functions [7.228285747845779]
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions.
We show that, when the data distributions satisfy a weaker realizability assumption, sample-efficient learning is still feasible.
arXiv Detail & Related papers (2024-02-16T04:32:22Z)
- Less is More: On the Feature Redundancy of Pretrained Models When Transferring to Few-shot Tasks [120.23328563831704]
Transferring a pretrained model to a downstream task can be as easy as conducting linear probing with target data.
We show that, for linear probing, the pretrained features can be extremely redundant when the downstream data is scarce.
arXiv Detail & Related papers (2023-10-05T19:00:49Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes (a rough sketch of the mixture idea appears after this list).
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Ortho-Shot: Low Displacement Rank Regularization with Data Augmentation for Few-Shot Learning [23.465747123791772]
In few-shot classification, the primary goal is to learn representations that generalize well for novel classes.
We propose an efficient low displacement rank (LDR) regularization strategy termed Ortho-Shot.
arXiv Detail & Related papers (2021-10-18T14:58:36Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias of the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z)
- Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image data by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalesced capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
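
As flagged in the Compound Batch Normalization entry above, one way to keep head classes from dominating normalization statistics is to model the feature space with a Gaussian mixture rather than a single Gaussian. The sketch below illustrates only that general idea with scikit-learn, normalizing each sample by its most likely component's statistics; it is not the paper's actual in-network normalization layer, and `n_components` and `eps` are assumed values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_normalize(features, n_components=3, eps=1e-5, seed=0):
    """Normalize each feature vector with the statistics of its most
    likely mixture component instead of one global mean/variance, so
    that abundant head classes do not dominate the estimate."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          random_state=seed).fit(features)
    comp = gmm.predict(features)                   # component per sample
    means = gmm.means_[comp]                       # (N, d) per-sample means
    stds = np.sqrt(gmm.covariances_[comp]) + eps   # diagonal std devs
    return (features - means) / stds
```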