RNNP: A Robust Few-Shot Learning Approach
- URL: http://arxiv.org/abs/2011.11067v1
- Date: Sun, 22 Nov 2020 17:23:08 GMT
- Title: RNNP: A Robust Few-Shot Learning Approach
- Authors: Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri
- Abstract summary: We propose a novel robust few-shot learning approach.
Our method relies on generating robust prototypes from a set of few examples.
We evaluate our method on standard mini-ImageNet and tiered-ImageNet datasets.
- Score: 39.8046809855363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning from a few examples is an important practical aspect of training
classifiers. Various works have examined this aspect quite well. However, all
existing approaches assume that the few examples provided are always correctly
labeled. This is a strong assumption, especially if one considers the current
techniques for labeling using crowd-based labeling services. We address this
issue by proposing a novel robust few-shot learning approach. Our method relies
on generating robust prototypes from a set of few examples. Specifically, our
method refines the class prototypes by producing hybrid features from the
support examples of each class. The refined prototypes help to classify the
query images better. Our method can replace the evaluation phase of any
few-shot learning method that uses a nearest neighbor prototype-based
evaluation procedure to make them robust. We evaluate our method on standard
mini-ImageNet and tiered-ImageNet datasets. We perform experiments with various
label corruption rates in the support examples of the few-shot classes. We
obtain significant improvement over widely used few-shot learning methods that
suffer significant performance degeneration in the presence of label noise. We
finally provide extensive ablation experiments to validate our method.
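The abstract describes a nearest-neighbor prototype-based evaluation in which class prototypes are refined with hybrid features built from the support examples. As a rough illustration only, the sketch below implements a generic version of that idea: prototypes are averaged over the original support features plus mixup-style convex combinations of support pairs, and queries are assigned to the nearest prototype. The function names, the mixing coefficient `lam`, and the pairing scheme are illustrative assumptions, not the exact RNNP procedure from the paper.

```python
import numpy as np

def hybrid_prototypes(support, lam=0.5, rng=None):
    """Compute refined class prototypes from few support examples.

    A minimal sketch of prototype refinement via hybrid features:
    each class prototype is the mean over the original support
    features and mixup-style combinations of support pairs.
    support: dict mapping class label -> (k, d) array of features.
    NOTE: this is an assumed, generic scheme, not the paper's exact method.
    """
    rng = np.random.default_rng(rng)
    protos = {}
    for label, feats in support.items():
        # Hybrid features: convex combinations of randomly paired supports.
        idx = rng.permutation(feats.shape[0])
        hybrids = lam * feats + (1.0 - lam) * feats[idx]
        # Refined prototype: mean over originals and hybrids.
        protos[label] = np.vstack([feats, hybrids]).mean(axis=0)
    return protos

def classify(query, protos):
    """Nearest-prototype classification by Euclidean distance."""
    labels = list(protos)
    dists = [np.linalg.norm(query - protos[l]) for l in labels]
    return labels[int(np.argmin(dists))]
```

Because the refined prototype averages over more (hybrid) points per class, a single mislabeled support example pulls it less far from the true class center, which is the intuition behind robustness to label noise.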
Related papers
- PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification [0.6378763934218754]
We propose a novel method called PrototypeFormer, exploring the relationships among category prototypes in the few-shot scenario.
Despite its simplicity, our method performs remarkably well, with no bells and whistles.
Our method surpasses state-of-the-art results by 0.57% and 6.84% in accuracy, respectively.
arXiv Detail & Related papers (2023-10-05T12:56:34Z) - An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z) - A Simple Approach to Adversarial Robustness in Few-shot Image Classification [20.889464448762176]
We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers.
We also present a method for the novel classification task, based on calibrating the centroid of the few-shot category towards the base classes.
arXiv Detail & Related papers (2022-04-11T22:46:41Z) - Learning Class-level Prototypes for Few-shot Learning [24.65076873131432]
Few-shot learning aims to recognize new categories using very few labeled samples.
We propose a framework for few-shot classification, which can learn to generate preferable prototypes from few support data.
arXiv Detail & Related papers (2021-08-25T06:33:52Z) - Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impact of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z) - Contextualizing Enhances Gradient Based Meta Learning [7.009032627535598]
We show how to equip meta learning methods with contextualizers and show that their use can significantly boost performance on a range of few-shot learning datasets.
Our approach is particularly apt for low-data environments where it is difficult to update parameters without overfitting.
arXiv Detail & Related papers (2020-07-17T04:01:56Z) - Generalized Few-Shot Video Classification with Video Retrieval and Feature Generation [132.82884193921535]
We argue that previous methods underestimate the importance of video feature learning and propose a two-stage approach.
We show that this simple baseline approach outperforms prior few-shot video classification methods by over 20 points on existing benchmarks.
We present two novel approaches that yield further improvement.
arXiv Detail & Related papers (2020-07-09T13:05:32Z) - UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision.
arXiv Detail & Related papers (2020-06-12T22:45:47Z) - Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2 to 20 points on current benchmarks.
arXiv Detail & Related papers (2020-03-16T00:29:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.