Instance Credibility Inference for Few-Shot Learning
- URL: http://arxiv.org/abs/2003.11853v2
- Date: Fri, 3 Apr 2020 01:40:28 GMT
- Title: Instance Credibility Inference for Few-Shot Learning
- Authors: Yikai Wang, Chengming Xu, Chen Liu, Li Zhang, Yanwei Fu
- Abstract summary: Few-shot learning aims to recognize new objects with extremely limited training data for each category.
This paper presents a simple statistical approach, dubbed Instance Credibility Inference (ICI), to exploit the distribution support of unlabeled instances for few-shot learning.
This simple approach establishes new state-of-the-art results on four widely used few-shot learning benchmark datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning (FSL) aims to recognize new objects with extremely limited
training data for each category. Previous efforts are made by either leveraging
meta-learning paradigm or novel principles in data augmentation to alleviate
this extremely data-scarce problem. In contrast, this paper presents a simple
statistical approach, dubbed Instance Credibility Inference (ICI), to exploit
the distribution support of unlabeled instances for few-shot learning.
Specifically, we first train a linear classifier with the labeled few-shot
examples and use it to infer the pseudo-labels for the unlabeled data. To
measure the credibility of each pseudo-labeled instance, we then propose to
solve another linear regression hypothesis by increasing the sparsity of the
incidental parameters and rank the pseudo-labeled instances by their sparsity
degree. We select the most trustworthy pseudo-labeled instances alongside the
labeled examples to re-train the linear classifier. This process is iterated
until all the unlabeled samples are included in the expanded training set, i.e.,
the pseudo-labels of the unlabeled data pool have converged. Extensive experiments
under two few-shot settings show that our simple approach can establish new
state-of-the-art results on four widely used few-shot learning benchmark datasets
including miniImageNet, tieredImageNet, CIFAR-FS, and CUB. Our code is
available at: https://github.com/Yikai-Wang/ICI-FSL
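The iterative procedure described in the abstract (train a linear classifier, pseudo-label, rank by the sparsity of incidental parameters, select, retrain) can be sketched as follows. This is a simplified illustration, not the paper's exact solver: the ridge-regression classifier and the one-step soft-threshold proxy for the incidental-parameter regularization path are assumptions made here for brevity.

```python
import numpy as np

def train_linear_classifier(X, y, n_classes, reg=1e-3):
    # One-vs-all ridge regression classifier (a stand-in for the
    # paper's linear classifier; the exact solver is an assumption).
    Y = np.eye(n_classes)[y]                      # one-hot targets, (n, c)
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return W                                      # (d, c) weights

def credibility_ranking(X, pseudo_onehot, W, lambdas):
    # Incidental-parameter view: Y ~ X @ W + Gamma.  Along a grid of
    # sparsity levels, an instance whose row of Gamma vanishes earliest
    # (i.e. has the smallest residual) is deemed most credible.  This
    # one-step soft-threshold proxy is a simplification of the full
    # regularization path used in the paper.
    residual = pseudo_onehot - X @ W              # (m, c) residual matrix
    row_norm = np.linalg.norm(residual, axis=1)   # per-instance magnitude
    vanish_lambda = np.empty(len(row_norm))
    for i, r in enumerate(row_norm):
        # smallest lambda on the grid that zeroes this instance's row
        hits = [lam for lam in lambdas if r <= lam]
        vanish_lambda[i] = min(hits) if hits else np.inf
    return np.argsort(vanish_lambda)              # most credible first

def ici_iteration(X_lab, y_lab, X_unlab, n_classes, n_select, lambdas):
    # One round of ICI: classify, pseudo-label, rank, select, retrain.
    W = train_linear_classifier(X_lab, y_lab, n_classes)
    pseudo = (X_unlab @ W).argmax(axis=1)         # pseudo-labels
    order = credibility_ranking(
        X_unlab, np.eye(n_classes)[pseudo], W, lambdas)
    keep = order[:n_select]                       # most trustworthy subset
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, pseudo[keep]])
    return train_linear_classifier(X_aug, y_aug, n_classes), keep
```

In the full method this round is repeated, growing the selected subset until every unlabeled sample is included and the pseudo-labels have converged.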
Related papers
- Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation [18.57840057487926]
Learning from Label Proportions (LLP) is a learning problem where only aggregate level labels are available for groups of instances, called bags, during training.
This setting arises in domains like advertising and medicine due to privacy considerations.
We propose a novel algorithmic framework for this problem that iteratively performs two main steps.
arXiv Detail & Related papers (2023-10-12T06:09:26Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, named Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach, called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z)
- AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning [25.27527138880104]
Semi-supervised learning has proven to be an effective paradigm for leveraging a huge amount of unlabeled data.
We introduce AggMatch, which refines initial pseudo labels by using different confident instances.
We conduct experiments to demonstrate the effectiveness of AggMatch over the latest methods on standard benchmarks.
arXiv Detail & Related papers (2022-01-25T16:41:54Z)
- Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling [96.8742582581744]
We present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK)
MAK follows three simple principles: tailness, proximity, and diversity.
We demonstrate that MAK can consistently improve both the overall representation quality and the class balancedness of the learned features.
arXiv Detail & Related papers (2021-11-01T15:09:41Z)
- Few-shot Learning via Dependency Maximization and Instance Discriminant Analysis [21.8311401851523]
We study the few-shot learning problem, where a model learns to recognize new objects with extremely few labeled data per category.
We propose a simple approach to exploit unlabeled data accompanying the few-shot task for improving few-shot performance.
arXiv Detail & Related papers (2021-09-07T02:19:01Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose a semi-supervised learning (SSL) approach that uses unlabeled examples to train models.
Our proposed approach, Dash, enjoys its adaptivity in terms of unlabeled data selection.
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
- How to trust unlabeled data? Instance Credibility Inference for Few-Shot Learning [47.21354101796544]
This paper presents a statistical approach, dubbed Instance Credibility Inference (ICI), to exploit the support of unlabeled instances for few-shot visual recognition.
We rank the credibility of pseudo-labeled instances along the regularization path of their corresponding incidental parameters, and the most trustworthy pseudo-labeled examples are preserved as the augmented labeled instances.
arXiv Detail & Related papers (2020-07-15T03:38:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.