Few-Shot Partial-Label Learning
- URL: http://arxiv.org/abs/2106.00984v1
- Date: Wed, 2 Jun 2021 07:03:54 GMT
- Title: Few-Shot Partial-Label Learning
- Authors: Yunfeng Zhao, Guoxian Yu, Lei Liu, Zhongmin Yan, Lizhen Cui and
Carlotta Domeniconi
- Abstract summary: Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples.
Existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner.
In this paper, we introduce an approach called FsPLL (Few-shot PLL).
- Score: 25.609766770479265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial-label learning (PLL) generally focuses on inducing a noise-tolerant
multi-class classifier by training on overly-annotated samples, each of which
is annotated with a set of labels, but only one is the valid label. A basic
premise of existing PLL solutions is that there are sufficient partial-label
(PL) samples for training. However, more often than not, only a few PL samples
are at hand when dealing with new tasks. Furthermore, existing few-shot
learning algorithms assume precise labels of the support set; as such,
irrelevant labels may seriously mislead the meta-learner and thus lead to
compromised performance. How to enable PLL under a few-shot learning setting is
an important problem, but not yet well studied. In this paper, we introduce an
approach called FsPLL (Few-shot PLL). FsPLL first performs adaptive distance
metric learning with an embedding network, rectifying prototypes on the tasks
previously encountered. Next, it calculates the prototype of each class of a
new task in the embedding network. An unseen example can then be classified via
its distance to each prototype. Experimental results on widely-used few-shot
datasets (Omniglot and miniImageNet) demonstrate that FsPLL outperforms
state-of-the-art methods across different settings, and that it needs fewer
samples to adapt quickly to new tasks.
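Since the pipeline above is described only in prose, the following minimal numpy sketch illustrates its two stages: responsibility-weighted prototype rectification over the candidate sets, followed by nearest-prototype classification. The reweighting rule, the fixed 2-D features standing in for the learned embedding network, and all names are our illustration, not the authors' implementation.

```python
import numpy as np

def rectified_prototypes(support_x, candidate_sets, n_classes, n_iters=5):
    """Estimate one prototype per class from partially-labeled support samples.

    support_x      : (n, d) embedded support samples
    candidate_sets : list of sets; candidate_sets[i] holds the candidate labels
                     of sample i, exactly one of which is the true label
    """
    n, _ = support_x.shape
    # Soft responsibility of each candidate label for each sample,
    # initialised uniformly over the candidate set.
    w = np.zeros((n, n_classes))
    for i, cs in enumerate(candidate_sets):
        w[i, list(cs)] = 1.0 / len(cs)

    for _ in range(n_iters):
        # Prototype of class c: responsibility-weighted mean of its samples.
        protos = (w.T @ support_x) / (w.sum(axis=0)[:, None] + 1e-8)
        # Rectification: re-distribute each sample's weight over its candidate
        # labels according to (negative squared) distance to the prototypes.
        dists = ((support_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        for i, cs in enumerate(candidate_sets):
            masked = np.full(n_classes, -np.inf)
            masked[list(cs)] = -dists[i, list(cs)]
            e = np.exp(masked - masked.max())
            w[i] = e / e.sum()
    return protos

def classify(query_x, protos):
    """Nearest-prototype prediction for embedded queries."""
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Toy usage with frozen 2-D "embeddings": 3 classes, 5 samples each,
# each candidate set = true label plus one distractor.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.3, size=(5, 2)) for c in range(3)])
cands = [{i // 5, (i // 5 + 1) % 3} for i in range(15)]
P = rectified_prototypes(X, cands, n_classes=3)
print(classify(X, P))
```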
Related papers
- LC-Protonets: Multi-label Few-shot learning for world music audio tagging [65.72891334156706]
We introduce Label-Combination Prototypical Networks (LC-Protonets) to address the problem of multi-label few-shot classification.
LC-Protonets generate one prototype per label combination, derived from the power set of labels present in the limited training items.
Our method is applied to automatic audio tagging across diverse music datasets, covering various cultures and including both modern and traditional music.
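The label-combination idea maps directly to code. The sketch below is a toy rendering of it (names and data are ours; the actual LC-Protonets works on learned audio embeddings): one prototype is averaged per label combination drawn from the power set of each support item's labels, and a query receives the label combination of its nearest prototype.

```python
from itertools import chain, combinations
import numpy as np

def label_combinations(labels):
    """All non-empty subsets (the power set) of an item's label set."""
    labels = sorted(labels)
    return chain.from_iterable(
        combinations(labels, r) for r in range(1, len(labels) + 1))

def lc_prototypes(support_x, label_sets):
    """Average one prototype per label combination seen in the support items."""
    groups = {}
    for x, labels in zip(support_x, label_sets):
        for combo in label_combinations(labels):
            groups.setdefault(combo, []).append(x)
    return {combo: np.mean(xs, axis=0) for combo, xs in groups.items()}

def predict(query, protos):
    """Assign the query the label combination of its nearest prototype."""
    return min(protos, key=lambda combo: np.linalg.norm(query - protos[combo]))

support = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
sets = [{"guitar", "rock"}, {"rock"}]
protos = lc_prototypes(support, sets)  # ('guitar',), ('rock',), ('guitar', 'rock')
print(predict(np.array([1.8, 1.9]), protos))  # -> ('rock',)
```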
arXiv Detail & Related papers (2024-09-17T15:13:07Z) - Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z) - Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z) - Towards Effective Visual Representations for Partial-Label Learning [49.91355691337053]
Under partial-label learning (PLL), for each training instance, only a set of ambiguous labels containing the unknown true label is accessible.
Without access to true labels, positive points are predicted using pseudo-labels that are inherently noisy, and negative points often require large batches or momentum encoders.
In this paper, we rethink the state-of-the-art contrastive method PiCO, which demonstrates significant scope for improvement in representation learning.
arXiv Detail & Related papers (2023-05-10T12:01:11Z) - ALIM: Adjusting Label Importance Mechanism for Noisy Partial Label
Learning [46.53885746394252]
Noisy partial label learning is an important branch of weakly supervised learning.
Most of the existing works attempt to detect noisy samples and estimate the ground-truth label for each noisy sample.
We propose a novel framework for noisy PLL with theoretical guarantees, called Adjusting Label Importance Mechanism (ALIM).
It aims to reduce the negative impact of detection errors by trading off the initial candidate set and model outputs.
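The trade-off can be pictured in a few lines. Treat the blending rule below as our assumption for illustration, not the paper's exact formula: a weight lambda mixes the initial candidate-set indicator with the model's current prediction, so detection errors in the candidate set are not irrevocable.

```python
import numpy as np

def adjusted_target(candidate_mask, model_probs, lam=0.4):
    """Blend the candidate-set indicator with the model's prediction (sketch).

    candidate_mask : (n_classes,) 0/1 indicator of the initial candidate set
    model_probs    : (n_classes,) current softmax output of the model
    lam            : trade-off weight; lam = 0 trusts the candidate set fully,
                     larger lam lets confident model outputs override it
    """
    mixed = candidate_mask + lam * model_probs
    return mixed / mixed.sum()

mask = np.array([1.0, 0.0, 1.0, 0.0])       # candidate set {0, 2}
probs = np.array([0.05, 0.80, 0.10, 0.05])  # the model believes class 1
print(adjusted_target(mask, probs))         # class 1 regains some weight
```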
arXiv Detail & Related papers (2023-01-28T03:42:53Z) - Multi-Instance Partial-Label Learning: Towards Exploiting Dual Inexact
Supervision [53.530957567507365]
In some real-world tasks, each training sample is associated with a candidate label set that contains one ground-truth label and some false positive labels.
In this paper, we formalize such problems as multi-instance partial-label learning (MIPL).
Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems.
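To make the dual inexactness concrete, here is a minimal sketch (field names are ours, not the paper's) of what a MIPL training sample looks like: supervision is inexact at the instance level, since no instance carries its own label, and at the bag level, since the bag's label is only known up to a candidate set.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MIPLSample:
    """One multi-instance partial-label (MIPL) training sample.

    instances  : a bag of feature vectors; which instance is responsible for
                 the bag's label is unknown (multi-instance inexactness)
    candidates : candidate bag labels; exactly one is the ground truth and
                 the rest are false positives (partial-label inexactness)
    """
    instances: np.ndarray
    candidates: set = field(default_factory=set)

bag = MIPLSample(instances=np.random.randn(5, 16), candidates={2, 7, 9})
print(bag.instances.shape, bag.candidates)
```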
arXiv Detail & Related papers (2022-12-18T03:28:51Z) - ARNet: Automatic Refinement Network for Noisy Partial Label Learning [41.577081851679765]
We propose a novel framework called "Automatic Refinement Network" (ARNet).
Our method consists of multiple rounds. In each round, we purify the noisy samples through two key modules, i.e., noisy sample detection and label correction.
We prove that our method is able to reduce the noise level of the dataset and eventually approximate the Bayes optimal classifier.
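The round structure can be sketched as follows; the detection and correction rules here are deliberately simple placeholders, not ARNet's actual modules: a sample is flagged as noisy when the model assigns little probability mass to its candidate set, and its label is then corrected to the model's most confident class.

```python
import numpy as np

def arnet_style_rounds(predict_probs, X, candidate_sets, n_rounds=3, thresh=0.5):
    """Alternate noisy-sample detection and label correction over rounds.

    predict_probs  : callable X -> (n, n_classes) probabilities, standing in
                     for the model that would be retrained each round
    candidate_sets : list of candidate-label sets, refined in place
    """
    for _ in range(n_rounds):
        probs = predict_probs(X)
        for i, cs in enumerate(candidate_sets):
            mass_in_set = sum(probs[i, c] for c in cs)
            if mass_in_set < thresh:            # detection: the candidate set
                best = int(probs[i].argmax())   # explains this sample poorly
                candidate_sets[i] = {best}      # correction: trust the model
    return candidate_sets

# Toy usage: a frozen "model" that is confident about class 1 for every sample.
dummy = lambda X: np.tile([0.1, 0.8, 0.1], (len(X), 1))
print(arnet_style_rounds(dummy, np.zeros((2, 4)), [{0, 2}, {1, 2}]))
# -> [{1}, {1, 2}]: the first set fails detection and is corrected
```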
arXiv Detail & Related papers (2022-11-09T10:01:25Z) - PiCO: Contrastive Label Disambiguation for Partial Label Learning [37.91710419258801]
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set.
In this work, we bridge the gap by addressing two key research challenges in representation learning and label disambiguation.
Our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm.
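The disambiguation half is straightforward to sketch: maintain a moving-average prototype per class and, for each sample, shift its pseudo-label to the candidate whose prototype is nearest in embedding space. This is a schematic rendering under our own naming, not PiCO's exact update.

```python
import numpy as np

class PrototypeDisambiguator:
    """Schematic class-prototype pseudo-label updates (PiCO-style sketch)."""

    def __init__(self, n_classes, dim, momentum=0.99):
        self.protos = np.zeros((n_classes, dim))
        self.m = momentum

    def update(self, z, pseudo_label):
        """Moving-average update of a class prototype with an embedding z."""
        self.protos[pseudo_label] = (
            self.m * self.protos[pseudo_label] + (1 - self.m) * z)

    def disambiguate(self, z, candidate_set):
        """Pick the candidate label whose prototype is nearest to z."""
        cands = sorted(candidate_set)
        dists = [np.linalg.norm(z - self.protos[c]) for c in cands]
        return cands[int(np.argmin(dists))]

d = PrototypeDisambiguator(n_classes=3, dim=2)
d.update(np.array([1.0, 0.0]), pseudo_label=0)
d.update(np.array([0.0, 1.0]), pseudo_label=2)
print(d.disambiguate(np.array([0.9, 0.1]), {0, 2}))  # -> 0
```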
arXiv Detail & Related papers (2022-01-22T07:48:41Z) - Provably Consistent Partial-Label Learning [120.4734093544867]
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
In this paper, we propose the first generation model of candidate label sets, and develop two novel methods that are guaranteed to be consistent.
Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two methods.
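The paper's own generation model is not spelled out in this summary, so the sketch below shows one common instantiation from the PLL literature purely as an illustration: the true label is always included, and every other label independently enters the candidate set with probability q.

```python
import numpy as np

def generate_candidate_set(true_label, n_classes, q=0.3, rng=None):
    """Sample a candidate label set that always contains the true label.

    Every incorrect label joins the set independently with probability q;
    this "independent flipping" model is one common choice in the PLL
    literature and is shown only to illustrate the idea.
    """
    rng = rng or np.random.default_rng()
    cs = {true_label}
    for c in range(n_classes):
        if c != true_label and rng.random() < q:
            cs.add(c)
    return cs

print(generate_candidate_set(true_label=3, n_classes=10, q=0.3))
```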
arXiv Detail & Related papers (2020-07-17T12:19:16Z)