Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
- URL: http://arxiv.org/abs/2005.05407v1
- Date: Mon, 11 May 2020 20:13:19 GMT
- Title: Multi-Level Generative Models for Partial Label Learning with Non-random Label Noise
- Authors: Yan Yan, Yuhong Guo
- Abstract summary: We propose a novel multi-level generative model for partial label learning (MGPLL).
It learns both a label-level adversarial generator and a feature-level adversarial generator under a bi-directional mapping framework.
The proposed approach demonstrates state-of-the-art performance for partial label learning.
- Score: 47.01917619550429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial label (PL) learning tackles the problem where each training instance
is associated with a set of candidate labels that include both the true label
and irrelevant noise labels. In this paper, we propose a novel multi-level
generative model for partial label learning (MGPLL), which tackles the problem
by learning both a label-level adversarial generator and a feature-level
adversarial generator under a bi-directional mapping framework between the
label vectors and the data samples. Specifically, MGPLL uses a conditional
noise label generation network to model the non-random noise labels and perform
label denoising, and uses a multi-class predictor to map the training instances
to the denoised label vectors, while a conditional data feature generator is
used to form an inverse mapping from the denoised label vectors to data
samples. Both the noise label generator and the data feature generator are
learned in an adversarial manner to match the observed candidate labels and
data features, respectively. Extensive experiments are conducted on synthesized
and real-world partial label datasets. The proposed approach demonstrates
state-of-the-art performance for partial label learning.
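No reference implementation accompanies this abstract, so the following is a minimal PyTorch sketch of the three components it names: a conditional noise-label generator, a multi-class predictor, and a conditional feature generator. All class names, layer sizes, and the shape-level usage at the bottom are illustrative assumptions; the adversarial discriminators and the losses that match the observed candidate labels and features are omitted for brevity.

```python
# Minimal sketch of the three MGPLL components named in the abstract.
# Class names, layer sizes, and wiring are illustrative assumptions,
# not the authors' reference implementation.
import torch
import torch.nn as nn

class NoiseLabelGenerator(nn.Module):
    """Conditional generator: label vector -> per-class noise-label scores."""
    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes), nn.Sigmoid())
    def forward(self, y):
        return self.net(y)  # models the non-random noise labels

class Predictor(nn.Module):
    """Multi-class predictor: instance features -> denoised label scores."""
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))
    def forward(self, x):
        return self.net(x)

class FeatureGenerator(nn.Module):
    """Conditional generator: denoised label vector (+ noise) -> features."""
    def __init__(self, num_classes, noise_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))
    def forward(self, y, z):
        return self.net(torch.cat([y, z], dim=1))

if __name__ == "__main__":
    x = torch.randn(4, 32)                 # 4 instances, 32-dim features
    f = Predictor(32, 10)
    g_y = NoiseLabelGenerator(10)
    g_x = FeatureGenerator(10, noise_dim=8, out_dim=32)
    y_hat = torch.softmax(f(x), dim=1)     # denoised label distribution
    noise_scores = g_y(y_hat)              # non-random noise-label scores
    x_hat = g_x(y_hat, torch.randn(4, 8))  # inverse mapping back to features
```

In training, the predicted labels plus the generated noise labels would be pushed adversarially toward the observed candidate sets, while the generated features are pushed toward the observed data distribution.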
Related papers
- Exploiting Conjugate Label Information for Multi-Instance Partial-Label Learning [61.00359941983515]
Multi-instance partial-label learning (MIPL) addresses scenarios where each training sample is represented as a multi-instance bag associated with a candidate label set containing one true label and several false positives.
ELIMIPL exploits the conjugate label information to improve the disambiguation performance.
arXiv Detail & Related papers (2024-08-26T15:49:31Z)
- Inaccurate Label Distribution Learning with Dependency Noise [52.08553913094809]
We introduce the Dependent Noise-based Inaccurate Label Distribution Learning (DN-ILDL) framework to tackle the challenges posed by noise in label distribution learning.
We show that DN-ILDL effectively addresses the ILDL problem and outperforms existing LDL methods.
arXiv Detail & Related papers (2024-05-26T07:58:07Z)
- Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (a partial label) assumed to contain the true label.
Noisy PLL (NPLL) relaxes this constraint by allowing some candidate label sets to exclude the true label, which makes the problem setting more practical.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest-neighbour algorithm (sketched below).
arXiv Detail & Related papers (2024-02-07T13:32:47Z)
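A hypothetical numpy sketch of the weighted nearest-neighbour pseudo-labelling step summarized above; the inverse-distance weighting and the voting rule over neighbours' candidate sets are assumptions, not the paper's exact algorithm.

```python
# Assign each sample the class most supported by its neighbours'
# candidate labels, weighted by proximity. Illustrative only.
import numpy as np

def knn_pseudo_labels(feats, candidates, k=10):
    """feats: (n, d) features; candidates: (n, c) binary candidate-label masks."""
    n = feats.shape[0]
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)            # a sample never votes for itself
    pseudo = np.empty(n, dtype=int)
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]        # k nearest neighbours
        w = 1.0 / (dists[i, nbrs] + 1e-8)      # closer neighbours vote harder
        votes = (w[:, None] * candidates[nbrs]).sum(axis=0)
        pseudo[i] = int(np.argmax(votes))      # most-supported class wins
    return pseudo
```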
- Deep Partial Multi-Label Learning with Graph Disambiguation [27.908565535292723]
We propose a novel deep Partial multi-Label model with grAph-disambIguatioN (PLAIN).
Specifically, we introduce the instance-level and label-level similarities to recover label confidences.
At each training epoch, labels are propagated on the instance and label graphs to produce relatively accurate pseudo-labels.
arXiv Detail & Related papers (2023-05-10T04:02:08Z)
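The propagation step above can be pictured with a short sketch. This is not PLAIN's exact procedure: the instance-graph update below is a generic confidence-propagation rule, the label-graph pass is omitted, and alpha is an invented hyperparameter.

```python
# Smooth per-instance label confidences over an instance-similarity
# graph, keeping probability mass on each sample's candidate labels.
import numpy as np

def propagate_confidences(W, conf, candidates, alpha=0.5, steps=1):
    """W: (n, n) instance similarities; conf, candidates: (n, c) arrays."""
    P = W / (W.sum(axis=1, keepdims=True) + 1e-8)    # row-stochastic transitions
    q = conf.copy()
    for _ in range(steps):
        q = alpha * (P @ q) + (1 - alpha) * conf     # blend with neighbours
        q = q * candidates                           # zero out non-candidates
        q = q / (q.sum(axis=1, keepdims=True) + 1e-8)
    return q                                         # refined pseudo-labels
```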
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label recognition models with partial positive labels (MLR-PPL) attracts increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- A Label Dependence-aware Sequence Generation Model for Multi-level Implicit Discourse Relation Recognition [31.179555215952306]
Implicit discourse relation recognition is a challenging but crucial task in discourse analysis.
We propose a Label Dependence-aware Sequence Generation Model (LDSGM) for this task.
We develop a mutual learning enhanced training method to exploit the label dependence in a bottom-up direction.
arXiv Detail & Related papers (2021-12-22T09:14:03Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on CIFAR-10 and CIFAR-100 with artificial noise, and on real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
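As a toy illustration of the sample-selection mechanism described in the S3 summary above, the sketch below keeps a sample when its annotated label agrees with the label distribution of its k nearest feature-space neighbours; k and the agreement threshold are invented for the example.

```python
# Select likely-clean samples by neighbourhood label consistency.
import numpy as np

def select_clean(feats, labels, num_classes, k=10, thresh=0.5):
    """feats: (n, d) features; labels: (n,) int annotated labels."""
    n = feats.shape[0]
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)             # exclude self from neighbours
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]
        counts = np.bincount(labels[nbrs], minlength=num_classes)
        # keep the sample if its label dominates its neighbourhood
        keep[i] = counts[labels[i]] / k >= thresh
    return keep
```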
- Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise will degrade the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z)
- A Realistic Simulation Framework for Learning with Label Noise [17.14439597393087]
We show that this framework generates synthetic noisy labels that exhibit important characteristics of real-world label noise.
We also benchmark several existing algorithms for learning with noisy labels.
We propose a new technique, Label Quality Model (LQM), that leverages annotator features to predict and correct against noisy labels.
arXiv Detail & Related papers (2021-07-23T18:53:53Z)
- Rethinking Noisy Label Models: Labeler-Dependent Noise with Adversarial Awareness [2.1930130356902207]
We propose a principled model of label noise that generalizes instance-dependent noise to multiple labelers.
Under our labeler-dependent model, label noise manifests itself under two modalities: natural error of good-faith labelers, and adversarial labels provided by malicious actors.
We present two adversarial attack vectors that more accurately reflect the label noise that may be encountered in real-world settings.
arXiv Detail & Related papers (2021-05-28T19:58:18Z)
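To make the two modalities of the labeler-dependent model above concrete, here is a toy numpy simulation: good-faith labelers err uniformly at random, while adversarial labelers flip labels along a fixed systematic mapping. The rates and flip rules are invented for the example, not the paper's formal model.

```python
# Toy simulation of labeler-dependent label noise. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def noisy_labels(y, num_classes, labeler_type, rate=0.2):
    """y: (n,) int true labels; returns labels corrupted at `rate`."""
    y = y.copy()
    flip = rng.random(y.shape[0]) < rate
    if labeler_type == "good_faith":
        # natural error: replace with a uniformly random other class
        y[flip] = (y[flip] + rng.integers(1, num_classes, flip.sum())) % num_classes
    elif labeler_type == "adversarial":
        # targeted error: consistently map each class to a fixed wrong class
        y[flip] = (y[flip] + 1) % num_classes
    return y
```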