Adversary-Aware Partial label learning with Label distillation
- URL: http://arxiv.org/abs/2304.00498v1
- Date: Sun, 2 Apr 2023 10:18:30 GMT
- Title: Adversary-Aware Partial label learning with Label distillation
- Authors: Cheng Chen, Yueming Lyu, Ivor W. Tsang
- Abstract summary: We present Adversary-Aware Partial Label Learning and introduce the $\textit{rival}$, a set of noisy labels, to the collection of candidate labels for each instance.
Our method achieves promising results on the CIFAR10, CIFAR100 and CUB200 datasets.
- Score: 47.18584755798137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To ensure that the data collected from human subjects is entrusted with a
secret, rival labels are introduced to conceal the information provided by the
participants on purpose. The corresponding learning task can be formulated as a
noisy partial-label learning problem. However, conventional partial-label
learning (PLL) methods are still vulnerable to the high ratio of noisy partial
labels, especially in a large labelling space. To learn a more robust model, we
present Adversary-Aware Partial Label Learning and introduce the
$\textit{rival}$, a set of noisy labels, to the collection of candidate labels
for each instance. By introducing the rival label, the predictive distribution
of PLL is factorised such that a handy predictive label is achieved with less
uncertainty coming from the transition matrix, assuming the rival generation
process is known. Nonetheless, the predictive accuracy is still insufficient to
produce a sufficiently accurate positive sample set to leverage the clustering
effect of the contrastive loss function. Moreover, the inclusion of rivals also
brings an inconsistency issue for the classifier and risk function due to the
intractability of the transition matrix. Consequently, an adversarial teacher
within momentum (ATM) disambiguation algorithm is proposed to cope with the
situation, allowing us to obtain a provably consistent classifier and risk
function. In addition, our method has shown high resiliency to the choice of
the label noise transition matrix. Extensive experiments demonstrate that our
method achieves promising results on the CIFAR10, CIFAR100 and CUB200 datasets.
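The abstract's key factorisation idea is that when the rival (noise) generation process is known as a transition matrix, the noisy-label posterior can be written in terms of the clean one, which is the basis of forward loss correction. A minimal sketch of that idea, assuming a known transition matrix `T` with `T[i, j] = P(noisy label j | true label i)` (the function name and toy numbers are illustrative, not from the paper):

```python
import numpy as np

def forward_corrected_nll(clean_posterior, noisy_label, T):
    """NLL of an observed noisy label under the factorised model.

    With a known transition matrix T, the noisy-label posterior is
    p(noisy | x) = T^T @ p(true | x), so the loss can be computed on
    noisy labels directly while the classifier models clean labels.
    """
    noisy_posterior = T.T @ clean_posterior   # p(noisy | x)
    return -np.log(noisy_posterior[noisy_label])

# Toy example: 3 classes with 20% symmetric rival noise.
T = np.full((3, 3), 0.1)
np.fill_diagonal(T, 0.8)

clean = np.array([0.7, 0.2, 0.1])             # model's p(true | x)
loss = forward_corrected_nll(clean, noisy_label=0, T=T)
```

Because `T` is a row-stochastic matrix and `clean` is a distribution, the corrected posterior remains a valid distribution, which is what makes the corrected risk statistically consistent.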
Related papers
- Learning with Confidence: Training Better Classifiers from Soft Labels [0.0]
In supervised machine learning, models are typically trained using data with hard labels, i.e., definite assignments of class membership.
We investigate whether incorporating label uncertainty, represented as discrete probability distributions over the class labels, improves the predictive performance of classification models.
arXiv Detail & Related papers (2024-09-24T13:12:29Z) - Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm [73.94839250910977]
Noisy multi-label learning has garnered increasing attention due to the challenges posed by collecting large-scale accurate labels.
The introduction of transition matrices can help model multi-label noise and enable the development of statistically consistent algorithms.
We propose a novel estimator that leverages label correlations without the need for anchor points or precise fitting of noisy class posteriors.
arXiv Detail & Related papers (2023-09-22T08:35:38Z) - Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we propose to pursue the label distribution consistency between predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
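One simple form a label-distribution-consistency term can take (an assumed illustration, not Dist-PU's exact objective) is to penalise the gap between the mean predicted positive probability over unlabeled data and the known class prior:

```python
import numpy as np

def label_dist_loss(pred_pos_probs, prior):
    """Penalty for deviating from the expected positive-class prior.

    Illustrative consistency term: the average predicted probability of
    the positive class over unlabeled data should match the prior pi.
    """
    return (pred_pos_probs.mean() - prior) ** 2

preds = np.array([0.2, 0.4, 0.3, 0.5])        # predicted p(positive | x)
loss = label_dist_loss(preds, prior=0.35)     # mean is 0.35, so loss ~ 0
```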
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Label Noise-Robust Learning using a Confidence-Based Sieving Strategy [15.997774467236352]
In learning tasks with label noise, improving model robustness against overfitting is a pivotal challenge.
Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge.
We propose a novel discriminator metric called confidence error and a sieving strategy called CONFES to differentiate between the clean and noisy samples effectively.
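A confidence-error style sieve can be sketched as follows (an illustrative simplification, not the paper's exact CONFES definition): compare the probability the model assigns to its own top prediction with the probability it assigns to the annotated label, and keep only samples where the gap is small.

```python
import numpy as np

def sieve(prob_matrix, labels, threshold=0.2):
    """Return indices of samples kept as (likely) clean.

    prob_matrix: (n_samples, n_classes) softmax outputs.
    labels:      (n_samples,) annotated (possibly noisy) labels.
    A small gap between the top predicted probability and the probability
    of the annotated label suggests the annotation is clean.
    """
    idx = np.arange(len(labels))
    errors = prob_matrix.max(axis=1) - prob_matrix[idx, labels]
    return np.where(errors < threshold)[0]

probs = np.array([[0.90, 0.05, 0.05],   # annotated 0 -> gap 0.00 (clean)
                  [0.80, 0.15, 0.05]])  # annotated 2 -> gap 0.75 (noisy)
labels = np.array([0, 2])
kept = sieve(probs, labels)             # keeps only the first sample
```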
arXiv Detail & Related papers (2022-10-11T10:47:28Z) - S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10 and CIFAR100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z) - Multi-class Probabilistic Bounds for Self-learning [13.875239300089861]
Pseudo-labeling is prone to error and runs the risk of adding noisy labels into unlabeled training data.
We present a probabilistic framework for analyzing self-learning in the multi-class classification scenario with partially labeled data.
arXiv Detail & Related papers (2021-09-29T13:57:37Z) - In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning [53.1047775185362]
Pseudo-labeling (PL) is a general SSL approach that does not require domain-specific data augmentations, but it performs relatively poorly in its original formulation.
We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models.
We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process.
arXiv Detail & Related papers (2021-01-15T23:29:57Z)
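The uncertainty-aware selection idea can be sketched like this (a hedged illustration in the spirit of UPS, not its exact criterion): accept a pseudo label only when the mean predicted probability across stochastic forward passes (e.g. MC dropout) is high and its variance is low.

```python
import numpy as np

def select_pseudo_labels(mc_probs, conf_thresh=0.9, var_thresh=0.01):
    """Select pseudo labels that are both confident and stable.

    mc_probs: (n_passes, n_samples, n_classes) softmax outputs from
    several stochastic forward passes over the same unlabeled batch.
    """
    mean = mc_probs.mean(axis=0)
    var = mc_probs.var(axis=0)
    labels = mean.argmax(axis=1)
    idx = np.arange(mean.shape[0])
    keep = (mean[idx, labels] >= conf_thresh) & (var[idx, labels] <= var_thresh)
    return labels, keep

mc_probs = np.array([
    [[0.95, 0.05], [0.99, 0.01]],   # pass 1
    [[0.95, 0.05], [0.60, 0.40]],   # pass 2
])
labels, keep = select_pseudo_labels(mc_probs)
# Sample 0 is confident and stable (kept); sample 1's prediction
# fluctuates across passes, so it is rejected despite one high score.
```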
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.