Multi-class Probabilistic Bounds for Self-learning
- URL: http://arxiv.org/abs/2109.14422v1
- Date: Wed, 29 Sep 2021 13:57:37 GMT
- Title: Multi-class Probabilistic Bounds for Self-learning
- Authors: Vasilii Feofanov and Emilie Devijver and Massih-Reza Amini
- Abstract summary: Pseudo-labeling is prone to error and runs the risk of adding noisy labels into unlabeled training data.
We present a probabilistic framework for analyzing self-learning in the multi-class classification scenario with partially labeled data.
- Score: 13.875239300089861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-learning is a classical approach for learning from both
labeled and unlabeled observations: unlabeled training instances whose
confidence score exceeds a predetermined threshold are assigned pseudo-labels.
At the same time, pseudo-labeling is prone to error and risks introducing
noisy labels into the training data. In this paper, we present a probabilistic
framework for analyzing self-learning in the multi-class classification
scenario with partially labeled data. First, we derive a transductive bound
on the risk of the multi-class majority-vote classifier. Based on this result,
we propose to choose the pseudo-labeling threshold automatically by minimizing
the transductive bound. Then, we introduce a mislabeling error model to
analyze the error of the majority-vote classifier on pseudo-labeled data, and
we derive a probabilistic C-bound on the majority-vote error when imperfect
labels are given. Empirical results on different data sets show the
effectiveness of our framework compared to several state-of-the-art
semi-supervised approaches.
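The self-learning loop the abstract describes can be sketched in a few lines. This is a minimal illustration only: the fixed threshold (0.9), the logistic-regression base classifier, and the synthetic data are assumptions for the example, not the paper's method, which instead chooses the threshold by minimizing a transductive bound on the majority-vote risk.

```python
# Self-learning (self-training) sketch: iteratively pseudo-label unlabeled
# instances whose prediction confidence exceeds a predetermined threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic 3-class data: a small labeled set and a larger unlabeled set.
centers = np.array([[0, 0], [4, 0], [0, 4]])
X = np.vstack([c + rng.normal(size=(60, 2)) for c in centers])
y = np.repeat([0, 1, 2], 60)
labeled = rng.choice(len(X), size=15, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

X_l, y_l = X[labeled], y[labeled]
X_u = X[unlabeled]

threshold = 0.9  # predetermined confidence threshold (illustrative value)
for _ in range(5):  # a few self-learning rounds
    clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)
    proba = clf.predict_proba(X_u)
    confident = proba.max(axis=1) >= threshold
    if not confident.any():
        break
    # Move confidently pseudo-labeled instances into the labeled set.
    X_l = np.vstack([X_l, X_u[confident]])
    y_l = np.concatenate([y_l, proba[confident].argmax(axis=1)])
    X_u = X_u[~confident]

final_clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)
print(final_clf.score(X, y))
```

The risk the paper analyzes is visible here: any mistake made when assigning `proba.argmax` as a label is fed back into the training set as a noisy label, which motivates both the bound-driven threshold choice and the mislabeling error model.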
Related papers
- Reduction-based Pseudo-label Generation for Instance-dependent Partial Label Learning [41.345794038968776]
  We propose to leverage reduction-based pseudo-labels to alleviate the influence of incorrect candidate labels.
  We show that reduction-based pseudo-labels exhibit greater consistency with the Bayes optimal classifier compared to pseudo-labels directly generated from the predictive model.
  arXiv Detail & Related papers (2024-10-28T07:32:20Z)
- Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm [73.94839250910977]
  Noisy multi-label learning has garnered increasing attention due to the challenges posed by collecting large-scale accurate labels.
  The introduction of transition matrices can help model multi-label noise and enable the development of statistically consistent algorithms.
  We propose a novel estimator that leverages label correlations without the need for anchor points or precise fitting of noisy class posteriors.
  arXiv Detail & Related papers (2023-09-22T08:35:38Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
  Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
  This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
  arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
  We propose a label distribution perspective for PU learning in this paper.
  Motivated by this, we propose to pursue the label distribution consistency between predicted and ground-truth label distributions.
  Experiments on three benchmark datasets validate the effectiveness of the proposed method.
  arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Seq-UPS: Sequential Uncertainty-aware Pseudo-label Selection for Semi-Supervised Text Recognition [21.583569162994277]
  One of the most popular SSL approaches is pseudo-labeling (PL).
  PL methods are severely degraded by noise and are prone to overfitting to noisy labels.
  We propose a pseudo-label generation and uncertainty-based data selection framework for text recognition.
  arXiv Detail & Related papers (2022-08-31T02:21:02Z)
- Learning from Multiple Unlabeled Datasets with Partial Risk Regularization [80.54710259664698]
  In this paper, we aim to learn an accurate classifier without any class labels.
  We first derive an unbiased estimator of the classification risk that can be estimated from the given unlabeled sets.
  We then find that the classifier obtained as such tends to cause overfitting as its empirical risks go negative during training.
  Experiments demonstrate that our method effectively mitigates overfitting and outperforms state-of-the-art methods for learning from multiple unlabeled sets.
  arXiv Detail & Related papers (2022-07-04T16:22:44Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
  We investigate single-positive multi-label learning (SPMLL), where each example is annotated with only one relevant label.
  A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
  Experiments on benchmark datasets validate the effectiveness of the proposed method.
  arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Learning with Proper Partial Labels [87.65718705642819]
  Partial-label learning is a kind of weakly supervised learning with inexact labels.
  We show that this proper partial-label learning framework includes many previous partial-label learning settings.
  We then derive a unified unbiased estimator of the classification risk.
  arXiv Detail & Related papers (2021-12-23T01:37:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.