Instance-Dependent Partial Label Learning
- URL: http://arxiv.org/abs/2110.12911v2
- Date: Tue, 26 Oct 2021 02:27:50 GMT
- Title: Instance-Dependent Partial Label Learning
- Authors: Ning Xu, Congyu Qiao, Xin Geng, Min-Ling Zhang
- Abstract summary: Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by the real-valued description degree of each label.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial label learning (PLL) is a typical weakly supervised learning problem,
where each training example is associated with a set of candidate labels among
which only one is true. Most existing PLL approaches assume that the incorrect
labels in each training example are randomly picked as the candidate labels.
However, this assumption is not realistic since the candidate labels are always
instance-dependent. In this paper, we consider instance-dependent PLL and
assume that each example is associated with a latent label distribution
constituted by the real-valued description degree of each label, representing
the degree to which each label describes the feature. An incorrect label with a high degree is more
likely to be annotated as the candidate label. Therefore, the latent label
distribution is the essential labeling information in partially labeled
examples and worth being leveraged for predictive model training. Motivated by
this consideration, we propose a novel PLL method that recovers the label
distribution as a label enhancement (LE) process and trains the predictive
model iteratively in every epoch. Specifically, we assume the true posterior
density of the latent label distribution takes on the variational approximate
Dirichlet density parameterized by an inference model. Then the evidence lower
bound is deduced for optimizing the inference model and the label distributions
generated from the variational posterior are utilized for training the
predictive model. Experiments on benchmark and real-world datasets validate the
effectiveness of the proposed method. Source code is available at
https://github.com/palm-ml/valen.
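The variational recipe described above (a Dirichlet approximate posterior over latent label distributions, optimized through an evidence lower bound whose regularizer is a KL divergence to a Dirichlet prior) can be illustrated with a minimal sketch. This is not the VALEN implementation: `dirichlet_kl` is the standard closed-form KL term between two Dirichlet densities that appears in such an ELBO, and `candidate_masked_mean` is a hypothetical helper showing one way the variational posterior's mean could be restricted to the candidate label set and used as a soft label for training the predictive model.

```python
import math

def digamma(x):
    # Digamma psi(x) via the recurrence psi(x) = psi(x+1) - 1/x
    # plus an asymptotic expansion; adequate for x > 0.
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def dirichlet_kl(alpha, beta):
    # Closed-form KL( Dir(alpha) || Dir(beta) ) -- the regularization
    # term in an ELBO with a Dirichlet variational posterior and prior.
    a0, b0 = sum(alpha), sum(beta)
    kl = math.lgamma(a0) - math.lgamma(b0)
    kl -= sum(math.lgamma(a) for a in alpha)
    kl += sum(math.lgamma(b) for b in beta)
    kl += sum((a - b) * (digamma(a) - digamma(a0))
              for a, b in zip(alpha, beta))
    return kl

def candidate_masked_mean(alpha, candidates):
    # Mean of Dir(alpha), zeroed outside the candidate label set and
    # renormalized: a soft pseudo-label for the predictive model.
    a0 = sum(alpha)
    masked = [a / a0 if i in candidates else 0.0
              for i, a in enumerate(alpha)]
    z = sum(masked)
    return [m / z for m in masked]
```

For example, with inferred concentrations `[4, 2, 1, 1]` and candidate set `{0, 1}`, the masked mean concentrates all mass on the two candidate labels, weighted by the posterior's description degrees; the KL term then pulls the inferred concentrations toward the prior.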
Related papers
- Reduction-based Pseudo-label Generation for Instance-dependent Partial Label Learning [41.345794038968776]
We propose to leverage reduction-based pseudo-labels to alleviate the influence of incorrect candidate labels.
We show that reduction-based pseudo-labels exhibit greater consistency with the Bayes optimal classifier compared to pseudo-labels directly generated from the predictive model.
arXiv Detail & Related papers (2024-10-28T07:32:20Z)
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we propose to pursue the label distribution consistency between predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Label distribution learning via label correlation grid [9.340734188957727]
We propose a Label Correlation Grid (LCG) to model the uncertainty of label relationships.
Our network learns the LCG to accurately estimate the label distribution for each instance.
arXiv Detail & Related papers (2022-10-15T03:58:15Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, termed Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Decomposition-based Generation Process for Instance-Dependent Partial Label Learning [45.133781119468836]
Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
We propose a Maximum A Posteriori (MAP) objective based on an explicitly modeled generation process of candidate labels.
arXiv Detail & Related papers (2022-04-08T05:18:51Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.