Progressive Purification for Instance-Dependent Partial Label Learning
- URL: http://arxiv.org/abs/2206.00830v2
- Date: Wed, 10 May 2023 02:38:00 GMT
- Title: Progressive Purification for Instance-Dependent Partial Label Learning
- Authors: Ning Xu, Biao Liu, Jiaqi Lv, Congyu Qiao, and Xin Geng
- Abstract summary: Partial label learning (PLL) aims to train multiclass classifiers from examples, each annotated with a set of candidate labels among which a fixed but unknown one is correct.
In practice, the candidate labels are always instance-dependent, and there is no theoretical guarantee that a model trained on instance-dependent examples converges to an ideal one.
In this paper, a theoretically grounded and practically effective approach named POP, i.e., PrOgressive Purification, is proposed. Specifically, POP updates the learning model and purifies each candidate label set progressively in every epoch.
- Score: 37.65717805892473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial label learning (PLL) aims to train multiclass classifiers from
examples, each annotated with a set of candidate labels among which a fixed but
unknown one is correct. In the last few years, the
instance-independent generation process of candidate labels has been
extensively studied, on the basis of which many theoretical advances have been
made in PLL. Nevertheless, the candidate labels are always instance-dependent in
practice, and there is no theoretical guarantee that a model trained on
instance-dependent PLL examples converges to an ideal one. In this paper, a
theoretically grounded and practically effective approach named POP, i.e.
PrOgressive Purification for instance-dependent partial label learning, is
proposed. Specifically, POP updates the learning model and purifies each
candidate label set progressively in every epoch. Theoretically, we prove that
POP enlarges the region where the model is reliable at an appropriate rate and,
under mild assumptions, eventually approximates the Bayes optimal classifier.
Technically, POP is flexible with respect to the PLL loss used and can improve
the performance of previous PLL losses in the instance-dependent case.
Experiments on benchmark and real-world datasets validate the effectiveness of
the proposed method.
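To make the procedure concrete, here is a minimal PyTorch-style sketch of one POP-style epoch: train with a candidate-weighted loss, then prune candidate labels whose predicted probability falls far below the top candidate's. The function names, the particular weighted loss, and the threshold schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def pop_epoch(model, optimizer, loader, candidate_mask, epoch, rho=0.95, decay=0.99):
    """One epoch of progressive purification (illustrative sketch).

    candidate_mask: (n_samples, n_classes) float tensor, 1 where a label
    is still in the example's candidate set, 0 otherwise.
    """
    threshold = rho * decay ** epoch  # assumed schedule: pruning tightens over epochs
    for x, idx in loader:             # loader is assumed to yield inputs and sample indices
        mask = candidate_mask[idx]
        logits = model(x)
        probs = F.softmax(logits, dim=1) * mask          # restrict to candidates
        probs = probs / probs.sum(dim=1, keepdim=True)   # renormalize over candidates
        # Candidate-weighted cross entropy (one common PLL loss; POP is loss-agnostic).
        loss = -(probs.detach() * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Purify: drop candidates far below the most confident candidate,
        # keeping at least the top one so no set becomes empty.
        with torch.no_grad():
            top = probs.max(dim=1, keepdim=True).values
            keep = (probs >= (1.0 - threshold) * top).float() * mask
            keep[torch.arange(len(idx)), probs.argmax(dim=1)] = 1.0
            candidate_mask[idx] = keep
```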
Related papers
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is the ground truth; during disambiguation, some samples inevitably end up mislabeled.
To give these mislabeled samples a chance to "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the ground-truth label may lie outside the candidate set, the problem is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to fortify the model against unreliable partial labels.
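For readers unfamiliar with the building block, below is a generic InfoNCE contrastive loss of the kind such representation-learning frameworks build on; it is not URRL's actual objective, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Generic InfoNCE contrastive loss between two augmented views.

    z1, z2: (batch, dim) embeddings of two views of the same inputs;
    matching rows are positives, all other rows serve as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(len(z1), device=z1.device)      # positives on the diagonal
    return F.cross_entropy(logits, targets)
```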
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning [31.535219018410707]
Partial-label learning (PLL) is a weakly supervised learning task where training samples are associated with a set of candidate labels instead of a single ground truth.
We propose SoLar, a novel framework that refines the disambiguated labels so that they match the marginal class prior distribution.
SoLar exhibits substantially superior results on standardized benchmarks compared to previous state-of-the-art methods.
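A minimal sketch of Sinkhorn-style label refinement, assuming a prediction matrix, a candidate mask, and a known class prior; SoLar's actual procedure differs in its details.

```python
import torch

def sinkhorn_refine(probs, candidate_mask, class_prior, n_iters=50, eps=1e-8):
    """Refine pseudo-labels so their class marginals match a given prior
    (illustrative Sinkhorn-Knopp-style iteration, not SoLar's exact method).

    probs: (n, k) model probabilities; candidate_mask: (n, k) 0/1;
    class_prior: (k,) tensor summing to 1.
    """
    q = probs * candidate_mask + eps  # zero out non-candidate labels
    for _ in range(n_iters):
        q = q / q.sum(dim=1, keepdim=True)             # rows: one label per example
        q = q * (class_prior / (q.mean(dim=0) + eps))  # columns: match the class prior
    q = q * candidate_mask
    return q / (q.sum(dim=1, keepdim=True) + eps)      # final row normalization
```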
arXiv Detail & Related papers (2022-09-21T14:00:16Z)
- Decomposition-based Generation Process for Instance-Dependent Partial Label Learning [45.133781119468836]
Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
We propose a Maximum A Posteriori (MAP) approach based on an explicitly modeled generation process of candidate labels.
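As a rough illustration of MAP disambiguation under an explicit generation model: the chosen label maximizes the posterior P(y|x, S), which is proportional to P(y|x)·P(S|x, y), over the candidates. The function and its inputs are hypothetical, not the paper's implementation.

```python
def map_disambiguate(class_probs, gen_probs, candidate_set):
    """Pick the candidate maximizing P(y|x) * P(S|x, y) (illustrative MAP rule).

    class_probs: length-k sequence of model class probabilities P(y|x).
    gen_probs:   length-k sequence where gen_probs[y] models P(S|x, y).
    """
    scores = {y: class_probs[y] * gen_probs[y] for y in candidate_set}
    return max(scores, key=scores.get)
```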
arXiv Detail & Related papers (2022-04-08T05:18:51Z)
- Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider the instance-dependent case and assume that each example is associated with a latent label distribution, in which a real-valued degree for each label indicates how well that label describes the instance.
arXiv Detail & Related papers (2021-10-25T12:50:26Z)
- Provably Consistent Partial-Label Learning [120.4734093544867]
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
In this paper, we propose the first generation model of candidate label sets, and develop two novel methods that are guaranteed to be consistent.
Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two methods.
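One natural uniform generation process, sketched below, draws each candidate set containing the true label (excluding the uninformative full label set) with equal probability; whether this matches the paper's exact model is an assumption here.

```python
import random

def sample_candidate_set(true_label, num_classes):
    """Sample a candidate set uniformly from all proper label sets that
    contain the true label (one reading of a uniform generation model)."""
    while True:
        # Each incorrect label joins independently with probability 1/2;
        # rejecting the full set leaves a uniform distribution over the
        # 2**(num_classes - 1) - 1 admissible sets.
        extras = {c for c in range(num_classes)
                  if c != true_label and random.random() < 0.5}
        candidates = extras | {true_label}
        if len(candidates) < num_classes:
            return sorted(candidates)
```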
arXiv Detail & Related papers (2020-07-17T12:19:16Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel PLL framework that is flexible in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
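In that spirit, a minimal sketch of progressive identification: train with soft weights over the candidates and refresh the weights from the model's own predictions each step. Variable names and the exact update rule are assumptions for illustration, not the paper's method verbatim.

```python
import torch
import torch.nn.functional as F

def proden_style_step(model, optimizer, x, weights, candidate_mask):
    """One update of progressive true-label identification (sketch).

    weights: (batch, k) current soft label weights, each row summing to 1
    over that example's candidate labels.
    """
    logits = model(x)
    loss = -(weights * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Progressively shift weight toward the labels the model finds likely,
    # but never outside the candidate set.
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1) * candidate_mask
        new_weights = probs / probs.sum(dim=1, keepdim=True)
    return new_weights
```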