Provably Consistent Partial-Label Learning
- URL: http://arxiv.org/abs/2007.08929v2
- Date: Fri, 23 Oct 2020 11:22:28 GMT
- Title: Provably Consistent Partial-Label Learning
- Authors: Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An,
Masashi Sugiyama
- Abstract summary: Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
In this paper, we propose the first generation model of candidate label sets, and develop two novel methods that are guaranteed to be consistent.
Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two methods.
- Score: 120.4734093544867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial-label learning (PLL) is a multi-class classification problem, where
each training example is associated with a set of candidate labels. Even though
many practical PLL methods have been proposed in the last two decades, a
theoretical understanding of their consistency has been lacking: none of the
PLL methods proposed so far comes with a generation process of candidate label
sets, so it remains unclear why such a method works on a specific dataset and
when it may fail on a different one. In this paper, we
propose the first generation model of candidate label sets, and develop two
novel PLL methods that are provably consistent: one is risk-consistent and
the other is classifier-consistent. Our methods are
advantageous, since they are compatible with any deep network or stochastic
optimizer. Furthermore, thanks to the generation model, we are able to answer
the two questions above by testing whether the generation model matches the given
candidate label sets. Experiments on benchmark and real-world datasets validate
the effectiveness of the proposed generation model and two PLL methods.
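For concreteness, the paper's uniform generation model of candidate label sets can be written as below; this is a restatement, with the normalizing constant counting the candidate sets that contain the true label (the uninformative full label set is excluded, following the paper's definition):

```latex
% Uniform generation model: given the true label y, every candidate set C
% containing y (other than the full label set) is drawn with equal probability.
P(\bar{Y} = C \mid Y = y) =
\begin{cases}
  \dfrac{1}{2^{k-1}-1} & \text{if } y \in C \text{ and } C \neq \{1,\dots,k\}, \\
  0 & \text{otherwise},
\end{cases}
```

where k is the number of classes. Under a model of this form, the two proposed methods reduce to simple surrogate losses on softmax outputs. The PyTorch sketch below is an illustrative reading of the risk-consistent (RC) and classifier-consistent (CC) objectives, not the authors' released code; the function names, the 0/1 candidate-mask encoding, and the clamping constant are assumptions:

```python
import torch
import torch.nn.functional as F

def cc_loss(logits, cand_mask):
    # Classifier-consistent style loss: negative log of the total softmax
    # mass the model assigns to the candidate set.
    probs = F.softmax(logits, dim=1)            # (batch, k)
    cand_prob = (probs * cand_mask).sum(dim=1)  # (batch,)
    return -torch.log(cand_prob.clamp_min(1e-12)).mean()

def rc_loss(logits, cand_mask):
    # Risk-consistent style loss: cross entropy over the candidate labels,
    # importance-reweighted by the model's own normalized predictions.
    probs = F.softmax(logits, dim=1)
    cand_probs = probs * cand_mask
    weights = cand_probs / cand_probs.sum(dim=1, keepdim=True).clamp_min(1e-12)
    return -(weights.detach() * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

Both losses take raw logits and a float mask of shape (batch, k) with ones at candidate positions, so they drop into any deep network trained with a stochastic optimizer, which is the compatibility the abstract emphasizes. Detaching the RC weights treats them as fixed within each gradient step; how often to refresh them is a design choice.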
Related papers
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z) - Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, existing works usually do not fully exploit the information carried by the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship with the traditional classifier (a generic sketch of the complementary-label idea appears after this list).
arXiv Detail & Related papers (2023-05-17T02:13:23Z) - Meta Objective Guided Disambiguation for Partial Label Learning [44.05801303440139]
We propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD).
MoGD aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set.
The proposed method can be easily implemented by using various deep networks with the ordinary SGD.
arXiv Detail & Related papers (2022-08-26T06:48:01Z) - Progressive Purification for Instance-Dependent Partial Label Learning [37.65717805892473]
Partial label learning (PLL) aims to train multi-class classifiers from examples, each annotated with a set of candidate labels in which a fixed but unknown candidate label is correct.
The candidate labels are always instance-dependent in practice and there is no theoretical guarantee that the model trained on the instance-dependent examples can converge to an ideal one.
In this paper, a theoretically grounded and practically effective approach named POP (PrOgressive Purification) is proposed. Specifically, POP updates the learning model and purifies each candidate label set progressively in every epoch; a minimal sketch of such a purification loop appears after this list.
arXiv Detail & Related papers (2022-06-02T02:07:12Z) - Decomposition-based Generation Process for Instance-Dependent Partial Label Learning [45.133781119468836]
Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
We propose a Maximum A Posteriori (MAP) method based on an explicitly modeled generation process of candidate labels.
arXiv Detail & Related papers (2022-04-08T05:18:51Z) - Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by a real-valued degree for each label.
arXiv Detail & Related papers (2021-10-25T12:50:26Z) - Few-Shot Partial-Label Learning [25.609766770479265]
Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples.
Existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner.
In this paper, we introduce an approach called FsPLL (Few-shot PLL).
arXiv Detail & Related papers (2021-06-02T07:03:54Z) - Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework of PLL with flexibility on the model and optimization algorithm (see the purification sketch after this list).
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
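Two recurring mechanisms in the list above lend themselves to short illustrations. First, the complementary-classifier entry rests on the fact that non-candidate labels are certainly not the true label, so a classifier can be penalized for putting any probability mass on them. The block below is a generic complementary-label loss in that spirit, not the cited paper's exact method; the names and the loss form are assumptions:

```python
import torch
import torch.nn.functional as F

def complementary_loss(logits, cand_mask):
    # Non-candidate labels are known to be wrong: penalize the softmax mass
    # assigned to them via -log(1 - p_j) for every non-candidate label j.
    probs = F.softmax(logits, dim=1)
    non_cand = 1.0 - cand_mask
    per_label = -torch.log((1.0 - probs).clamp_min(1e-12)) * non_cand
    return per_label.sum(dim=1).mean()
```

Second, POP and the progressive-identification framework share a loop in which the current model re-scores each candidate set between epochs, shrinking it toward a single identified label. The sketch below is an illustrative reading of that shared idea; the relative threshold, the epoch-boundary schedule, and the keep-the-top-label safety rule are assumptions rather than either paper's specification:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def purify_candidates(model, x, cand_mask, keep_ratio=0.1):
    # One purification step: drop candidate labels whose predicted probability
    # falls far below the best candidate's, but never empty a candidate set.
    probs = F.softmax(model(x), dim=1) * cand_mask            # mass on candidates only
    best = probs.max(dim=1, keepdim=True).values              # top candidate per example
    keep = (probs >= keep_ratio * best).float() * cand_mask   # relative threshold
    keep[torch.arange(x.size(0)), probs.argmax(dim=1)] = 1.0  # always keep the top label
    return keep
```

In a full training run, ordinary stochastic gradient updates of the model would alternate with calls to purify_candidates at epoch boundaries, which is what makes the purification progressive.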
This list is automatically generated from the titles and abstracts of the papers on this site.