Complementary Classifier Induced Partial Label Learning
- URL: http://arxiv.org/abs/2305.09897v1
- Date: Wed, 17 May 2023 02:13:23 GMT
- Title: Complementary Classifier Induced Partial Label Learning
- Authors: Yuheng Jia, Chongjie Si, Min-Ling Zhang
- Abstract summary: In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
- Score: 54.61668156386079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In partial label learning (PLL), each training sample is associated with a
set of candidate labels, among which only one is valid. The core of PLL is to
disambiguate the candidate labels to get the ground-truth one. In
disambiguation, the existing works usually do not fully investigate the
effectiveness of the non-candidate label set (a.k.a. complementary labels),
which accurately indicates a set of labels that do not belong to a sample. In
this paper, we use the non-candidate labels to induce a complementary
classifier, which naturally forms an adversarial relationship against the
traditional PLL classifier, to eliminate the false-positive labels in the
candidate label set. Besides, we assume the feature space and the label space
share the same local topological structure captured by a dynamic graph, and use
it to assist disambiguation. Extensive experimental results validate the
superiority of the proposed approach against state-of-the-art PLL methods on 4
controlled UCI data sets and 6 real-world data sets, and reveal the usefulness
of complementary learning in PLL. The code has been released at
https://github.com/Chongjie-Si/PL-CL.
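Since the abstract describes the method only at a high level, the following is a rough, hypothetical rendering of the loss terms it suggests. It is not the authors' released implementation (see the repository above for that); the two-head model, every name, and every weighting below are assumptions.

```python
import torch
import torch.nn.functional as F

def pl_cl_losses(logits, logits_comp, candidate_mask, graph_w=None,
                 lam_adv=1.0, lam_graph=1.0):
    """Hypothetical sketch of the loss terms suggested by the abstract.

    logits, logits_comp: (n, c) outputs of the ordinary PLL classifier
        and the complementary classifier (a two-head model is assumed).
    candidate_mask: (n, c) binary mask, 1 on candidate labels.
    graph_w: optional (n, n) affinity matrix from a dynamic graph over
        the feature space.
    """
    p = F.softmax(logits, dim=1)       # ordinary classifier posterior
    q = F.softmax(logits_comp, dim=1)  # complementary classifier posterior

    # Ordinary head: concentrate probability mass on the candidate set
    # (an averaging-style disambiguation over the candidates).
    cand = (p * candidate_mask).sum(1).clamp_min(1e-12)
    loss_cand = -cand.log().mean()

    # Complementary head: non-candidate labels are known negatives, so
    # the complementary classifier should place its mass on them.
    non_cand = (q * (1 - candidate_mask)).sum(1).clamp_min(1e-12)
    loss_comp = -non_cand.log().mean()

    # Adversarial coupling: classes favoured by the complementary head
    # are pushed down in p, pruning false positives in the candidate set.
    loss_adv = (p * q).sum(1).mean()

    loss = loss_cand + loss_comp + lam_adv * loss_adv

    # Graph consistency: label posteriors of feature-space neighbours
    # should agree (the shared local topological structure).
    if graph_w is not None:
        diff = torch.cdist(p, p).pow(2)   # pairwise squared distances
        loss = loss + lam_graph * (graph_w * diff).mean()

    return loss
```

The adversarial term encodes the core intuition of the paper: a label that the complementary classifier considers likely (i.e., likely not the ground truth) should receive little mass from the ordinary classifier.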
Related papers
- Exploiting Conjugate Label Information for Multi-Instance Partial-Label Learning [61.00359941983515]
Multi-instance partial-label learning (MIPL) addresses scenarios where each training sample is represented as a multi-instance bag associated with a candidate label set containing one true label and several false positives.
The proposed method, ELIMIPL, exploits the conjugate label information to improve disambiguation performance.
arXiv Detail & Related papers (2024-08-26T15:49:31Z) - Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (a partial label).
Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels to not contain the true label, enhancing the practicality of the problem.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm (a rough sketch follows this entry).
arXiv Detail & Related papers (2024-02-07T13:32:47Z) - Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is the ground truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z) - Complementary to Multiple Labels: A Correlation-Aware Correction
Approach [65.59584909436259]
We show theoretically how the estimated transition matrix in multi-class CLL could be distorted in multi-labeled cases.
We propose a two-step method to estimate the transition matrix from candidate labels.
arXiv Detail & Related papers (2023-02-25T04:48:48Z) - Unreliable Partial Label Learning with Recursive Separation [44.901941653899264]
Unreliable Partial Label Learning (UPLL) is a setting in which the true label may not be in the candidate label set.
We propose a two-stage framework named Unreliable Partial Label Learning with Recursive Separation (UPLLRS).
Experimental results demonstrate that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-02-20T10:39:31Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning.
Motivated by this perspective, we pursue consistency between the predicted and ground-truth label distributions (a rough sketch follows this entry).
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Decomposition-based Generation Process for Instance-Dependent Partial
- Decomposition-based Generation Process for Instance-Dependent Partial Label Learning [45.133781119468836]
Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
We propose a Maximum A Posteriori (MAP) approach based on an explicitly modeled generation process of candidate labels.
arXiv Detail & Related papers (2022-04-08T05:18:51Z) - Instance-Dependent Partial Label Learning [69.49681837908511]
Partial label learning is a typical weakly supervised learning problem.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels.
In this paper, we consider instance-dependent PLL and assume that each example is associated with a latent label distribution constituted by real-valued degrees to which each label describes the feature.
arXiv Detail & Related papers (2021-10-25T12:50:26Z)