Unreliable Partial Label Learning with Recursive Separation
- URL: http://arxiv.org/abs/2302.09891v2
- Date: Tue, 29 Aug 2023 14:10:46 GMT
- Title: Unreliable Partial Label Learning with Recursive Separation
- Authors: Yu Shi, Ning Xu, Hua Yuan and Xin Geng
- Abstract summary: A generalized PLL setting named Unreliable Partial Label Learning (UPLL) is proposed, in which the true label may not be in the candidate label set.
We propose a two-stage framework named Unreliable Partial Label Learning with Recursive Separation (UPLLRS).
Experimental results demonstrate state-of-the-art performance.
- Score: 44.901941653899264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial label learning (PLL) is a typical weakly supervised learning problem
in which each instance is associated with a candidate label set, among
which only one is true. However, the assumption that the ground-truth label is
always among the candidate label set would be unrealistic, as the reliability
of the candidate label sets in real-world applications cannot be guaranteed by
annotators. Therefore, a generalized PLL named Unreliable Partial Label
Learning (UPLL) is proposed, in which the true label may not be in the
candidate label set. Due to the challenges posed by unreliable labeling,
previous PLL methods experience a marked decline in performance when
applied to UPLL. To address this issue, we propose a two-stage framework named
Unreliable Partial Label Learning with Recursive Separation (UPLLRS). In the
first stage, the self-adaptive recursive separation strategy is proposed to
separate the training set into a reliable subset and an unreliable subset. In
the second stage, a disambiguation strategy is employed to progressively
identify the ground-truth labels in the reliable subset. Simultaneously,
semi-supervised learning methods are adopted to extract valuable information
from the unreliable subset. Experimental results demonstrate that our method
achieves state-of-the-art performance, particularly under high unreliability.
Code and supplementary materials are available at
https://github.com/dhiyu/UPLLRS.
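The abstract describes the two-stage pipeline only at a high level. Below is a minimal sketch of how such a pipeline could be organized, assuming a loss-quantile separation rule and plain pseudo-labeling as the semi-supervised component; it is not the authors' implementation (see the linked repository for that), and all function names and thresholds here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def upllrs_sketch(X, cand, n_rounds=5, keep_ratio=0.6):
    """X: (n, d) features; cand: (n, k) boolean candidate-label mask."""
    n, k = cand.shape
    rng = np.random.default_rng(0)
    # Initialize pseudo-labels with a random pick from each candidate set.
    y = np.array([rng.choice(np.flatnonzero(c)) for c in cand])
    reliable = np.ones(n, dtype=bool)

    for _ in range(n_rounds):
        model = LogisticRegression(max_iter=200).fit(X[reliable], y[reliable])
        proba = np.zeros((n, k))
        proba[:, model.classes_] = model.predict_proba(X)

        # Stage 1: separation -- samples with low loss on their current
        # pseudo-label form the reliable subset (simplified here to a
        # fixed per-round loss quantile).
        loss = -np.log(proba[np.arange(n), y] + 1e-12)
        reliable = loss <= np.quantile(loss, keep_ratio)

        # Stage 2: progressive disambiguation -- reliable samples move to
        # their most confident label *within* the candidate set.
        y[reliable] = np.where(cand[reliable], proba[reliable], 0.0).argmax(axis=1)

        # Semi-supervised stand-in: unreliable samples (whose candidate
        # set may not contain the truth) get plain pseudo-labels.
        y[~reliable] = proba[~reliable].argmax(axis=1)
    return model, y
```

In this sketch the retained fraction is held constant across rounds; the paper's separation strategy is self-adaptive, so that fraction would itself be adjusted recursively rather than fixed.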
Related papers
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
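To make the decomposition concrete: a complementary label states one class an instance does not belong to, so for each class k the complementarily-k-labeled samples are known negatives and the rest are unlabeled with respect to k. A minimal sketch of this view (the paper's consistent, risk-corrected estimator is more involved than this bookkeeping):

```python
import numpy as np

def to_negative_unlabeled(comp_labels, n_classes):
    """View a complementary-label dataset as one negative-unlabeled binary
    problem per class. comp_labels[i] is a class instance i does NOT have.
    Returns {class: (indices of known negatives, indices of unlabeled)}.
    """
    comp_labels = np.asarray(comp_labels)
    idx = np.arange(len(comp_labels))
    return {
        k: (idx[comp_labels == k], idx[comp_labels != k])
        for k in range(n_classes)
    }
```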
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the candidate label set may not contain the ground-truth label, the problem is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to fortify the model against unreliable partial labels.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
During disambiguation, existing works usually do not fully exploit the information in the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
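A rough sketch of this adversarial pairing (the fusion rule below is an assumed illustration, not the paper's training objective): an ordinary classifier scores labels up, while a complementary classifier trained on the always-correct non-candidate labels scores them down.

```python
import numpy as np

def disambiguate(p, q, cand):
    """p: (n, k) ordinary class probabilities; q: (n, k) complementary
    scores, q[i, j] ~ P(label j is NOT true for instance i);
    cand: (n, k) boolean candidate mask.
    """
    score = p * (1.0 - q)               # adversarial fusion (illustrative)
    score = np.where(cand, score, 0.0)  # non-candidates are known wrong
    return score.argmax(axis=1)
```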
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
- Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose Ambiguity-Resistant Semi-supervised Learning (ARSL) for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
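As a sketch of the joint-confidence idea (the multiplicative form and threshold are assumptions, not ARSL's exact formulation), a pseudo box can be scored by combining its classification score with a predicted localization quality:

```python
import numpy as np

def joint_confidence(cls_score, loc_quality):
    """Combine classification score and predicted localization quality
    (e.g. an IoU estimate) into one pseudo-label confidence."""
    return np.asarray(cls_score) * np.asarray(loc_quality)

# Keep only pseudo boxes whose joint confidence clears a threshold.
keep = joint_confidence([0.90, 0.70, 0.95], [0.80, 0.40, 0.90]) > 0.5
```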
arXiv Detail & Related papers (2023-03-27T07:46:58Z)
- Meta Objective Guided Disambiguation for Partial Label Learning [44.05801303440139]
We propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD).
MoGD aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set.
The proposed method can be easily implemented with various deep networks and ordinary SGD.
arXiv Detail & Related papers (2022-08-26T06:48:01Z)
- Decomposition-based Generation Process for Instance-Dependent Partial Label Learning [45.133781119468836]
Partial label learning (PLL) is a typical weakly supervised learning problem, where each training example is associated with a set of candidate labels among which only one is true.
Most existing approaches assume that the incorrect labels in each training example are randomly picked as the candidate labels and model the generation process of the candidate labels in a simple way.
We propose a Maximum A Posteriori (MAP) approach based on an explicitly modeled generation process of candidate labels.
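To illustrate what explicitly modeling the generation process can mean (a hypothetical instance-dependent model, not the paper's exact formulation): each incorrect label enters an instance's candidate set with a probability tied to that instance, e.g. the label's class posterior, rather than uniformly at random.

```python
import numpy as np

def gen_candidates(posterior, true_y, seed=0):
    """Instance-dependent candidate-set generation (illustrative model):
    incorrect label j joins instance i's candidate set with probability
    posterior[i, j], so confusable labels co-occur more often.
    posterior: (n, k) class posteriors; true_y: (n,) ground-truth labels.
    """
    rng = np.random.default_rng(seed)
    n, k = posterior.shape
    cand = rng.random((n, k)) < posterior   # instance-dependent flips
    cand[np.arange(n), true_y] = True       # the true label is always kept
    return cand
```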
arXiv Detail & Related papers (2022-04-08T05:18:51Z)