Robust Representation Learning for Unreliable Partial Label Learning
- URL: http://arxiv.org/abs/2308.16718v1
- Date: Thu, 31 Aug 2023 13:37:28 GMT
- Title: Robust Representation Learning for Unreliable Partial Label Learning
- Authors: Yu Shi, Dong-Dong Wu, Xin Geng, Min-Ling Zhang
- Abstract summary: Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
This is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to effectively fortify the model against unreliable partial labels.
- Score: 86.909511808373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial Label Learning (PLL) is a type of weakly supervised learning where
each training instance is assigned a set of candidate labels, but only one
label is the ground-truth. However, this idealistic assumption may not always
hold due to potential annotation inaccuracies, meaning the ground-truth may not
be present in the candidate label set. This is known as Unreliable Partial
Label Learning (UPLL), which introduces additional complexity due to the
inherent unreliability and ambiguity of partial labels and often results in
sub-optimal performance with existing methods. To address this challenge, we
propose the Unreliability-Robust Representation Learning framework (URRL) that
leverages unreliability-robust contrastive learning to effectively fortify the
model against unreliable partial labels. Concurrently, we propose a dual
strategy that combines KNN-based candidate label set correction and
consistency-regularization-based label disambiguation to refine label quality
and enhance the ability of representation learning within the URRL framework.
Extensive experiments demonstrate that the proposed method outperforms
state-of-the-art PLL methods on various datasets with diverse degrees of
unreliability and ambiguity. Furthermore, we provide a theoretical analysis of
our approach from the perspective of the expectation maximization (EM)
algorithm. Upon acceptance, we pledge to make the code publicly accessible.
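The abstract describes the dual strategy only at a high level, and the authors' code is not yet released. The following is a minimal sketch of what KNN-based candidate label set correction and consistency-regularization-based disambiguation can look like; every function name, threshold, and design choice here is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

def knn_correct_candidates(features, candidates, k=10, vote_threshold=0.5):
    """Hypothetical KNN-based candidate label set correction: a label that
    is strongly supported by an instance's nearest neighbors may re-enter
    its candidate set, so a ground-truth label missing from an unreliable
    candidate set has a chance to be recovered."""
    # Cosine similarities in the learned representation space.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # never count an instance as its own neighbor

    corrected = candidates.astype(float)
    for i in range(features.shape[0]):
        neighbors = np.argsort(sims[i])[-k:]        # k nearest neighbors
        votes = candidates[neighbors].mean(axis=0)  # per-label neighbor support
        corrected[i] = np.maximum(corrected[i],
                                  (votes > vote_threshold).astype(float))
    return corrected

def disambiguate(probs_view1, probs_view2, candidates):
    """Hypothetical consistency-based disambiguation: average predictions
    from two augmented views, keep only candidate labels, renormalize."""
    p = 0.5 * (probs_view1 + probs_view2) * candidates
    return p / np.clip(p.sum(axis=1, keepdims=True), 1e-12, None)
```

A training loop might, for example, call knn_correct_candidates on the encoder's features once per epoch and feed the disambiguated soft labels back as targets; how these steps couple to the contrastive objective is specific to the paper and not reproduced here.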
Related papers
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
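That decomposition can be made concrete: for each class, instances carrying that complementary label are known negatives, and all other instances are unlabeled with respect to it. A minimal sketch under that reading (names are hypothetical, not the paper's code):

```python
import numpy as np

def to_negative_unlabeled(comp_labels, num_classes):
    """Rewrite complementary-label data as one negative-unlabeled (NU)
    binary problem per class. comp_labels[i] is a class the i-th
    instance is known NOT to belong to."""
    problems = []
    for k in range(num_classes):
        is_negative = (comp_labels == k)  # certainly not class k
        # Remaining instances are unlabeled with respect to class k:
        # they may or may not belong to it.
        problems.append({"negative": np.where(is_negative)[0],
                         "unlabeled": np.where(~is_negative)[0]})
    return problems
```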
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
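The snippet leaves the mechanism open. One common way to make pseudo-labeling class-distribution-aware, offered here only as a hedged illustration and not necessarily CAP's actual rule, is to pick a separate threshold per class so that each class's pseudo-positive rate matches its estimated prior:

```python
import numpy as np

def class_aware_thresholds(probs, class_priors):
    """Hypothetical class-aware thresholding: choose one threshold per
    class so the fraction of pseudo-positives for that class matches
    its estimated prior (not necessarily the paper's exact rule)."""
    n, c = probs.shape
    thresholds = np.empty(c)
    for k in range(c):
        # Keep roughly the top (prior_k * n) scores as pseudo-positives.
        thresholds[k] = np.quantile(probs[:, k], 1.0 - class_priors[k])
    return thresholds  # pseudo-label class k where probs[:, k] >= thresholds[k]
```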
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Unreliable Partial Label Learning with Recursive Separation [44.901941653899264]
Unreliable Partial Label Learning (UPLL), in which the true label may not be in the candidate label set, is proposed.
We propose a two-stage framework named Unreliable Partial Label Learning with Recursive Separation (UPLLRS).
Our method demonstrates state-of-the-art performance as evidenced by experimental results.
arXiv Detail & Related papers (2023-02-20T10:39:31Z)
- Meta Objective Guided Disambiguation for Partial Label Learning [44.05801303440139]
We propose a novel framework for partial label learning with meta objective guided disambiguation (MoGD).
MoGD aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set.
The proposed method can be easily implemented with various deep networks and ordinary SGD.
arXiv Detail & Related papers (2022-08-26T06:48:01Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement (SMILE), is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.