PiCO: Contrastive Label Disambiguation for Partial Label Learning
- URL: http://arxiv.org/abs/2201.08984v1
- Date: Sat, 22 Jan 2022 07:48:41 GMT
- Title: PiCO: Contrastive Label Disambiguation for Partial Label Learning
- Authors: Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen,
Junbo Zhao
- Abstract summary: Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set.
In this work, we bridge the gap by addressing two key research challenges in representation learning and label disambiguation.
Our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation.
- Score: 37.91710419258801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Partial label learning (PLL) is an important problem that allows each
training example to be labeled with a coarse candidate set, which well suits
many real-world data annotation scenarios with label ambiguity. Despite the
promise, the performance of PLL often lags behind the supervised counterpart.
In this work, we bridge the gap by addressing two key research challenges in
PLL -- representation learning and label disambiguation -- in one coherent
framework. Specifically, our proposed framework PiCO consists of a contrastive
learning module along with a novel class prototype-based label disambiguation
algorithm. PiCO produces closely aligned representations for examples from the
same classes and facilitates label disambiguation. Theoretically, we show that
these two components are mutually beneficial, and can be rigorously justified
from an expectation-maximization (EM) algorithm perspective. Extensive
experiments demonstrate that PiCO significantly outperforms the current
state-of-the-art approaches in PLL and even achieves comparable results to
fully supervised learning. Code and data available:
https://github.com/hbzju/PiCO.
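The interplay between contrastive embeddings and prototype-based disambiguation described in the abstract can be sketched as follows. This is a minimal, illustrative NumPy version under assumed details (cosine similarity to class prototypes, a softmax temperature, momentum prototype updates); function names and hyperparameters are hypothetical, not the actual PiCO implementation.

```python
import numpy as np

def update_pseudo_labels(embeddings, candidate_masks, prototypes, temperature=0.07):
    """Prototype-based label disambiguation (illustrative sketch).

    Each example's pseudo-label distribution is re-estimated from the
    similarity between its embedding and the class prototypes,
    restricted to its candidate label set.
    """
    # Cosine similarity between L2-normalized embeddings and prototypes.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = emb @ protos.T / temperature          # (n_examples, n_classes)

    # Mask out non-candidate classes before the softmax; exp(-inf) -> 0.
    logits = np.where(candidate_masks, logits, -np.inf)
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def update_prototypes(prototypes, embeddings, pseudo_labels, momentum=0.99):
    """Moving-average prototype update using current hard pseudo-labels."""
    new_protos = prototypes.copy()
    hard = pseudo_labels.argmax(axis=1)
    for c in range(prototypes.shape[0]):
        members = embeddings[hard == c]
        if len(members):
            new_protos[c] = momentum * prototypes[c] + (1 - momentum) * members.mean(axis=0)
    return new_protos
```

Loosely, re-estimating pseudo-labels from the prototypes plays the role of an E-step and refitting the prototypes from the pseudo-labels an M-step, which matches the EM perspective the abstract alludes to.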
Related papers
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
The setting where the candidate set itself may be unreliable is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL) that leverages unreliability-robust contrastive learning to help the model fortify against unreliable partial labels effectively.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
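One simple way to exploit non-candidate labels, in the spirit of the entry above, is a loss that penalizes any probability mass placed on labels known to be wrong. This sketch is an assumption-laden illustration, not necessarily the paper's exact formulation; the function name is hypothetical.

```python
import numpy as np

def complementary_loss(logits, candidate_masks, eps=1e-12):
    """Illustrative complementary loss on non-candidate labels.

    Labels outside the candidate set are guaranteed incorrect, so we
    penalize the probability the classifier assigns to them:
    loss = -mean over non-candidate entries of log(1 - p_k).
    """
    # Numerically stable softmax over the class logits.
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    # Gather probabilities of all non-candidate (known-wrong) labels.
    non_cand = ~candidate_masks
    return -np.mean(np.log(1.0 - probs[non_cand] + eps))
```

Minimizing this term pushes mass away from known-wrong classes, acting adversarially to a traditional classifier trained on the candidate set.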
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
- Towards Effective Visual Representations for Partial-Label Learning [49.91355691337053]
Under partial-label learning (PLL), for each training instance, only a set of ambiguous labels containing the unknown true label is accessible.
Without access to true labels, positive points are predicted using pseudo-labels that are inherently noisy, and negative points often require large batches or momentum encoders.
In this paper, we rethink the state-of-the-art contrastive method PiCO [PiPi24] and show that it leaves significant scope for improvement in representation learning.
arXiv Detail & Related papers (2023-05-10T12:01:11Z)
- SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning [31.535219018410707]
Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of a single ground truth.
We propose SoLar, a novel framework that refines the disambiguated labels towards matching the marginal class prior distribution.
SoLar exhibits substantially superior results on standardized benchmarks compared to the previous state-of-the-art methods.
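The Sinkhorn-style refinement toward a marginal class prior can be sketched with alternating row/column rescaling, in the spirit of Sinkhorn-Knopp matrix balancing. Details such as the iteration count and exact objective are assumptions for illustration, not SoLar's actual algorithm.

```python
import numpy as np

def sinkhorn_refine(probs, class_prior, n_iters=50):
    """Sinkhorn-style label refinement (illustrative sketch).

    Alternately rescales the pseudo-label matrix so that each column
    sums to the desired marginal class prior (times the batch size)
    and each row sums to 1 (one unit of label mass per example).
    """
    q = probs.astype(float).copy()
    n = q.shape[0]
    for _ in range(n_iters):
        # Column step: match the scaled marginal class prior.
        q *= (class_prior * n) / q.sum(axis=0, keepdims=True)
        # Row step: one unit of label mass per example.
        q /= q.sum(axis=1, keepdims=True)
    return q
```

For imbalanced PLL this pulls the aggregate pseudo-label distribution toward the class prior instead of letting majority classes absorb all ambiguous mass.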
arXiv Detail & Related papers (2022-09-21T14:00:16Z)
- Few-Shot Partial-Label Learning [25.609766770479265]
Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples.
Existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner.
In this paper, we introduce an approach called FsPLL (Few-shot Partial-Label Learning).
arXiv Detail & Related papers (2021-06-02T07:03:54Z)
- Provably Consistent Partial-Label Learning [120.4734093544867]
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
In this paper, we propose the first generation model of candidate label sets, and develop two novel methods that are guaranteed to be consistent.
Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two methods.
arXiv Detail & Related papers (2020-07-17T12:19:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.