Towards Effective Visual Representations for Partial-Label Learning
- URL: http://arxiv.org/abs/2305.06080v1
- Date: Wed, 10 May 2023 12:01:11 GMT
- Title: Towards Effective Visual Representations for Partial-Label Learning
- Authors: Shiyu Xia, Jiaqi Lv, Ning Xu, Gang Niu, Xin Geng
- Abstract summary: Under partial-label learning (PLL), for each training instance, only a set of ambiguous labels containing the unknown true label is accessible.
Without access to true labels, positive points are predicted using pseudo-labels that are inherently noisy, and negative points often require large batches or momentum encoders.
In this paper, we rethink a state-of-the-art contrastive PLL method, PiCO [24], which demonstrates significant scope for improvement in representation learning.
- Score: 49.91355691337053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under partial-label learning (PLL) where, for each training instance, only a
set of ambiguous candidate labels containing the unknown true label is
accessible, contrastive learning has recently boosted the performance of PLL on
vision tasks, attributed to representations learned by contrasting the
same/different classes of entities. Without access to true labels, positive
points are predicted using pseudo-labels that are inherently noisy, and
negative points often require large batches or momentum encoders, resulting in
unreliable similarity information and a high computational overhead. In this
paper, we rethink a state-of-the-art contrastive PLL method PiCO[24], inspiring
the design of a simple framework termed PaPi (Partial-label learning with a
guided Prototypical classifier), which demonstrates significant scope for
improvement in representation learning, thus contributing to label
disambiguation. PaPi guides the optimization of a prototypical classifier by a
linear classifier with which they share the same feature encoder, thus
explicitly encouraging the representation to reflect visual similarity between
categories. It is also technically appealing, as PaPi requires only a few
components in PiCO with the opposite direction of guidance, and directly
eliminates the contrastive learning module that would introduce noise and
consume computational resources. We empirically demonstrate that PaPi
significantly outperforms other PLL methods on various image classification
tasks.
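The abstract describes PaPi's core mechanism: a prototypical classifier and a linear classifier share one feature encoder, and the linear classifier's prediction guides the prototype-based one. The following is only a minimal numpy sketch of that idea under my own assumptions (function names, the cosine-similarity prototypical head, and the cross-entropy alignment loss are all hypothetical, not the authors' implementation):

```python
import numpy as np

def softmax(z, tau=1.0):
    """Numerically stable softmax with temperature tau."""
    z = z / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def papi_alignment_loss(features, prototypes, linear_logits, tau=0.1):
    """Hypothetical sketch: cross-entropy that aligns the prototypical
    classifier's similarity-based prediction with the linear classifier's
    prediction; both heads share the same feature encoder."""
    # Cosine similarity between L2-normalised features and class prototypes
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    proto_probs = softmax(f @ p.T, tau)   # prototypical prediction
    target = softmax(linear_logits)       # guidance from the linear head
    return -(target * np.log(proto_probs + 1e-12)).sum(axis=1).mean()
```

Note the direction of guidance: here the linear classifier is the teacher, which (per the abstract) is the opposite of PiCO, and no contrastive module is needed.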
Related papers
- Negative Prototypes Guided Contrastive Learning for WSOD [8.102080369924911]
Weakly Supervised Object Detection (WSOD) with only image-level annotation has recently attracted wide attention.
We propose the Negative Prototypes Guided Contrastive learning architecture.
Our proposed method achieves the state-of-the-art performance.
arXiv Detail & Related papers (2024-06-04T08:16:26Z)
- Pseudo-labelling meets Label Smoothing for Noisy Partial Label Learning [8.387189407144403]
Partial label learning (PLL) is a weakly-supervised learning paradigm where each training instance is paired with a set of candidate labels (a partial label).
Noisy PLL (NPLL) relaxes this constraint by allowing some partial labels to not contain the true label, enhancing the practicality of the problem.
We present a minimalistic framework that initially assigns pseudo-labels to images by exploiting the noisy partial labels through a weighted nearest neighbour algorithm.
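The blurb above mentions assigning pseudo-labels via a weighted nearest-neighbour vote over noisy partial labels. A minimal numpy sketch of that general idea (my own hypothetical function and weighting scheme, not the paper's actual algorithm):

```python
import numpy as np

def knn_pseudo_labels(features, candidate_masks, k=2, tau=0.1):
    """Hypothetical sketch: assign each sample a pseudo-label by a
    similarity-weighted vote of its k nearest neighbours, restricted
    to the sample's own candidate-label set."""
    n, num_classes = candidate_masks.shape
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)        # exclude self from neighbours
    pseudo = np.zeros(n, dtype=int)
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]    # k most similar samples
        weights = np.exp(sim[i, nbrs] / tau)
        votes = weights @ candidate_masks[nbrs]  # neighbours vote with their candidate sets
        votes = np.where(candidate_masks[i] > 0, votes, -np.inf)  # keep own candidates only
        pseudo[i] = int(np.argmax(votes))
    return pseudo
```

In the noisy setting the restriction to the sample's own candidate set would presumably be softened, since the true label may lie outside it.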
arXiv Detail & Related papers (2024-02-07T13:32:47Z)
- Appeal: Allow Mislabeled Samples the Chance to be Rectified in Partial Label Learning [55.4510979153023]
In partial label learning (PLL), each instance is associated with a set of candidate labels among which only one is ground-truth.
To help these mislabeled samples "appeal," we propose the first appeal-based framework.
arXiv Detail & Related papers (2023-12-18T09:09:52Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
The setting in which the candidate set may not contain the ground-truth is known as Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL) that leverages unreliability-robust contrastive learning to help the model fortify against unreliable partial labels effectively.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- DualCoOp++: Fast and Effective Adaptation to Multi-Label Recognition with Limited Annotations [79.433122872973]
Multi-label image recognition in the low-label regime is a task of great challenge and practical significance.
We leverage the powerful alignment between textual and visual features pretrained with millions of auxiliary image-text pairs.
We introduce an efficient and effective framework called Evidence-guided Dual Context Optimization (DualCoOp++)
arXiv Detail & Related papers (2023-08-03T17:33:20Z)
- Semantic-Aware Dual Contrastive Learning for Multi-label Image Classification [8.387933969327852]
We propose a novel semantic-aware dual contrastive learning framework that incorporates sample-to-sample contrastive learning.
Specifically, we leverage semantic-aware representation learning to extract category-related local discriminative features.
Our proposed method is effective and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2023-07-19T01:57:31Z)
- Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
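The key observation here is that in PLL every non-candidate label is guaranteed wrong. One simple way to exploit this (a hypothetical numpy sketch of a complementary-style loss, not the paper's actual objective) is to penalise any probability mass the classifier places outside the candidate set:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def complementary_loss(logits, candidate_masks):
    """Hypothetical sketch: non-candidate labels are guaranteed wrong,
    so maximise the log-probability of the candidate set as a whole,
    which pushes mass off the complementary (non-candidate) labels."""
    probs = softmax(logits)
    cand_mass = (probs * candidate_masks).sum(axis=1)
    return -np.log(cand_mass + 1e-12).mean()
```

The adversarial pairing described in the abstract would train this complementary view against a conventional classifier; the sketch shows only the complementary side.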
arXiv Detail & Related papers (2023-05-17T02:13:23Z)
- PiCO: Contrastive Label Disambiguation for Partial Label Learning [37.91710419258801]
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set.
In this work, we bridge the gap by addressing two key research challenges in representation learning and label disambiguation.
Our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation.
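PiCO's prototype-based disambiguation can be caricatured in a few lines: each sample's label is resolved to the candidate class whose prototype is nearest in embedding space. A minimal numpy sketch of just that step (hypothetical names; PiCO additionally maintains moving-average prototypes and a contrastive module, omitted here):

```python
import numpy as np

def disambiguate(features, prototypes, candidate_masks):
    """Hypothetical sketch of prototype-based label disambiguation:
    pick, for each sample, the candidate label whose class prototype
    is most similar (by cosine) to the sample's embedding."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ p.T
    sim = np.where(candidate_masks > 0, sim, -np.inf)  # only candidates may win
    return sim.argmax(axis=1)
```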
arXiv Detail & Related papers (2022-01-22T07:48:41Z)
- Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z)
- Few-Shot Partial-Label Learning [25.609766770479265]
Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples.
Existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner.
In this paper, we introduce an approach called FsPLL (Few-shot Partial-Label Learning).
arXiv Detail & Related papers (2021-06-02T07:03:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.