Decoupling Pseudo Label Disambiguation and Representation Learning for Generalized Intent Discovery
- URL: http://arxiv.org/abs/2305.17699v1
- Date: Sun, 28 May 2023 12:01:34 GMT
- Title: Decoupling Pseudo Label Disambiguation and Representation Learning for Generalized Intent Discovery
- Authors: Yutao Mou, Xiaoshuai Song, Keqing He, Chen Zeng, Pei Wang, Jingang Wang, Yunsen Xian and Weiran Xu
- Abstract summary: Key challenges lie in pseudo label disambiguation and representation learning.
We propose a decoupled prototype learning framework (DPL) to decouple pseudo label disambiguation and representation learning.
Experiments and analysis on three benchmark datasets show the effectiveness of our method.
- Score: 24.45800271294178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized intent discovery aims to extend a closed-set in-domain intent classifier to an open-world intent set including in-domain and out-of-domain intents. The key challenges lie in pseudo label disambiguation and representation learning. Previous methods suffer from a coupling of the two: the reliability of pseudo labels depends on representation learning, and representation learning is in turn restricted by the pseudo labels. In this paper, we propose a decoupled prototype learning framework (DPL) to separate pseudo label disambiguation from representation learning. Specifically, we first introduce prototypical contrastive representation learning (PCL) to obtain discriminative representations, and then adopt a prototype-based label disambiguation method (PLD) to obtain pseudo labels. We theoretically prove that PCL and PLD work in a collaborative fashion and facilitate pseudo label disambiguation. Experiments and analysis on three benchmark datasets show the effectiveness of our method.
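To make the decoupling concrete, the following is a minimal PyTorch-style sketch of the two components named in the abstract: a prototypical contrastive loss (PCL) for representation learning and a nearest-prototype rule (PLD) for pseudo label disambiguation. The function names, the EMA prototype update, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def pcl_loss(features, prototypes, pseudo_labels, temperature=0.07):
    """Prototypical contrastive loss (PCL sketch): pull each embedding
    toward the prototype of its current pseudo label, push it away
    from the other prototypes."""
    features = F.normalize(features, dim=-1)        # (B, D)
    prototypes = F.normalize(prototypes, dim=-1)    # (K, D)
    logits = features @ prototypes.T / temperature  # (B, K)
    return F.cross_entropy(logits, pseudo_labels)

@torch.no_grad()
def pld_update(features, prototypes):
    """Prototype-based label disambiguation (PLD sketch): re-assign each
    sample to its nearest prototype, independent of the classifier head."""
    features = F.normalize(features, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)
    return (features @ prototypes.T).argmax(dim=-1)  # new pseudo labels

@torch.no_grad()
def ema_prototypes(prototypes, features, pseudo_labels, momentum=0.99):
    """Assumed maintenance step: slowly move each prototype toward the
    mean embedding of the samples currently assigned to it."""
    for k in pseudo_labels.unique():
        mask = pseudo_labels == k
        prototypes[k] = momentum * prototypes[k] + (1 - momentum) * features[mask].mean(0)
    return prototypes
```

The decoupling is visible in the sketch: pseudo labels come from prototype proximity (pld_update) rather than from the classifier being trained, so label noise no longer feeds directly back into representation learning.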
Related papers
- Superpixelwise Low-rank Approximation based Partial Label Learning for Hyperspectral Image Classification [19.535446654147126]
Insufficient prior knowledge of a captured hyperspectral image (HSI) scene may lead experts or automatic labeling systems to assign incorrect or ambiguous labels.
We propose a novel superpixelwise low-rank approximation (LRA)-based partial label learning method, namely SLAP.
arXiv Detail & Related papers (2024-05-27T12:26:49Z) - Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground-truth.
When the candidate set may not even contain the ground-truth label, the problem becomes Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to make the model robust to unreliable partial labels.
arXiv Detail & Related papers (2023-08-31T13:37:28Z) - Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, existing works usually do not fully exploit the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
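As a rough illustration of the idea above, one natural way to turn non-candidate labels into a training signal is to exploit the fact that every non-candidate label is certainly wrong, pushing its predicted probability toward zero. This is a hedged sketch of the general complementary-label principle, not the paper's exact adversarial formulation.

```python
import torch
import torch.nn.functional as F

def complementary_loss(logits, candidate_mask):
    """Illustrative complementary-classifier loss. candidate_mask is a
    (B, K) bool tensor, True where a label is in the candidate set; all
    other labels are known to be wrong for that sample."""
    probs = F.softmax(logits, dim=-1)
    non_candidate = (~candidate_mask).float()
    eps = 1e-8
    # -log(1 - p) for each known-wrong label, averaged over those entries
    return -(torch.log(1.0 - probs + eps) * non_candidate).sum() / non_candidate.sum()
```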
arXiv Detail & Related papers (2023-05-17T02:13:23Z) - Towards Effective Visual Representations for Partial-Label Learning [49.91355691337053]
Under partial-label learning (PLL), for each training instance, only a set of ambiguous labels containing the unknown true label is accessible.
Without access to true labels, positive points are predicted using pseudo-labels that are inherently noisy, and negative points often require large batches or momentum encoders.
In this paper, we rethink a state-of-the-art contrastive method, PiCO [PiPi24], which still leaves significant scope for improvement in representation learning.
arXiv Detail & Related papers (2023-05-10T12:01:11Z) - Exploring Structured Semantic Prior for Multi Label Recognition with Incomplete Labels [60.675714333081466]
Multi-label recognition (MLR) with incomplete labels is very challenging.
Recent works strive to explore the image-to-label correspondence in the vision-language model, i.e., CLIP, to compensate for insufficient annotations.
We advocate remedying the deficiency of label supervision for MLR with incomplete labels by deriving a structured semantic prior.
arXiv Detail & Related papers (2023-03-23T12:39:20Z) - Disambiguation of Company names via Deep Recurrent Networks [101.90357454833845]
We propose a Siamese LSTM network approach to extract, via supervised learning, an embedding of company name strings.
We analyse how an Active Learning approach to prioritise the samples to be labelled leads to a more efficient overall learning pipeline.
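A minimal sketch of the Siamese setup described above, assuming a character-level tokenization: one shared LSTM encodes both name strings, and pair similarity is read off the two normalized embeddings. All sizes and the cosine-similarity head are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNameEncoder(nn.Module):
    """Sketch of a Siamese LSTM for company-name matching: the same
    encoder embeds both strings; training would supervise the pair
    similarity with matched / non-matched labels."""
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, char_ids):                 # (B, L) integer char ids
        _, (h, _) = self.lstm(self.embed(char_ids))
        return F.normalize(h[-1], dim=-1)       # (B, hidden_dim)

    def forward(self, name_a, name_b):
        za, zb = self.encode(name_a), self.encode(name_b)
        return (za * zb).sum(dim=-1)            # cosine similarity per pair
```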
arXiv Detail & Related papers (2023-03-07T15:07:57Z) - Author Name Disambiguation via Heterogeneous Network Embedding from Structural and Semantic Perspectives [13.266320447769564]
Name ambiguity is common in academic digital libraries, such as multiple authors having the same name.
The proposed method is mainly based on representation learning for heterogeneous networks and clustering.
The semantic representation is generated using NLP tools.
arXiv Detail & Related papers (2022-12-24T11:22:34Z) - PiCO: Contrastive Label Disambiguation for Partial Label Learning [37.91710419258801]
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set.
In this work, we bridge the gap by addressing two key research challenges in representation learning and label disambiguation.
Our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation.
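The class prototype-based disambiguation can be sketched roughly as follows: each sample's soft pseudo target is nudged toward the one-hot label of its nearest prototype, restricted to its candidate set. The momentum value and the masking detail are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def prototype_disambiguation(pseudo_targets, features, prototypes,
                             candidate_mask, phi=0.9):
    """Move each (B, K) soft pseudo target toward the one-hot label of the
    nearest class prototype, considering only labels in the candidate set."""
    sims = F.normalize(features, dim=-1) @ F.normalize(prototypes, dim=-1).T
    sims = sims.masked_fill(~candidate_mask, float('-inf'))  # candidates only
    nearest = F.one_hot(sims.argmax(dim=-1), prototypes.size(0)).float()
    return phi * pseudo_targets + (1 - phi) * nearest
```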
arXiv Detail & Related papers (2022-01-22T07:48:41Z) - Debiased Contrastive Learning [64.98602526764599]
We develop a debiased contrastive objective that corrects for the sampling of same-label datapoints.
Empirically, the proposed objective consistently outperforms the state-of-the-art for representation learning in vision, language, and reinforcement learning benchmarks.
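In sketch form, the debiased objective replaces the plain sum over sampled negatives with an estimate that subtracts the expected contribution of false negatives, i.e., "negatives" that secretly share the anchor's label. The shapes and default values below are assumptions; the estimator follows the clipped correction described in the paper.

```python
import math
import torch

def debiased_nce_loss(pos_sim, neg_sim, tau_plus=0.1, t=0.5):
    """Debiased InfoNCE sketch. pos_sim: (B,) similarity to the positive;
    neg_sim: (B, N) similarities to sampled negatives; tau_plus is the
    assumed prior probability that a sampled negative shares the label."""
    pos = torch.exp(pos_sim / t)  # (B,)
    neg = torch.exp(neg_sim / t)  # (B, N)
    N = neg.size(1)
    # Corrected negative term, clipped at its theoretical minimum N*e^{-1/t}.
    Ng = (neg.sum(dim=1) - tau_plus * N * pos) / (1 - tau_plus)
    Ng = Ng.clamp(min=N * math.exp(-1.0 / t))
    return -torch.log(pos / (pos + Ng)).mean()
```

Setting tau_plus to 0 recovers the standard (biased) InfoNCE objective, which makes the correction term easy to ablate.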
arXiv Detail & Related papers (2020-07-01T04:25:24Z)