Integration of Old and New Knowledge for Generalized Intent Discovery: A Consistency-driven Prototype-Prompting Framework
- URL: http://arxiv.org/abs/2506.08490v1
- Date: Tue, 10 Jun 2025 06:30:17 GMT
- Title: Integration of Old and New Knowledge for Generalized Intent Discovery: A Consistency-driven Prototype-Prompting Framework
- Authors: Xiao Wei, Xiaobao Wang, Ning Zhuang, Chenyang Wang, Longbiao Wang, Jianwu Dang
- Abstract summary: Generalized Intent Discovery (GID) addresses this by leveraging unlabeled OOD data to discover new intents without additional annotation. We propose a consistency-driven prototype-prompting framework for GID from the perspective of integrating old and new knowledge. Our method significantly outperforms all baseline methods, achieving state-of-the-art results.
- Score: 49.60947755616314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intent detection aims to identify user intents from natural language inputs, where supervised methods rely heavily on labeled in-domain (IND) data and struggle with out-of-domain (OOD) intents, limiting their practical applicability. Generalized Intent Discovery (GID) addresses this by leveraging unlabeled OOD data to discover new intents without additional annotation. However, existing methods focus solely on clustering unsupervised data while neglecting domain adaptation. Therefore, we propose a consistency-driven prototype-prompting framework for GID from the perspective of integrating old and new knowledge, which includes a prototype-prompting framework for transferring old knowledge from external sources, and a hierarchical consistency constraint for learning new knowledge from target domains. We conducted extensive experiments and the results show that our method significantly outperforms all baseline methods, achieving state-of-the-art results, which strongly demonstrates the effectiveness and generalizability of our method. Our source code is publicly available at https://github.com/smileix/cpp.
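The abstract names a hierarchical consistency constraint over prototype-based predictions but gives no formula, so the following is only a minimal sketch of what such a constraint could look like: class prototypes act as a classifier head, and the predictions for two augmented views of the same batch are pulled together via a symmetric KL divergence. All function names, the temperature, and the exact loss form are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_logits(features, prototypes, tau=0.1):
    # Cosine similarity between L2-normalized features and class
    # prototypes, scaled by a temperature tau.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return f @ p.T / tau

def consistency_loss(view_a, view_b, prototypes):
    # Symmetric KL divergence between the prototype-based predictions of
    # two augmented views of the same batch of utterances.
    pa = softmax(prototype_logits(view_a, prototypes))
    pb = softmax(prototype_logits(view_b, prototypes))
    kl = lambda p, q: np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=1)
    return float(np.mean(0.5 * (kl(pa, pb) + kl(pb, pa))))
```

Identical views yield a near-zero loss; the more the two views' predicted intent distributions disagree, the larger the penalty.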
Related papers
- Towards Open-world Generalized Deepfake Detection: General Feature Extraction via Unsupervised Domain Adaptation [15.737902253508235]
Social platforms are flooded with vast amounts of unlabeled synthetic data and authentic data. In open-world scenarios, the amount of unlabeled data greatly exceeds that of labeled data. We propose a novel Open-World Deepfake Detection Generalization Enhancement Training Strategy (OWG-DS) to improve the generalization ability of existing methods.
arXiv Detail & Related papers (2025-05-18T10:12:12Z) - Pseudo-Label Enhanced Prototypical Contrastive Learning for Uniformed Intent Discovery [27.18799732585361]
We propose a Pseudo-Label enhanced Prototypical Contrastive Learning (PLPCL) model for uniformed intent discovery.
We iteratively utilize pseudo-labels to explore potential positive/negative samples for contrastive learning and bridge the gap between representation and clustering.
Our method has been proven effective in two different settings of discovering new intents.
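The PLPCL summary describes assigning pseudo-labels and using them to choose positive/negative pairs for contrastive learning, without implementation details. The sketch below is one plausible reading under stated assumptions: nearest-prototype pseudo-labeling followed by a SupCon-style batch loss; all names and the loss form are illustrative, not the paper's code.

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def pseudo_labels(features, prototypes):
    # Assign each sample to its nearest prototype by cosine similarity.
    return (normalize(features) @ normalize(prototypes).T).argmax(axis=1)

def sup_contrastive_loss(features, labels, tau=0.5):
    # SupCon-style loss: samples sharing a pseudo-label are positives,
    # everything else in the batch is a negative.
    z = normalize(features)
    sim = z @ z.T / tau
    n, total, count = len(labels), 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # no positive partner in this batch
        denom = sum(np.exp(sim[i, k]) for k in range(n) if k != i)
        total += -np.mean([sim[i, j] - np.log(denom) for j in pos])
        count += 1
    return total / max(count, 1)
```

Pseudo-labels that group genuinely similar samples produce a lower loss than mismatched ones, which is the signal that lets the pseudo-labeling and representation learning reinforce each other iteratively.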
arXiv Detail & Related papers (2024-10-26T16:22:45Z) - Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition [25.811639218862958]
Generalized Intent Discovery (GID) only considers one stage of OOD learning, and needs to utilize the data in all previous stages for joint training.
Continual Generalized Intent Discovery (CGID) aims to continuously and automatically discover OOD intents from dynamic OOD data streams.
PLRD bootstraps new intent discovery through class prototypes and balances new and old intents through data replay and feature distillation.
arXiv Detail & Related papers (2023-10-16T08:48:07Z) - Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS)
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z) - Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
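The summary only states that PCA is used to localize object regions. A common way to do this, sketched below under that assumption, is to project each spatial feature of a backbone feature map onto the first principal component and threshold the response; the function name and the thresholding rule are illustrative, not taken from the paper.

```python
import numpy as np

def pca_localize(feature_map):
    # feature_map: (H, W, C) array of backbone features for one image.
    # Project each spatial feature onto the first principal component;
    # large-magnitude responses mark candidate object regions.
    h, w, c = feature_map.shape
    x = feature_map.reshape(-1, c)
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # rows of vt = PCs
    proj = np.abs(x @ vt[0])
    return (proj > proj.mean()).reshape(h, w)  # boolean foreground mask
```

Regions whose features deviate most from the dominant (background) statistics light up in the mask, which is why this works without any object-level supervision.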
arXiv Detail & Related papers (2023-07-07T04:03:48Z) - CLIP the Gap: A Single Domain Generalization Approach for Object Detection [60.20931827772482]
Single Domain Generalization tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain.
We propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts.
We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss.
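The semantic augmentation described above can be sketched, in spirit, as shifting backbone features along a direction defined by text embeddings of a source and an imagined target domain. The snippet below uses stand-in embedding vectors rather than a real vision-language model, and the function name and mixing weight are assumptions for illustration only.

```python
import numpy as np

def semantic_augment(img_features, src_text_emb, tgt_text_emb, alpha=1.0):
    # Shift detector features along the direction from a source-domain text
    # embedding (e.g. "a photo of a car") to a target-domain one
    # (e.g. "a sketch of a car"), simulating an unseen domain in feature space.
    direction = tgt_text_emb - src_text_emb
    direction = direction / np.linalg.norm(direction)
    return img_features + alpha * direction
```

Training the detector (plus a text-based classification loss) on such shifted features exposes it to domain variation that the single source domain never shows.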
arXiv Detail & Related papers (2023-01-13T12:01:18Z) - Generalized Intent Discovery: Learning from Open World Dialogue System [34.39483579171543]
Generalized Intent Discovery (GID) aims to extend an IND intent classifier to an open-world intent set including IND and OOD intents.
We construct three public datasets for different application scenarios and propose two kinds of frameworks.
arXiv Detail & Related papers (2022-09-13T14:31:53Z) - Towards Textual Out-of-Domain Detection without In-Domain Labels [41.23096594140221]
This work focuses on a challenging case of OOD detection, where no labels for in-domain data are accessible.
We first evaluate different language model based approaches that predict likelihood for a sequence of tokens.
We propose a novel representation learning based method by combining unsupervised clustering and contrastive learning.
arXiv Detail & Related papers (2022-03-22T00:11:46Z) - Enhancing the Generalization for Intent Classification and Out-of-Domain Detection in SLU [70.44344060176952]
Intent classification is a major task in spoken language understanding (SLU).
Recent works have shown that using extra data and labels can improve the OOD detection performance.
This paper proposes to train a model with only IND data while supporting both IND intent classification and OOD detection.
arXiv Detail & Related papers (2021-06-28T08:27:38Z) - Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID [55.21702895051287]
Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain.
We propose a novel self-paced contrastive learning framework with hybrid memory.
Our method outperforms state-of-the-art methods on multiple domain adaptation tasks of object re-ID.
arXiv Detail & Related papers (2020-06-04T09:12:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.