A Hybrid Architecture for Out of Domain Intent Detection and Intent
Discovery
- URL: http://arxiv.org/abs/2303.04134v2
- Date: Sun, 30 Jul 2023 16:38:23 GMT
- Title: A Hybrid Architecture for Out of Domain Intent Detection and Intent
Discovery
- Authors: Masoud Akbari, Ali Mohades, M. Hassan Shirali-Shahreza
- Abstract summary: Out of Scope (OOS) and Out of Domain (OOD) inputs can cause problems for task-oriented systems.
A labeled dataset is needed to train a model for Intent Detection in task-oriented dialogue systems.
Creating a labeled dataset is time-consuming and requires human effort.
Our results show that the proposed model achieves strong results on both OOD/OOS Intent Detection and Intent Discovery.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Intent Detection is one of the tasks of the Natural Language Understanding
(NLU) unit in task-oriented dialogue systems. Out of Scope (OOS) and Out of
Domain (OOD) inputs can cause problems for these systems. On the other hand, a
labeled dataset is needed to train a model for Intent Detection in
task-oriented dialogue systems, and creating such a dataset is time-consuming
and requires human effort. The purpose of this article is to address both of
these problems. The task of identifying OOD/OOS inputs is called OOD/OOS
Intent Detection, while discovering new intents and pseudo-labeling OOD inputs
is known as Intent Discovery. In the OOD Intent Detection part, we use a
Variational Autoencoder to distinguish between known and unknown intents,
independent of the input data distribution. An unsupervised clustering method
is then used to discover the different unknown intents underlying OOD/OOS
inputs. We also apply non-linear dimensionality reduction to the OOD/OOS
representations to make the distances between representations more meaningful
for clustering. Our results show that the proposed model achieves strong
results on both OOD/OOS Intent Detection and Intent Discovery and surpasses
the baselines in English and Persian.
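As a rough illustration of the pipeline the abstract describes, the sketch below trains a Variational Autoencoder on known-intent sentence embeddings, flags inputs with high reconstruction error as OOD/OOS, and then clusters the flagged inputs after a non-linear projection. The embedding inputs, the reconstruction-error threshold, the t-SNE projection, and the K-means clusterer are illustrative assumptions; the paper's actual encoder, detection rule, and clustering algorithm may differ.

```python
# A minimal, illustrative sketch of the two-stage pipeline described above.
# Assumptions (not taken from the paper): utterances arrive as fixed-size
# sentence embeddings, the VAE flags OOD/OOS inputs via a reconstruction-error
# threshold, t-SNE provides the non-linear dimensionality reduction, and
# K-means performs the final clustering.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE


class VAE(nn.Module):
    """Small Gaussian VAE over sentence embeddings."""

    def __init__(self, dim: int, latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar


def train_vae(vae, in_domain, epochs=50, lr=1e-3):
    """Fit the VAE on known (in-domain) intent embeddings only."""
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    x = torch.as_tensor(in_domain, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        recon, mu, logvar = vae(x)
        recon_loss = ((recon - x) ** 2).sum(dim=1).mean()
        kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        (recon_loss + kld).backward()
        opt.step()


def ood_scores(vae, embeddings):
    """Reconstruction error as OOD score: unseen intents reconstruct poorly."""
    with torch.no_grad():
        x = torch.as_tensor(embeddings, dtype=torch.float32)
        recon, _, _ = vae(x)
        return ((recon - x) ** 2).sum(dim=1).numpy()


def discover_intents(ood_embeddings, n_clusters=5):
    """Intent Discovery: non-linear projection, then unsupervised clustering."""
    perplexity = min(30.0, max(2.0, len(ood_embeddings) / 4))
    projected = TSNE(n_components=2, init="pca", perplexity=perplexity).fit_transform(
        ood_embeddings
    )
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(projected)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    in_domain = rng.normal(0.0, 1.0, size=(200, 64)).astype(np.float32)  # toy data
    incoming = rng.normal(0.5, 1.5, size=(100, 64)).astype(np.float32)

    vae = VAE(dim=64)
    train_vae(vae, in_domain)

    # Threshold from in-domain scores; the paper's decision rule may differ.
    threshold = np.percentile(ood_scores(vae, in_domain), 95)
    ood_mask = ood_scores(vae, incoming) > threshold

    if ood_mask.sum() >= 10:
        pseudo_labels = discover_intents(incoming[ood_mask])
        print("discovered intent clusters:", np.bincount(pseudo_labels))
```

Projecting the OOD/OOS representations to a lower-dimensional space before clustering is what the abstract refers to as making the distances between representations more meaningful.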
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish them from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- Diversity-grounded Channel Prototypical Learning for Out-of-Distribution Intent Detection [18.275098909064127]
This study presents a novel fine-tuning framework for large language models (LLMs).
We construct semantic prototypes for each ID class using a diversity-grounded prompt tuning approach.
For a thorough assessment, we benchmark our method against the prevalent fine-tuning approaches.
arXiv Detail & Related papers (2024-09-17T12:07:17Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- Negative Label Guided OOD Detection with Pretrained Vision-Language Models [96.67087734472912]
Out-of-distribution (OOD) detection aims at identifying samples from unknown classes.
We propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.
arXiv Detail & Related papers (2024-03-29T09:19:52Z)
- Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition [25.811639218862958]
Generalized Intent Discovery (GID) only considers one stage of OOD learning, and needs to utilize the data in all previous stages for joint training.
Continual Generalized Intent Discovery (CGID) aims to continuously and automatically discover OOD intents from dynamic OOD data streams.
PLRD bootstraps new intent discovery through class prototypes and balances new and old intents through data replay and feature distillation.
arXiv Detail & Related papers (2023-10-16T08:48:07Z)
- Out-of-Domain Intent Detection Considering Multi-Turn Dialogue Contexts [91.43701971416213]
We introduce a context-aware OOD intent detection (Caro) framework to model multi-turn contexts in OOD intent detection tasks.
Caro establishes state-of-the-art performance on multi-turn OOD detection tasks, improving the F1-OOD score by more than 29% over the previous best method.
arXiv Detail & Related papers (2023-05-05T01:39:21Z)
- UniNL: Aligning Representation Learning with Scoring Function for OOD Detection via Unified Neighborhood Learning [32.69035328161356]
We propose a unified neighborhood learning framework (UniNL) to detect OOD intents.
Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection (see the generic KNN-scoring sketch after this list).
arXiv Detail & Related papers (2022-10-19T17:06:34Z)
- Generalized Intent Discovery: Learning from Open World Dialogue System [34.39483579171543]
Generalized Intent Discovery (GID) aims to extend an IND intent classifier to an open-world intent set including IND and OOD intents.
We construct three public datasets for different application scenarios and propose two kinds of frameworks.
arXiv Detail & Related papers (2022-09-13T14:31:53Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference [150.07326223077405]
Few-shot learning is attracting much attention as a way to mitigate data scarcity.
We present a discriminative nearest neighbor classification with deep self-attention.
We propose to boost the discriminative ability by transferring a natural language inference (NLI) model.
arXiv Detail & Related papers (2020-10-25T00:39:32Z)
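Several of the related papers above (e.g., UniNL and the discriminative nearest neighbor approach) rely on nearest-neighbor scoring over learned representations. The sketch below is a generic KNN-based OOD scorer, not the specific method of any listed paper: the distance from a query embedding to its k-th nearest in-domain training embedding serves as the OOD score, with a threshold taken from in-domain data.

```python
# Generic KNN-based OOD scoring sketch (not the exact method of any paper above):
# the distance to the k-th nearest in-domain embedding is used as the OOD score;
# larger distances suggest an unknown intent.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_ood_scores(train_emb: np.ndarray, query_emb: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the distance to the k-th nearest in-domain embedding for each query."""
    index = NearestNeighbors(n_neighbors=k).fit(train_emb)
    distances, _ = index.kneighbors(query_emb)
    return distances[:, -1]  # k-th neighbor distance as the OOD score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    in_domain = rng.normal(0.0, 1.0, size=(500, 64))  # toy in-domain embeddings
    ood_like = rng.normal(3.0, 1.0, size=(50, 64))    # toy shifted embeddings

    # Threshold chosen from in-domain scores; real systems tune this differently.
    threshold = np.percentile(knn_ood_scores(in_domain, in_domain, k=10), 95)
    flagged = knn_ood_scores(in_domain, ood_like, k=10) > threshold
    print("flagged as OOD:", flagged.sum(), "of", len(ood_like))
```

In practice such scorers are usually applied to L2-normalized, contrastively trained embeddings; the sketch omits that step for brevity.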