UniNL: Aligning Representation Learning with Scoring Function for OOD
Detection via Unified Neighborhood Learning
- URL: http://arxiv.org/abs/2210.10722v1
- Date: Wed, 19 Oct 2022 17:06:34 GMT
- Authors: Yutao Mou, Pei Wang, Keqing He, Yanan Wu, Jingang Wang, Wei Wu, Weiran
Xu
- Abstract summary: We propose a unified neighborhood learning framework (UniNL) to detect OOD intents.
Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection.
- Score: 32.69035328161356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting out-of-domain (OOD) intents from user queries is essential for
avoiding wrong operations in task-oriented dialogue systems. The key challenge
is how to distinguish in-domain (IND) and OOD intents. Previous methods ignore
the alignment between representation learning and the scoring function, limiting
OOD detection performance. In this paper, we propose a unified neighborhood
learning framework (UniNL) to detect OOD intents. Specifically, we design a
K-nearest neighbor contrastive learning (KNCL) objective for representation
learning and introduce a KNN-based scoring function for OOD detection. We aim
to align representation learning with the scoring function. Experiments and
analysis on two benchmark datasets show the effectiveness of our method.
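The KNN-based scoring function described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact recipe; the function name `knn_ood_score`, the use of cosine-normalized features, and the choice of Euclidean distance on the unit sphere are illustrative assumptions.

```python
import numpy as np

def knn_ood_score(query, ind_feats, k=5):
    """Distance to the k-th nearest in-domain (IND) neighbor.

    A larger score means the query is farther from all IND training
    features and therefore more likely out-of-domain (OOD).
    Hypothetical sketch, not the UniNL implementation.
    """
    # L2-normalize so distances are comparable across feature magnitudes.
    q = query / np.linalg.norm(query)
    f = ind_feats / np.linalg.norm(ind_feats, axis=1, keepdims=True)
    dists = np.linalg.norm(f - q, axis=1)  # Euclidean distance on the unit sphere
    return float(np.sort(dists)[k - 1])    # k-th smallest distance
```

In use, one would threshold this score on a validation set: queries whose k-th-neighbor distance exceeds the threshold are rejected as OOD.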
Related papers
- Diversity-grounded Channel Prototypical Learning for Out-of-Distribution Intent Detection [18.275098909064127]
This study presents a novel fine-tuning framework for large language models (LLMs)
We construct semantic prototypes for each ID class using a diversity-grounded prompt tuning approach.
For a thorough assessment, we benchmark our method against the prevalent fine-tuning approaches.
arXiv Detail & Related papers (2024-09-17T12:07:17Z)
- TagOOD: A Novel Approach to Out-of-Distribution Detection via Vision-Language Representations and Class Center Learning [26.446233594630087]
We propose TagOOD, a novel approach for OOD detection using vision-language representations.
TagOOD trains a lightweight network on the extracted object features to learn representative class centers.
These centers capture the central tendencies of IND object classes, minimizing the influence of irrelevant image features during OOD detection.
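The class-center idea above can be sketched in a few lines. This is a generic illustration of center-based distance scoring, not TagOOD's actual training procedure; the function names and the L2-normalized mean centers are assumptions.

```python
import numpy as np

def class_centers(feats, labels):
    # One center per IND class: the mean of that class's L2-normalized features.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return np.stack([f[labels == c].mean(axis=0) for c in np.unique(labels)])

def center_ood_score(query, centers):
    # Distance to the nearest class center; larger => more likely OOD.
    q = query / np.linalg.norm(query)
    return float(np.min(np.linalg.norm(centers - q, axis=1)))
```

Scoring against class centers instead of raw features ignores within-class scatter, which is one way to reduce the influence of irrelevant image features at detection time.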
arXiv Detail & Related papers (2024-08-28T06:37:59Z)
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Detecting out-of-distribution (OOD) samples is crucial when deploying machine learning models in open-world scenarios.
We propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to envision potential Outlier Exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z)
- OOD Aware Supervised Contrastive Learning [13.329080722482187]
Out-of-Distribution (OOD) detection is a crucial problem for the safe deployment of machine learning models.
We leverage the powerful representations learned with Supervised Contrastive (SupCon) training and propose a holistic approach to learn a representation that is robust to OOD data.
Our solution is simple and efficient and acts as a natural extension of the closed-set supervised contrastive representation learning.
arXiv Detail & Related papers (2023-10-03T10:38:39Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric - Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
arXiv Detail & Related papers (2023-06-26T12:51:32Z)
- LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning [37.36999826208225]
We present a novel vision-language prompt learning approach for few-shot out-of-distribution (OOD) detection.
LoCoOp performs OOD regularization that utilizes portions of CLIP's local features as OOD features during training.
LoCoOp outperforms existing zero-shot and fully supervised detection methods.
arXiv Detail & Related papers (2023-06-02T06:33:08Z) - Out-of-Domain Intent Detection Considering Multi-Turn Dialogue Contexts [91.43701971416213]
We introduce a context-aware OOD intent detection (Caro) framework to model multi-turn contexts in OOD intent detection tasks.
Caro establishes state-of-the-art performance on multi-turn OOD detection tasks, improving the F1-OOD score by over 29% compared to the previous best method.
arXiv Detail & Related papers (2023-05-05T01:39:21Z) - A Hybrid Architecture for Out of Domain Intent Detection and Intent
Discovery [0.0]
Out-of-Scope (OOS) and Out-of-Domain (OOD) inputs can cause problems for task-oriented systems.
Training a model for intent detection in task-oriented dialogue systems requires a labeled dataset.
Creating such a dataset is time-consuming and labor-intensive.
Our results show that the proposed model achieves strong performance on both OOD/OOS intent detection and intent discovery.
arXiv Detail & Related papers (2023-03-07T18:49:13Z) - Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD
Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z) - Triggering Failures: Out-Of-Distribution detection by learning from
local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet associated with a dedicated training scheme based on Local Adversarial Attacks (LAA)
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.