UniNL: Aligning Representation Learning with Scoring Function for OOD
Detection via Unified Neighborhood Learning
- URL: http://arxiv.org/abs/2210.10722v1
- Date: Wed, 19 Oct 2022 17:06:34 GMT
- Title: UniNL: Aligning Representation Learning with Scoring Function for OOD
Detection via Unified Neighborhood Learning
- Authors: Yutao Mou, Pei Wang, Keqing He, Yanan Wu, Jingang Wang, Wei Wu, Weiran
Xu
- Abstract summary: We propose a unified neighborhood learning framework (UniNL) to detect OOD intents.
Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection.
- Score: 32.69035328161356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting out-of-domain (OOD) intents from user queries is essential for
avoiding wrong operations in task-oriented dialogue systems. The key challenge
is how to distinguish in-domain (IND) and OOD intents. Previous methods ignore
the alignment between representation learning and scoring function, limiting
the OOD detection performance. In this paper, we propose a unified neighborhood
learning framework (UniNL) to detect OOD intents. Specifically, we design a
K-nearest neighbor contrastive learning (KNCL) objective for representation
learning and introduce a KNN-based scoring function for OOD detection. We aim
to align representation learning with scoring function. Experiments and
analysis on two benchmark datasets show the effectiveness of our method.
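The KNN-based scoring idea from the abstract can be illustrated with a minimal sketch. This is a generic k-nearest-neighbor OOD score, not the authors' implementation: the embeddings, the Euclidean distance metric, and the detection threshold below are all illustrative placeholders (UniNL operates on learned intent representations and tunes its threshold on validation data).

```python
import math

def knn_ood_score(query, train_embeddings, k=3):
    """Score a query by its distance to the k-th nearest in-domain (IND)
    training embedding; a larger distance suggests the query is OOD."""
    dists = sorted(math.dist(query, emb) for emb in train_embeddings)
    return dists[k - 1]  # distance to the k-th nearest neighbor

def is_ood(query, train_embeddings, k=3, threshold=1.0):
    # Flag the query as OOD when its k-NN distance exceeds a threshold;
    # the threshold value here is purely illustrative.
    return knn_ood_score(query, train_embeddings, k) > threshold

# Toy usage: IND points cluster near the origin.
ind = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
print(is_ood((0.05, 0.05), ind))  # near the IND cluster -> False
print(is_ood((5.0, 5.0), ind))    # far from every IND point -> True
```

The intuition behind aligning representation learning with this score is that a contrastive objective defined over the same k-nearest neighbors pulls IND neighbors together, so the distance used at test time matches the geometry optimized at training time.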
Related papers
- Open-World Lifelong Graph Learning [7.535219325248997]
We study the problem of lifelong graph learning in an open-world scenario.
We utilize Out-of-Distribution (OOD) detection methods to recognize new classes.
We suggest performing new class detection by combining OOD detection methods with information aggregated from the graph neighborhood.
arXiv Detail & Related papers (2023-10-19T08:18:10Z)
- OOD Aware Supervised Contrastive Learning [13.329080722482187]
Out-of-Distribution (OOD) detection is a crucial problem for the safe deployment of machine learning models.
We leverage the powerful representations learned with Supervised Contrastive (SupCon) training and propose a holistic approach to learning a representation robust to OOD data.
Our solution is simple and efficient and acts as a natural extension of the closed-set supervised contrastive representation learning.
arXiv Detail & Related papers (2023-10-03T10:38:39Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric - Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
arXiv Detail & Related papers (2023-06-26T12:51:32Z)
- How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models? [35.15232426182503]
We study how fine-tuning impacts OOD detection for few-shot downstream tasks.
Our results suggest that a proper choice of OOD scores is essential for CLIP-based fine-tuning.
We also show that prompt learning achieves state-of-the-art OOD detection performance over its zero-shot counterpart.
arXiv Detail & Related papers (2023-06-09T17:16:50Z)
- LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning [37.36999826208225]
We present a novel vision-language prompt learning approach for few-shot out-of-distribution (OOD) detection.
LoCoOp performs OOD regularization that utilizes the portions of CLIP local features as OOD features during training.
LoCoOp outperforms existing zero-shot and fully supervised detection methods.
arXiv Detail & Related papers (2023-06-02T06:33:08Z)
- Out-of-Domain Intent Detection Considering Multi-Turn Dialogue Contexts [91.43701971416213]
We introduce a context-aware OOD intent detection (Caro) framework to model multi-turn contexts in OOD intent detection tasks.
Caro establishes state-of-the-art performance on multi-turn OOD detection tasks, improving the F1-OOD score by over 29% compared to the previous best method.
arXiv Detail & Related papers (2023-05-05T01:39:21Z)
- Background Matters: Enhancing Out-of-distribution Detection with Domain Features [90.32910087103744]
OOD samples can be drawn from arbitrary distributions and exhibit deviations from in-distribution (ID) data in various dimensions.
Existing methods focus on detecting OOD samples based on semantic features, while neglecting other dimensions such as domain features.
This paper proposes a novel generic framework that can learn the domain features from the ID training samples by a dense prediction approach.
arXiv Detail & Related papers (2023-03-15T16:12:14Z)
- A Hybrid Architecture for Out of Domain Intent Detection and Intent Discovery [0.0]
Out of Scope (OOS) and Out of Domain (OOD) inputs can cause problems for task-oriented systems.
A labeled dataset is needed to train a model for intent detection in task-oriented dialogue systems.
Creating a labeled dataset is time-consuming and requires human resources.
Our results show that the proposed model achieves strong results on both OOD/OOS intent detection and intent discovery.
arXiv Detail & Related papers (2023-03-07T18:49:13Z)
- Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities [104.02531442035483]
The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods.
We show that binary discrimination between in- and (different) out-distributions is equivalent to several distinct formulations of the OOD detection problem.
We also show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function.
arXiv Detail & Related papers (2022-06-20T16:32:49Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.