APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection
- URL: http://arxiv.org/abs/2310.13380v1
- Date: Fri, 20 Oct 2023 09:48:52 GMT
- Title: APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection
- Authors: Pei Wang, Keqing He, Yutao Mou, Xiaoshuai Song, Yanan Wu, Jingang
Wang, Yunsen Xian, Xunliang Cai, Weiran Xu
- Abstract summary: This paper focuses on a few-shot OOD setting where there are only a few labeled IND data and massive unlabeled mixed data.
We propose an adaptive pseudo-labeling (APP) method for few-shot OOD detection.
- Score: 40.846633965439956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-domain (OOD) intents from user queries is essential for a
task-oriented dialogue system. Previous OOD detection studies generally work on
the assumption that plenty of labeled IND intents exist. In this paper, we
focus on a more practical few-shot OOD setting where there are only a few
labeled IND data and massive unlabeled mixed data that may belong to IND or
OOD. The new scenario carries two key challenges: learning discriminative
representations using limited IND data and leveraging unlabeled mixed data.
Therefore, we propose an adaptive prototypical pseudo-labeling (APP) method for
few-shot OOD detection, including a prototypical OOD detection framework
(ProtoOOD) to facilitate low-resource OOD detection using limited IND data, and
an adaptive pseudo-labeling method to produce high-quality pseudo OOD & IND
labels. Extensive experiments and analysis demonstrate the effectiveness of our
method for few-shot OOD detection.
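The abstract describes two components: a prototypical OOD detection framework built from limited IND data, and a pseudo-labeling step over the unlabeled mixed pool. Below is a minimal, hedged sketch of the general idea of prototype-distance scoring with threshold-based pseudo-labeling; the Euclidean distance, the mean-embedding prototypes, and the fixed `ind_threshold`/`ood_threshold` parameters are illustrative assumptions, not the paper's ProtoOOD framework or its adaptive thresholding.

```python
import numpy as np

def class_prototypes(ind_embeddings, ind_labels):
    """Mean embedding per IND intent class, computed from the few labeled examples."""
    return {
        label: ind_embeddings[ind_labels == label].mean(axis=0)
        for label in np.unique(ind_labels)
    }

def prototype_distance_score(embedding, prototypes):
    """Distance to the nearest class prototype; larger means more OOD-like."""
    return min(np.linalg.norm(embedding - proto) for proto in prototypes.values())

def pseudo_label(unlabeled_embeddings, prototypes, ind_threshold, ood_threshold):
    """Assign confident pseudo IND/OOD labels; leave ambiguous samples unlabeled (None)."""
    labels = []
    for emb in unlabeled_embeddings:
        score = prototype_distance_score(emb, prototypes)
        if score <= ind_threshold:
            labels.append("IND")   # close to some prototype -> pseudo-IND
        elif score >= ood_threshold:
            labels.append("OOD")   # far from all prototypes -> pseudo-OOD
        else:
            labels.append(None)    # uncertain -> skipped in this round
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ind_emb = rng.normal(size=(10, 8))     # 10 labeled IND utterances, 8-dim embeddings (toy data)
    ind_lab = np.array([0] * 5 + [1] * 5)  # two IND intent classes
    protos = class_prototypes(ind_emb, ind_lab)
    mixed = rng.normal(size=(4, 8))        # unlabeled mixed IND/OOD pool
    print(pseudo_label(mixed, protos, ind_threshold=2.0, ood_threshold=4.0))
```

In this toy setup the thresholds are hand-picked; the paper's contribution is precisely to adapt the labeling criterion rather than fix it, so treat the sketch as orientation only.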
Related papers
- Negative Label Guided OOD Detection with Pretrained Vision-Language Models [96.67087734472912]
Out-of-distribution (OOD) detection aims at identifying samples from unknown classes.
We propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.
arXiv Detail & Related papers (2024-03-29T09:19:52Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation [43.865923770543205]
Out-of-distribution (OOD) detection is the problem of identifying inputs unrelated to the in-distribution task.
Most of the currently used test OOD datasets, including datasets from the open set recognition (OSR) literature, have severe issues.
We introduce NINCO, a novel test OOD dataset in which each sample has been checked to be free of ID content, allowing for a detailed analysis of an OOD detector's strengths and failure modes.
arXiv Detail & Related papers (2023-06-01T15:48:10Z)
- Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective [55.45202687256175]
Evaluation of out-of-distribution (OOD) detection methods assumes access to test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD.
In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection.
We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance.
arXiv Detail & Related papers (2023-02-16T13:34:35Z)
- Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as the pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z)
- Estimating Soft Labels for Out-of-Domain Intent Detection [122.68266151023676]
Out-of-Domain (OOD) intent detection is important for practical dialog systems.
We propose an adaptive soft pseudo labeling (ASoul) method that can estimate soft labels for pseudo OOD samples.
arXiv Detail & Related papers (2022-11-10T13:31:13Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- Exploiting Mixed Unlabeled Data for Detecting Samples of Seen and Unseen Out-of-Distribution Classes [5.623232537411766]
Out-of-Distribution (OOD) detection is essential in real-world applications and has attracted increasing attention in recent years.
Most existing OOD detection methods require many labeled In-Distribution (ID) data, causing a heavy labeling cost.
In this paper, we focus on the more realistic scenario, where limited labeled data and abundant unlabeled data are available.
We propose the Adaptive In-Out-aware Learning (AIOL) method, in which we adaptively select potential ID and OOD samples from the mixed unlabeled data.
arXiv Detail & Related papers (2022-10-13T08:34:25Z)