Negative Label Guided OOD Detection with Pretrained Vision-Language Models
- URL: http://arxiv.org/abs/2403.20078v1
- Date: Fri, 29 Mar 2024 09:19:52 GMT
- Title: Negative Label Guided OOD Detection with Pretrained Vision-Language Models
- Authors: Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han
- Abstract summary: Out-of-distribution (OOD) detection aims at identifying samples from unknown classes.
We propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.
- Score: 96.67087734472912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection aims at identifying samples from unknown classes, playing a crucial role in trustworthy models against errors on unexpected inputs. Extensive research has been dedicated to exploring OOD detection in the vision modality. Vision-language models (VLMs) can leverage both textual and visual information for various multi-modal applications, whereas few OOD detection methods take into account information from the text modality. In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases. We design a novel scheme for the OOD score collaborated with negative labels. Theoretical analysis helps to understand the mechanism of negative labels. Extensive experiments demonstrate that our method NegLabel achieves state-of-the-art performance on various OOD detection benchmarks and generalizes well on multiple VLM architectures. Furthermore, our method NegLabel exhibits remarkable robustness against diverse domain shifts. The codes are available at https://github.com/tmlr-group/NegLabel.
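As a rough illustration of the negative-label idea described in the abstract, the sketch below scores an image by how much softmax mass its label similarities place on the ID labels versus the negative labels. The function name, the temperature value, and the toy embeddings are hypothetical choices for this sketch, not the paper's exact formulation.

```python
import numpy as np

def neg_label_score(image_emb, id_label_embs, neg_label_embs, tau=0.01):
    """NegLabel-style OOD score (sketch): the fraction of softmax mass that
    the image's label similarities assign to ID labels rather than negative
    labels. All embeddings are assumed L2-normalized; higher scores suggest
    the image is in-distribution."""
    sims_id = id_label_embs @ image_emb    # cosine similarity to each ID label
    sims_neg = neg_label_embs @ image_emb  # cosine similarity to each negative label
    logits = np.concatenate([sims_id, sims_neg]) / tau
    logits -= logits.max()                 # subtract max for numerical stability
    probs = np.exp(logits)
    return probs[: len(sims_id)].sum() / probs.sum()

# Toy example with hand-made unit vectors (hypothetical embeddings).
id_embs = np.eye(2, 4)       # two ID label embeddings
neg_embs = np.eye(4)[2:]     # two negative label embeddings
id_like = np.array([1.0, 0.0, 0.0, 0.0])   # image aligned with an ID label
ood_like = np.array([0.0, 0.0, 1.0, 0.0])  # image aligned with a negative label
```

An image close to some ID label concentrates the softmax mass on the ID block and scores near 1, while an image close to a negative label scores near 0, which is how the negative labels sharpen the ID/OOD separation.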
Related papers
- TagOOD: A Novel Approach to Out-of-Distribution Detection via Vision-Language Representations and Class Center Learning [26.446233594630087]
We propose TagOOD, a novel approach for OOD detection using vision-language representations.
TagOOD trains a lightweight network on the extracted object features to learn representative class centers.
These centers capture the central tendencies of IND object classes, minimizing the influence of irrelevant image features during OOD detection.
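The class-center idea in the summary above can be sketched as follows: compute a center per ID class and score a test feature by its distance to the nearest center. The functions and toy data here are hypothetical stand-ins; TagOOD learns its centers with a lightweight network rather than taking plain means.

```python
import numpy as np

def class_centers(features, labels):
    # Mean feature vector per ID class: a simple stand-in for the
    # representative centers that TagOOD learns with a lightweight network.
    return np.stack([features[labels == c].mean(axis=0)
                     for c in np.unique(labels)])

def center_distance_score(x, centers):
    # Distance to the nearest class center; larger values suggest OOD.
    return np.linalg.norm(centers - x, axis=1).min()

# Toy ID features in two tight clusters (hypothetical data).
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (50, 8)),
                        rng.normal(5.0, 0.1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
centers = class_centers(feats, labels)

id_point = np.full(8, 0.05)   # near the class-0 center
ood_point = np.full(8, 2.5)   # far from both centers
```

Because each center summarizes the central tendency of one ID class, features that sit far from every center are flagged as OOD regardless of which irrelevant image details they contain.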
arXiv Detail & Related papers (2024-08-28T06:37:59Z)
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have close semantic contents to the in-distribution (ID) sample, which makes determining the OOD sample a Sorites Paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Out-of-distribution (OOD) samples are crucial when deploying machine learning models in open-world scenarios.
We propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to envision potential Outlier Exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z)
- Learning Transferable Negative Prompts for Out-of-Distribution Detection [22.983892817676495]
We introduce a novel OOD detection method, named 'NegPrompt', to learn a set of negative prompts.
It learns such negative prompts with ID data only, without any reliance on external outlier data.
Experiments on various ImageNet benchmarks show that NegPrompt surpasses state-of-the-art prompt-learning-based OOD detection methods.
arXiv Detail & Related papers (2024-04-04T07:07:34Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples vs. OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection [40.846633965439956]
This paper focuses on a few-shot OOD setting where there are only a few labeled IND data and massive unlabeled mixed data.
We propose an adaptive pseudo-labeling (APP) method for few-shot OOD detection.
arXiv Detail & Related papers (2023-10-20T09:48:52Z)
- Exploring Large Language Models for Multi-Modal Out-of-Distribution Detection [67.68030805755679]
Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class.
In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs.
arXiv Detail & Related papers (2023-10-12T04:14:28Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models [23.302018871162186]
We propose a novel one-class open-set OOD detector that leverages text-image pre-trained models in a zero-shot fashion.
Our approach is designed to detect anything not in-domain and offers the flexibility to detect a wide variety of OOD.
Our method shows superior performance over previous methods on all benchmarks.
arXiv Detail & Related papers (2023-05-26T18:58:56Z)
- Estimating Soft Labels for Out-of-Domain Intent Detection [122.68266151023676]
Out-of-Domain (OOD) intent detection is important for practical dialog systems.
We propose an adaptive soft pseudo labeling (ASoul) method that can estimate soft labels for pseudo OOD samples.
arXiv Detail & Related papers (2022-11-10T13:31:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.