Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD
Detection Using Text-image Models
- URL: http://arxiv.org/abs/2305.17207v1
- Date: Fri, 26 May 2023 18:58:56 GMT
- Title: Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD
Detection Using Text-image Models
- Authors: Yunhao Ge, Jie Ren, Jiaping Zhao, Kaifeng Chen, Andrew Gallagher,
Laurent Itti, Balaji Lakshminarayanan
- Abstract summary: We propose a novel one-class open-set OOD detector that leverages text-image pre-trained models in a zero-shot fashion.
Our approach is designed to detect anything not in-domain and offers the flexibility to detect a wide variety of OOD.
Our method shows superior performance over previous methods on all benchmarks.
- Score: 23.302018871162186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on the challenge of out-of-distribution (OOD) detection in deep
learning models, a crucial aspect in ensuring reliability. Despite considerable
effort, the problem remains significantly challenging in deep learning models
due to their propensity to output over-confident predictions for OOD inputs. We
propose a novel one-class open-set OOD detector that leverages text-image
pre-trained models in a zero-shot fashion and incorporates various descriptions
of in-domain and OOD. Our approach is designed to detect anything not in-domain
and offers the flexibility to detect a wide variety of OOD, defined via fine-
or coarse-grained labels, or even in natural language. We evaluate our approach
on challenging benchmarks including large-scale datasets containing
fine-grained, semantically similar classes, distributionally shifted images,
and multi-object images containing a mixture of in-domain and OOD objects. Our
method shows superior performance over previous methods on all benchmarks. Code
is available at https://github.com/gyhandy/One-Class-Anything
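The zero-shot scoring idea in the abstract, comparing an image embedding against text embeddings of in-domain and OOD descriptions, can be illustrated with a minimal sketch. Everything here (embeddings, the softmax-mass scoring rule, the temperature) is a hypothetical simplification, not the authors' implementation:

```python
import numpy as np

def l2_normalize(x):
    # Normalize rows to unit length so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def one_class_ood_score(image_emb, in_domain_text_embs, ood_text_embs, temp=0.01):
    """Score an image against in-domain vs. OOD text descriptions.

    Higher score -> more likely in-domain. Hypothetical rule: softmax over
    cosine similarities to all prompts, summing the probability mass that
    lands on the in-domain prompts.
    """
    prompts = np.vstack([in_domain_text_embs, ood_text_embs])
    sims = l2_normalize(prompts) @ l2_normalize(image_emb)
    logits = sims / temp
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[: len(in_domain_text_embs)].sum()

# Toy demonstration with random stand-in "embeddings".
rng = np.random.default_rng(0)
dim = 16
in_domain = rng.normal(size=(3, dim))   # e.g. prompts for in-domain classes
ood = rng.normal(size=(5, dim))         # e.g. prompts describing OOD content
img_near_id = in_domain[0] + 0.1 * rng.normal(size=dim)
img_near_ood = ood[0] + 0.1 * rng.normal(size=dim)
print(one_class_ood_score(img_near_id, in_domain, ood) >
      one_class_ood_score(img_near_ood, in_domain, ood))  # True
```

Because OOD descriptions can be any text, widening or narrowing the `ood_text_embs` pool is what gives this style of detector its open-vocabulary flexibility.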
Related papers
- TagOOD: A Novel Approach to Out-of-Distribution Detection via Vision-Language Representations and Class Center Learning [26.446233594630087]
We propose TagOOD, a novel approach for OOD detection using vision-language representations.
TagOOD trains a lightweight network on the extracted object features to learn representative class centers.
These centers capture the central tendencies of IND object classes, minimizing the influence of irrelevant image features during OOD detection.
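The class-center idea can be sketched in a few lines (a hypothetical simplification; TagOOD's lightweight network and object-feature extraction are not reproduced here): average the features of each in-distribution class into a center, then score a test feature by its distance to the nearest center.

```python
import numpy as np

def class_centers(features, labels):
    # One mean feature vector ("center") per in-distribution class.
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def center_distance_score(feature, centers):
    # OOD score: distance to the nearest class center (higher -> more OOD).
    return np.min(np.linalg.norm(centers - feature, axis=1))

rng = np.random.default_rng(1)
# Two toy IND classes clustered around different means.
feats = np.vstack([rng.normal(0.0, 0.1, size=(20, 8)),
                   rng.normal(3.0, 0.1, size=(20, 8))])
labels = np.array([0] * 20 + [1] * 20)
centers = class_centers(feats, labels)

ind_sample = rng.normal(0.0, 0.1, size=8)   # near class 0's center
ood_sample = rng.normal(-5.0, 0.1, size=8)  # far from both centers
print(center_distance_score(ind_sample, centers) <
      center_distance_score(ood_sample, centers))  # True
```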
arXiv Detail & Related papers (2024-08-28T06:37:59Z)
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Out-of-distribution (OOD) samples are crucial when deploying machine learning models in open-world scenarios.
We propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to envision potential outlier exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples vs. OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- Negative Label Guided OOD Detection with Pretrained Vision-Language Models [96.67087734472912]
Out-of-distribution (OOD) detection aims at identifying samples from unknown classes.
We propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.
arXiv Detail & Related papers (2024-03-29T09:19:52Z)
- Out-of-Distribution Detection Using Peer-Class Generated by Large Language Model [0.0]
Out-of-distribution (OOD) detection is a critical task to ensure the reliability and security of machine learning models.
In this paper, a novel method called ODPC is proposed, in which specific prompts to generate OOD peer classes of ID semantics are designed by a large language model.
Experiments on five benchmark datasets show that the method we propose can yield state-of-the-art results.
arXiv Detail & Related papers (2024-03-20T06:04:05Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data.
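The second idea, overlaying tail-class images onto context-rich OOD images, can be sketched as a simple paste augmentation (hypothetical parameters and resizing; EAT's actual augmentation pipeline is not reproduced here):

```python
import numpy as np

def overlay_tail_on_ood(tail_img, ood_img, scale=0.5, seed=None):
    """Paste a downscaled tail-class image onto an OOD background image.

    The tail-class object keeps its label while borrowing context from
    the OOD image, augmenting the context-limited tail classes.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = ood_img.shape
    th, tw = int(h * scale), int(w * scale)
    # Nearest-neighbor downscale of the tail image to (th, tw).
    ys = np.arange(th) * tail_img.shape[0] // th
    xs = np.arange(tw) * tail_img.shape[1] // tw
    patch = tail_img[np.ix_(ys, xs)]
    # Paste the patch at a random location on a copy of the OOD image.
    y0 = rng.integers(0, h - th + 1)
    x0 = rng.integers(0, w - tw + 1)
    out = ood_img.copy()
    out[y0:y0 + th, x0:x0 + tw] = patch
    return out

tail = np.full((8, 8, 3), 255, dtype=np.uint8)   # toy white "object"
ood = np.zeros((16, 16, 3), dtype=np.uint8)      # toy black "context"
aug = overlay_tail_on_ood(tail, ood, scale=0.5, seed=0)
print(aug.shape)  # (16, 16, 3)
```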
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Exploring Large Language Models for Multi-Modal Out-of-Distribution Detection [67.68030805755679]
Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class.
In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs.
arXiv Detail & Related papers (2023-10-12T04:14:28Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method uses a mask to identify the memorized atypical samples, then fine-tunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Improving Out-of-Distribution Detection with Disentangled Foreground and Background Features [23.266183020469065]
We propose a novel framework that disentangles foreground and background features from ID training samples via a dense prediction approach.
It is a generic framework that allows for a seamless combination with various existing OOD detection methods.
arXiv Detail & Related papers (2023-03-15T16:12:14Z)
- A Simple Test-Time Method for Out-of-Distribution Detection [45.11199798139358]
This paper proposes a simple Test-time Linear Training (ETLT) method for OOD detection.
We find that the probabilities of input images being out-of-distribution are surprisingly linearly correlated with the features extracted by neural networks.
We propose an online variant of the proposed method, which achieves promising performance and is more practical in real-world applications.
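The test-time linear idea can be sketched as follows (a hypothetical setup; ETLT's actual feature extraction, pseudo-labeling, and online update are not reproduced): fit a linear model mapping features to an OOD indicator, updating it with one SGD step per incoming sample.

```python
import numpy as np

class OnlineLinearOODScorer:
    """Tiny online linear OOD scorer (hypothetical sketch).

    Maintains weights w and bias b, updated with one least-squares SGD
    step per sample, exploiting the approximately linear relation between
    features and OOD-ness described in the abstract.
    """
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        # Higher score -> more likely OOD.
        return float(self.w @ x + self.b)

    def update(self, x, target):
        # One SGD step on the squared error toward the (pseudo-)label.
        err = self.score(x) - target
        self.w -= self.lr * err * x
        self.b -= self.lr * err

rng = np.random.default_rng(2)
dim = 4
scorer = OnlineLinearOODScorer(dim)
# Stream of pseudo-labeled test features: ID near 0, OOD shifted along axis 0.
for _ in range(200):
    x_id = rng.normal(0.0, 0.2, size=dim)
    x_ood = rng.normal(0.0, 0.2, size=dim)
    x_ood[0] += 2.0
    scorer.update(x_id, 0.0)
    scorer.update(x_ood, 1.0)
print(scorer.score(np.array([2.0, 0.0, 0.0, 0.0])) >
      scorer.score(np.zeros(dim)))  # True
```

The online variant mentioned in the abstract corresponds to this streaming update: no test set is needed up front, and the scorer adapts as inputs arrive.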
arXiv Detail & Related papers (2022-07-17T16:02:58Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to be aware if a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.