CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No
- URL: http://arxiv.org/abs/2308.12213v2
- Date: Thu, 24 Aug 2023 00:48:47 GMT
- Title: CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No
- Authors: Hualiang Wang, Yi Li, Huifeng Yao, Xiaomeng Li
- Abstract summary: Out-of-distribution (OOD) detection refers to training the model on an in-distribution (ID) dataset to classify whether the input images come from unknown classes.
This paper presents a novel method, namely CLIP saying no, which empowers the logic of saying no within CLIP.
- Score: 12.869519519172275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection refers to training the model on an
in-distribution (ID) dataset to classify whether the input images come from
unknown classes. Considerable effort has been invested in designing various OOD
detection methods based on either convolutional neural networks or
transformers. However, zero-shot OOD detection methods driven by CLIP, which
only require class names for ID, have received less attention. This paper
presents a novel method, namely CLIP saying no (CLIPN), which empowers the
logic of saying no within CLIP. Our key motivation is to equip CLIP with the
capability of distinguishing OOD and ID samples using positive-semantic prompts
and negation-semantic prompts. Specifically, we design a novel learnable no
prompt and a no text encoder to capture negation semantics within images.
Subsequently, we introduce two loss functions: the image-text binary-opposite
loss and the text semantic-opposite loss, which we use to teach CLIPN to
associate images with no prompts, thereby enabling it to identify unknown
samples. Furthermore, we propose two threshold-free inference algorithms to
perform OOD detection by utilizing negation semantics from no prompts and the
text encoder. Experimental results on 9 benchmark datasets (3 ID datasets and 6
OOD datasets) for the OOD detection task demonstrate that CLIPN, based on
ViT-B-16, outperforms 7 widely used algorithms by at least 2.34% and 11.64% in
terms of AUROC and FPR95 for zero-shot OOD detection on ImageNet-1K. Our CLIPN
can serve as a solid foundation for effectively leveraging CLIP in downstream
OOD tasks. The code is available at https://github.com/xmed-lab/CLIPN.
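Since only the abstract is available here, the following is a minimal NumPy sketch of the general inference idea: score a test image by how strongly it matches negation ("no") prompts relative to the standard class prompts. The embedding shapes, the temperature, and the combination rule are illustrative assumptions, not necessarily the paper's exact threshold-free algorithms.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def no_prompt_ood_score(img_feat, yes_feats, no_feats, temperature=0.01):
    """Toy OOD score from paired "yes"/"no" prompts (higher = more likely OOD).

    img_feat:  (d,)   L2-normalized image embedding
    yes_feats: (K, d) embeddings of standard prompts, e.g. "a photo of a <class>"
    no_feats:  (K, d) embeddings of negation prompts, e.g. "a photo without a <class>"
    """
    sim_yes = yes_feats @ img_feat / temperature      # (K,) similarity to each ID class
    sim_no = no_feats @ img_feat / temperature        # (K,) similarity to each "no" class
    p_class = softmax(sim_yes)                        # class posterior over ID classes
    # Per class: probability that the negation prompt beats the standard prompt.
    p_no = 1.0 / (1.0 + np.exp(sim_yes - sim_no))     # sigmoid of the "no" margin
    # Expected "no" probability under the class posterior: large when the image
    # matches negation semantics even for its most likely ID classes.
    return float(np.sum(p_class * p_no))

# Example with random unit vectors standing in for CLIP image/text embeddings.
rng = np.random.default_rng(0)
d, K = 512, 10
unit = lambda a: a / np.linalg.norm(a, axis=-1, keepdims=True)
img = unit(rng.normal(size=d))
yes, no = unit(rng.normal(size=(K, d))), unit(rng.normal(size=(K, d)))
print(no_prompt_ood_score(img, yes, no))
```

In practice the features would come from CLIP's image encoder and from the standard and "no" text encoders; the random vectors are only there so the snippet runs on its own.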
Related papers
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Detecting out-of-distribution (OOD) samples is crucial when deploying machine learning models in open-world scenarios.
We propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to envision potential Outlier Exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z) - Learning Transferable Negative Prompts for Out-of-Distribution Detection [22.983892817676495]
We introduce a novel OOD detection method, named 'NegPrompt', to learn a set of negative prompts.
It learns such negative prompts with ID data only, without any reliance on external outlier data.
Experiments on various ImageNet benchmarks show that NegPrompt surpasses state-of-the-art prompt-learning-based OOD detection methods.
arXiv Detail & Related papers (2024-04-04T07:07:34Z) - A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z) - Negative Label Guided OOD Detection with Pretrained Vision-Language Models [96.67087734472912]
Out-of-distribution (OOD) detection aims at identifying samples from unknown classes.
We propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases (a generic sketch of this negative-label scoring idea appears after this list).
arXiv Detail & Related papers (2024-03-29T09:19:52Z) - ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection [47.16254775587534]
We propose a novel OOD detection framework that discovers ID-like outliers using CLIP.
Benefiting from the powerful CLIP, we only need a small number of ID samples to learn the prompts of the model.
Our method achieves superior few-shot learning performance on various real-world image datasets.
arXiv Detail & Related papers (2023-11-26T09:06:40Z) - LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning [37.36999826208225]
We present a novel vision-language prompt learning approach for few-shot out-of-distribution (OOD) detection.
LoCoOp performs OOD regularization that utilizes portions of CLIP's local features as OOD features during training.
LoCoOp outperforms existing zero-shot and fully supervised detection methods.
arXiv Detail & Related papers (2023-06-02T06:33:08Z) - Out-of-Distributed Semantic Pruning for Robust Semi-Supervised Learning [17.409939628100517]
We propose a unified framework termed OOD Semantic Pruning (OSP), which aims at pruning OOD semantics out from in-distribution (ID) features.
OSP surpasses the previous state-of-the-art by 13.7% on accuracy for ID classification and 5.9% on AUROC for OOD detection on TinyImageNet dataset.
arXiv Detail & Related papers (2023-05-29T15:37:07Z) - Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need [52.88953913542445]
We find, surprisingly, that simply using reconstruction-based methods can significantly boost OOD detection performance.
We take Masked Image Modeling as a pretext task for our OOD detection framework (MOOD).
arXiv Detail & Related papers (2023-02-06T08:24:41Z) - UniNL: Aligning Representation Learning with Scoring Function for OOD Detection via Unified Neighborhood Learning [32.69035328161356]
We propose a unified neighborhood learning framework (UniNL) to detect OOD intents.
Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection.
arXiv Detail & Related papers (2022-10-19T17:06:34Z) - Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
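Several of the entries above, NegLabel in particular, score OOD-ness post hoc by measuring how much of a CLIP-style softmax an image spends on labels outside the ID set. The sketch below is a rough NumPy illustration of that mechanism under assumed precomputed, L2-normalized embeddings; the actual NegLabel grouping and scoring details differ, so treat this as a generic illustration rather than the published method.

```python
import numpy as np

def negative_label_ood_score(img_feat, id_label_feats, neg_label_feats, temperature=0.01):
    """Toy post hoc OOD score using mined negative labels (higher = more likely OOD).

    img_feat:        (d,)   L2-normalized image embedding
    id_label_feats:  (K, d) text embeddings of the K in-distribution class names
    neg_label_feats: (M, d) text embeddings of M negative labels mined from a corpus
    """
    all_feats = np.concatenate([id_label_feats, neg_label_feats], axis=0)  # (K+M, d)
    logits = all_feats @ img_feat / temperature
    logits -= logits.max()                         # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over ID + negative labels
    id_mass = probs[: len(id_label_feats)].sum()   # probability mass kept by ID labels
    return float(1.0 - id_mass)
```

The intuition is that ID images concentrate their softmax mass on ID class names, while OOD images leak mass to the mined negative labels, so one minus the ID mass rises for OOD inputs.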