LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning
- URL: http://arxiv.org/abs/2306.01293v3
- Date: Wed, 25 Oct 2023 04:22:02 GMT
- Title: LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning
- Authors: Atsuyuki Miyai, Qing Yu, Go Irie, Kiyoharu Aizawa
- Abstract summary: We present a novel vision-language prompt learning approach for few-shot out-of-distribution (OOD) detection.
LoCoOp performs OOD regularization that utilizes portions of CLIP's local features as OOD features during training.
LoCoOp outperforms existing zero-shot and fully supervised detection methods.
- Score: 37.36999826208225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel vision-language prompt learning approach for few-shot
out-of-distribution (OOD) detection. Few-shot OOD detection aims to detect OOD
images from classes that are unseen during training using only a few labeled
in-distribution (ID) images. While prompt learning methods such as CoOp have
shown effectiveness and efficiency in few-shot ID classification, they still
face limitations in OOD detection due to the potential presence of
ID-irrelevant information in text embeddings. To address this issue, we
introduce a new approach called Local regularized Context Optimization
(LoCoOp), which performs OOD regularization that utilizes portions of CLIP's
local features as OOD features during training. CLIP's local features contain
many ID-irrelevant nuisances (e.g., backgrounds); by learning to push them away
from the ID class text embeddings, we remove these nuisances from the ID class
text embeddings and enhance the separation between ID and OOD.
Experiments on the large-scale ImageNet OOD detection benchmarks demonstrate
the superiority of our LoCoOp over zero-shot, fully supervised, and prompt
learning detection methods. Notably, even in a one-shot setting (just one label
per class), LoCoOp outperforms existing zero-shot and fully supervised
detection methods. The code is available at https://github.com/AtsuMiyai/LoCoOp.
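The abstract describes the mechanism only at a high level: treat ID-irrelevant
local CLIP features as surrogate OOD features and push them away from the ID
class text embeddings during prompt learning. Below is a minimal PyTorch sketch
of that idea; the region-selection rule (ground-truth class outside the top-K
most similar classes) and the entropy-maximization loss are illustrative
assumptions based on the abstract, not the authors' exact implementation (see
the repository above for that).

```python
# Minimal sketch (not the official LoCoOp code): OOD regularization that pushes
# ID-irrelevant local features away from the ID class text embeddings.
# Assumes local features and text embeddings are already L2-normalized CLIP
# outputs; the top-K rule and entropy loss are illustrative choices.
import torch
import torch.nn.functional as F


def local_ood_regularizer(local_feats, text_embeds, gt_label, top_k=20, temperature=0.01):
    """
    local_feats: (R, D) region-level image features of one ID training image.
    text_embeds: (C, D) prompt-derived text embeddings, one per ID class.
    gt_label:    int, ground-truth ID class of the image.
    Returns a scalar loss that pushes ID-irrelevant regions away from all
    ID class text embeddings by flattening their class distribution.
    """
    sims = local_feats @ text_embeds.t() / temperature    # (R, C) similarities

    # Treat a region as ID-irrelevant (OOD-like) if its ground-truth class is
    # not among its top-K most similar classes.
    ranks = sims.argsort(dim=1, descending=True)          # (R, C)
    in_top_k = (ranks[:, :top_k] == gt_label).any(dim=1)  # (R,)
    ood_regions = sims[~in_top_k]                         # (M, C), M <= R

    if ood_regions.numel() == 0:
        return sims.new_zeros(())

    # Entropy maximization: a flat distribution over ID classes means the
    # region is far from every ID class text embedding.
    probs = F.softmax(ood_regions, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return -entropy.mean()  # minimizing this maximizes entropy


# Toy usage with random normalized features standing in for CLIP outputs.
torch.manual_seed(0)
regions = F.normalize(torch.randn(196, 512), dim=1)   # e.g. 14x14 local features
classes = F.normalize(torch.randn(1000, 512), dim=1)  # ImageNet-scale ID classes
print(local_ood_regularizer(regions, classes, gt_label=3).item())
```

In training, a term like this would be added to the usual CoOp-style
cross-entropy on the global image feature, so that the learned prompt context
both fits the ID classes and excludes background-like local regions.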
Related papers
- Envisioning Outlier Exposure by Large Language Models for Out-of-Distribution Detection [71.93411099797308]
Detecting out-of-distribution (OOD) samples is crucial when deploying machine learning models in open-world scenarios.
To avoid relying on actual OOD data, we propose to leverage the expert knowledge and reasoning capability of large language models (LLMs) to envision potential Outlier Exposure, termed EOE.
EOE can be generalized to different tasks, including far, near, and fine-grained OOD detection.
EOE achieves state-of-the-art performance across different OOD tasks and can be effectively scaled to the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-06-02T17:09:48Z)
- Learning Transferable Negative Prompts for Out-of-Distribution Detection [22.983892817676495]
We introduce a novel OOD detection method, named 'NegPrompt', to learn a set of negative prompts.
It learns such negative prompts with ID data only, without any reliance on external outlier data.
Experiments on various ImageNet benchmarks show that NegPrompt surpasses state-of-the-art prompt-learning-based OOD detection methods.
arXiv Detail & Related papers (2024-04-04T07:07:34Z)
- A noisy elephant in the room: Is your out-of-distribution detector robust to label noise? [49.88894124047644]
We take a closer look at 20 state-of-the-art OOD detection methods.
We show that poor separation between incorrectly classified ID samples and OOD samples is an overlooked yet important limitation of existing methods.
arXiv Detail & Related papers (2024-04-02T09:40:22Z)
- CLIP-driven Outliers Synthesis for few-shot OOD detection [40.6496321698913]
Few-shot OOD detection focuses on recognizing out-of-distribution (OOD) images that belong to classes unseen during training.
Up to now, a mainstream strategy is based on large-scale vision-language models, such as CLIP.
We propose CLIP-driven Outliers Synthesis (CLIP-OS) to overcome the lack of reliable OOD supervision information.
arXiv Detail & Related papers (2024-03-30T11:28:05Z)
- Exploring Large Language Models for Multi-Modal Out-of-Distribution Detection [67.68030805755679]
Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class.
In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs.
arXiv Detail & Related papers (2023-10-12T04:14:28Z)
- CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No [12.869519519172275]
Out-of-distribution (OOD) detection refers to training the model on an in-distribution (ID) dataset to classify whether the input images come from unknown classes.
This paper presents a novel method, namely CLIP saying no (CLIPN), which empowers the logic of saying no within CLIP.
arXiv Detail & Related papers (2023-08-23T15:51:36Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), a first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Generalized Open-World Semi-Supervised Object Detection [22.058195650206944]
We introduce an ensemble-based OOD Explorer for detection and classification, and an adaptable semi-supervised object detection framework.
We demonstrate that our method performs competitively against state-of-the-art OOD detection algorithms and also significantly boosts the semi-supervised learning performance for both ID and OOD classes.
arXiv Detail & Related papers (2023-07-28T17:59:03Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.