Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection
- URL: http://arxiv.org/abs/2506.17633v1
- Date: Sat, 21 Jun 2025 08:31:29 GMT
- Title: Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection
- Authors: Xiang Fang, Arvind Easwaran, Blaise Genest
- Abstract summary: Out-of-distribution (OOD) detection attempts to distinguish outlier samples to prevent models trained on the in-distribution (ID) dataset from producing unreliable outputs. Most OOD detection methods require many ID samples for training, which seriously limits their real-world applications. We propose a novel network, the Adaptive Multi-prompt Contrastive Network (AMCN), which adapts the ID-OOD separation boundary by learning inter- and intra-class distributions.
- Score: 4.938957922033169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection attempts to distinguish outlier samples to prevent models trained on the in-distribution (ID) dataset from producing unreliable outputs. Most OOD detection methods require many ID samples for training, which seriously limits their real-world applications. To this end, we target a challenging setting: few-shot OOD detection, where only a few labeled ID samples are available. Therefore, few-shot OOD detection is much more challenging than the traditional OOD detection setting. Previous few-shot OOD detection works ignore the distinct diversity between different classes. In this paper, we propose a novel network, the Adaptive Multi-prompt Contrastive Network (AMCN), which adapts the ID-OOD separation boundary by learning inter- and intra-class distributions. To compensate for the absence of OOD image samples and the scarcity of ID image samples, we leverage CLIP, which connects text with images, to engineer learnable ID and OOD textual prompts. Specifically, we first generate adaptive prompts (learnable ID prompts, label-fixed OOD prompts, and label-adaptive OOD prompts). Then, we generate an adaptive class boundary for each class by introducing a class-wise threshold. Finally, we propose a prompt-guided ID-OOD separation module to control the margin between ID and OOD prompts. Experimental results show that AMCN outperforms other state-of-the-art works.
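As a rough illustration of the decision rule the abstract describes, a CLIP-style classifier can compare an image embedding against ID and OOD text-prompt embeddings and apply a class-wise threshold. This is a minimal sketch with hypothetical function and variable names, not the authors' implementation:

```python
import numpy as np

def cosine_sim(vec, mat):
    """Cosine similarity between a vector and each row of a matrix."""
    v = vec / np.linalg.norm(vec)
    m = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    return m @ v

def detect(image_emb, id_prompt_embs, ood_prompt_embs, class_thresholds):
    """Predict the best-matching ID class, then flag the image as OOD
    if its similarity falls below that class's threshold or an OOD
    prompt matches at least as well (sketch of the idea only)."""
    id_scores = cosine_sim(image_emb, id_prompt_embs)
    ood_scores = cosine_sim(image_emb, ood_prompt_embs)
    best_class = int(np.argmax(id_scores))
    is_ood = (id_scores[best_class] < class_thresholds[best_class]
              or ood_scores.max() >= id_scores[best_class])
    return best_class, bool(is_ood)
```

A per-class threshold (rather than a single global one) is what lets the boundary adapt to classes with different intra-class spread, which is the motivation the abstract gives.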
Related papers
- EOOD: Entropy-based Out-of-distribution Detection [9.546208844692035]
Deep neural networks (DNNs) often exhibit overconfidence when encountering out-of-distribution (OOD) samples. We propose an entropy-based out-of-distribution detection framework.
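The core signal behind entropy-based detection is simple: the softmax entropy of a model's output is low when the model is confident and high when probability mass is spread across classes. A minimal sketch of such a score (generic illustration, not the EOOD paper's exact method):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_ood_score(logits):
    """Shannon entropy of the predictive distribution; higher values
    suggest the model is unsure, a common indicator of OOD input."""
    p = softmax(logits)
    return float(-np.sum(p * np.log(p + 1e-12)))
```

A sample is flagged as OOD when this score exceeds a threshold chosen on held-out ID data.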
arXiv Detail & Related papers (2025-04-04T10:57:03Z) - Semantic or Covariate? A Study on the Intractable Case of Out-of-Distribution Detection [70.57120710151105]
We provide a more precise definition of the Semantic Space for the ID distribution.
We also define the "Tractable OOD" setting which ensures the distinguishability of OOD and ID distributions.
arXiv Detail & Related papers (2024-11-18T03:09:39Z) - LAPT: Label-driven Automated Prompt Tuning for OOD Detection with Vision-Language Models [17.15755066370757]
Label-driven Automated Prompt Tuning (LAPT) is a novel approach to OOD detection that reduces the need for manual prompt engineering.
We develop distribution-aware prompts with in-distribution (ID) class names and negative labels mined automatically.
LAPT consistently outperforms manually crafted prompts, setting a new standard for OOD detection.
arXiv Detail & Related papers (2024-07-12T03:30:53Z) - Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have semantic content close to that of in-distribution (ID) samples, which makes determining whether a sample is OOD a Sorites paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z) - Negative Label Guided OOD Detection with Pretrained Vision-Language Models [96.67087734472912]
Out-of-distribution (OOD) detection aims at identifying samples from unknown classes.
We propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.
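The idea behind negative-label scoring can be sketched as follows: compare an image embedding against both ID label embeddings and a pool of negative-label embeddings, and measure how much softmax mass lands on the ID labels. This is a simplified, hypothetical version of the NegLabel-style score, not the paper's exact formulation:

```python
import numpy as np

def neglabel_score(image_emb, id_label_embs, neg_label_embs, temp=0.01):
    """Fraction of temperature-scaled softmax mass assigned to ID
    labels versus negative labels; a low score suggests OOD."""
    def cos(mat):
        m = mat / np.linalg.norm(mat, axis=1, keepdims=True)
        v = image_emb / np.linalg.norm(image_emb)
        return m @ v
    sims = np.concatenate([cos(id_label_embs), cos(neg_label_embs)]) / temp
    e = np.exp(sims - sims.max())
    p = e / e.sum()
    return float(p[:len(id_label_embs)].sum())
```

Because the method is post hoc, it needs no retraining: only text embeddings for the ID class names and the mined negative labels.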
arXiv Detail & Related papers (2024-03-29T09:19:52Z) - Out-of-Distribution Detection Using Peer-Class Generated by Large Language Model [0.0]
Out-of-distribution (OOD) detection is a critical task to ensure the reliability and security of machine learning models.
In this paper, a novel method called ODPC is proposed, in which a large language model designs specific prompts that generate OOD peer classes of the ID semantics.
Experiments on five benchmark datasets show that the method we propose can yield state-of-the-art results.
arXiv Detail & Related papers (2024-03-20T06:04:05Z) - ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection [47.16254775587534]
We propose a novel OOD detection framework that discovers ID-like outliers using CLIP [Radford et al., ICML 2021].
Benefiting from the powerful CLIP, we only need a small number of ID samples to learn the prompts of the model.
Our method achieves superior few-shot learning performance on various real-world image datasets.
arXiv Detail & Related papers (2023-11-26T09:06:40Z) - Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is critical for identifying test samples that deviate from in-distribution (ID) data, ensuring network robustness and reliability. This paper presents a flexible framework for OOD knowledge distillation that extracts OOD-sensitive information from a network to develop a binary classifier capable of distinguishing between ID and OOD samples.
arXiv Detail & Related papers (2023-11-14T08:05:02Z) - APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection [40.846633965439956]
This paper focuses on a few-shot OOD setting where there are only a few labeled IND data and massive unlabeled mixed data.
We propose an adaptive pseudo-labeling (APP) method for few-shot OOD detection.
arXiv Detail & Related papers (2023-10-20T09:48:52Z) - From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
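Distance-based OOD detection, which MODE builds on, scores a test sample by how far its feature vector lies from the ID training data in representation space. A minimal sketch using nearest-class-centroid distance (a generic illustration with hypothetical names, not MODE itself):

```python
import numpy as np

def centroid_ood_score(feature, class_centroids):
    """Euclidean distance from a feature vector to the nearest ID
    class centroid; larger distances indicate likely OOD samples."""
    dists = np.linalg.norm(class_centroids - feature, axis=1)
    return float(dists.min())
```

Practical variants replace the Euclidean distance with a Mahalanobis distance or k-nearest-neighbor distance over stored ID features.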
arXiv Detail & Related papers (2023-08-20T11:56:25Z) - General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z) - AUTO: Adaptive Outlier Optimization for Test-Time OOD Detection [79.51071170042972]
Out-of-distribution (OOD) detection aims to detect test samples that do not fall into any training in-distribution (ID) classes. Data safety and privacy make it infeasible to collect task-specific outliers in advance for different scenarios. We present test-time OOD detection, which allows the deployed model to utilize real OOD data from the unlabeled data stream during testing.
arXiv Detail & Related papers (2023-03-22T02:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.