PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection
- URL: http://arxiv.org/abs/2404.05231v2
- Date: Tue, 16 Jul 2024 08:02:46 GMT
- Title: PromptAD: Learning Prompts with only Normal Samples for Few-Shot Anomaly Detection
- Authors: Xiaofan Li, Zhizhong Zhang, Xin Tan, Chengwei Chen, Yanyun Qu, Yuan Xie, Lizhuang Ma
- Abstract summary: This paper proposes a one-class prompt learning method for few-shot anomaly detection, termed PromptAD.
For image-level/pixel-level anomaly detection, PromptAD achieves first place in 11/12 few-shot settings on MVTec and VisA.
- Score: 59.34973469354926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-language models have brought great improvements to few-shot industrial anomaly detection, which usually requires designing hundreds of prompts through prompt engineering. For automated scenarios, we first use conventional prompt learning with the many-class paradigm as the baseline to learn prompts automatically, but find that it does not work well for one-class anomaly detection. To address this problem, this paper proposes a one-class prompt learning method for few-shot anomaly detection, termed PromptAD. First, we propose semantic concatenation, which transposes normal prompts into anomaly prompts by concatenating them with anomaly suffixes, thus constructing a large number of negative samples to guide prompt learning in the one-class setting. Furthermore, to mitigate the training challenge caused by the absence of anomaly images, we introduce the concept of an explicit anomaly margin, which explicitly controls the margin between normal prompt features and anomaly prompt features through a hyper-parameter. For image-level/pixel-level anomaly detection, PromptAD achieves first place in 11/12 few-shot settings on MVTec and VisA.
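The two ideas in the abstract can be illustrated with a minimal numpy sketch. The function names, the suffix strings, and the exact form of the margin loss are assumptions for illustration (the abstract only says the margin between normal and anomaly prompt features is controlled by a hyper-parameter; a hinge loss on feature distances is one plausible reading), not the paper's actual implementation.

```python
import numpy as np

def semantic_concatenation(normal_prompts, anomaly_suffixes):
    """Transpose normal prompts into anomaly prompts by appending
    anomaly suffixes, producing negative samples for one-class
    prompt learning."""
    return [f"{p} {s}" for p in normal_prompts for s in anomaly_suffixes]

def explicit_anomaly_margin_loss(normal_feats, anomaly_feats, margin=1.0):
    """Hinge-style loss that pushes every anomaly prompt feature at
    least `margin` away from the mean normal prompt feature, so the
    margin can be trained without any anomaly images."""
    center = normal_feats.mean(axis=0)
    dists = np.linalg.norm(anomaly_feats - center, axis=1)
    return float(np.maximum(0.0, margin - dists).mean())

# Hypothetical usage: two anomaly prompts built from one normal prompt.
prompts = semantic_concatenation(
    ["a photo of a bottle"], ["with a crack", "with contamination"]
)
```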
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z) - AnoPLe: Few-Shot Anomaly Detection via Bi-directional Prompt Learning with Only Normal Samples [6.260747047974035]
AnoPLe is a multi-modal prompt learning method designed for anomaly detection without prior knowledge of anomalies.
The experimental results demonstrate that AnoPLe achieves strong few-shot anomaly detection (FAD) performance, recording 94.1% and 86.2% Image AUROC on MVTec-AD and VisA respectively.
arXiv Detail & Related papers (2024-08-24T08:41:19Z) - Human-Free Automated Prompting for Vision-Language Anomaly Detection: Prompt Optimization with Meta-guiding Prompt Scheme [19.732769780675977]
Pre-trained vision-language models (VLMs) are highly adaptable to various downstream tasks through few-shot learning.
Traditional methods depend on human-crafted prompts that require prior knowledge of specific anomaly types.
Our goal is to develop a human-free prompt-based anomaly detection framework that optimally learns prompts through data-driven methods.
arXiv Detail & Related papers (2024-06-26T09:29:05Z) - Prior Normality Prompt Transformer for Multi-class Industrial Image Anomaly Detection [6.865429486202104]
We introduce Prior Normality Prompt Transformer (PNPT) for multi-class anomaly detection.
PNPT strategically incorporates normal semantics prompting to mitigate the "identical mapping" problem.
This entails integrating a prior normality prompt into the reconstruction process, yielding a dual-stream model.
arXiv Detail & Related papers (2024-06-17T13:10:04Z) - Random Word Data Augmentation with CLIP for Zero-Shot Anomaly Detection [3.75292409381511]
This paper presents a novel method that leverages a visual-language model, CLIP, as a data source for zero-shot anomaly detection.
Using the generated embeddings as training data, a feed-forward neural network learns to extract features of normal and anomalous images from CLIP's embeddings.
Experimental results demonstrate that our method achieves state-of-the-art performance without laborious prompt ensembling in zero-shot setups.
arXiv Detail & Related papers (2023-08-22T01:55:03Z) - Self-regulating Prompts: Foundational Model Adaptation without Forgetting [112.66832145320434]
We introduce a self-regularization framework for prompting called PromptSRC.
PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations.
arXiv Detail & Related papers (2023-07-13T17:59:35Z) - Bayesian Prompt Learning for Image-Language Model Generalization [64.50204877434878]
We use the regularization ability of Bayesian methods to frame prompt learning as a variational inference problem.
Our approach regularizes the prompt space, reduces overfitting to the seen prompts and improves the prompt generalization on unseen prompts.
We demonstrate empirically on 15 benchmarks that Bayesian prompt learning provides an appropriate coverage of the prompt space.
arXiv Detail & Related papers (2022-10-05T17:05:56Z) - Prompt-aligned Gradient for Prompt Tuning [63.346864107288766]
We present Prompt-aligned Gradient, dubbed ProGrad, to prevent prompt tuning from forgetting the general knowledge learned from vision-language models (VLMs).
ProGrad only updates the prompt whose gradient is aligned to the "general direction", which is represented as the gradient of the KL loss of the pre-defined prompt prediction.
Experiments demonstrate the stronger few-shot generalization ability of ProGrad over state-of-the-art prompt tuning methods.
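The update rule described above (keep the task gradient only insofar as it agrees with the "general direction") can be sketched in a few lines of numpy. The function name is hypothetical, and projecting out the conflicting component when the dot product is negative is an assumption about the exact rule, not ProGrad's verbatim implementation.

```python
import numpy as np

def prograd_update(g_task, g_general):
    """Align the task gradient with the general direction (the gradient
    of the KL loss against the pre-defined prompt's predictions).
    If the two gradients agree (non-negative dot product), keep the
    task gradient; otherwise, project out the conflicting component."""
    dot = np.dot(g_task, g_general)
    if dot >= 0:
        return g_task
    return g_task - dot / np.dot(g_general, g_general) * g_general
```

With `g_task = [-1, 1]` and `g_general = [1, 0]`, the gradients conflict, so the update removes the component along `g_general`, yielding `[0, 1]`.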
arXiv Detail & Related papers (2022-05-30T06:05:21Z) - UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection [103.06327681038304]
We propose a supervised open-set benchmark composed of multiple virtual scenes for video anomaly detection.
Unlike existing data sets, we introduce abnormal events annotated at the pixel level at training time.
We show that UBnormal can enhance the performance of a state-of-the-art anomaly detection framework.
arXiv Detail & Related papers (2021-11-16T17:28:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.