AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection
- URL: http://arxiv.org/abs/2310.18961v7
- Date: Thu, 14 Mar 2024 14:08:05 GMT
- Title: AnomalyCLIP: Object-agnostic Prompt Learning for Zero-shot Anomaly Detection
- Authors: Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, Jiming Chen
- Abstract summary: AnomalyCLIP learns object-agnostic text prompts to capture generic normality and abnormality in an image.
It achieves superior zero-shot performance in detecting and segmenting anomalies across datasets with highly diverse class semantics.
- Score: 30.679012320439625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot anomaly detection (ZSAD) requires detection models trained using auxiliary data to detect anomalies without any training sample in a target dataset. It is a crucial task when training data is not accessible due to various concerns, e.g., data privacy, yet it is challenging since the models need to generalize to anomalies across different domains where the appearance of foreground objects, abnormal regions, and background features, such as defects/tumors on different products/organs, can vary significantly. Recently, large pre-trained vision-language models (VLMs), such as CLIP, have demonstrated strong zero-shot recognition ability in various vision tasks, including anomaly detection. However, their ZSAD performance is weak since the VLMs focus more on modeling the class semantics of the foreground objects than the abnormality/normality in the images. In this paper, we introduce a novel approach, namely AnomalyCLIP, to adapt CLIP for accurate ZSAD across different domains. The key insight of AnomalyCLIP is to learn object-agnostic text prompts that capture generic normality and abnormality in an image regardless of its foreground objects. This allows our model to focus on the abnormal image regions rather than the object semantics, enabling generalized normality and abnormality recognition on diverse types of objects. Large-scale experiments on 17 real-world anomaly detection datasets show that AnomalyCLIP achieves superior zero-shot performance in detecting and segmenting anomalies in datasets of highly diverse class semantics from various defect inspection and medical imaging domains. Code will be made available at https://github.com/zqhang/AnomalyCLIP.
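The core scoring idea described in the abstract, comparing an image embedding against generic "normal"/"abnormal" text embeddings rather than class-specific prompts, can be sketched in a few lines of numpy. The embeddings below are random stand-ins, not AnomalyCLIP's learned prompts or CLIP's actual encoders; this only illustrates the CLIP-style softmax-over-cosine-similarity scoring:

```python
import numpy as np

def anomaly_score(image_feat, normal_emb, abnormal_emb, temperature=0.07):
    """Softmax over cosine similarities to object-agnostic
    'normal' and 'abnormal' text embeddings (CLIP-style scoring)."""
    text = np.stack([normal_emb, abnormal_emb])
    text = text / np.linalg.norm(text, axis=1, keepdims=True)
    img = image_feat / np.linalg.norm(image_feat)
    logits = text @ img / temperature
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()
    return probs[1]  # probability assigned to the 'abnormal' prompt

# Toy stand-in embeddings; in AnomalyCLIP these come from learnable
# prompts passed through CLIP's frozen text encoder.
rng = np.random.default_rng(0)
normal_emb = rng.normal(size=512)
abnormal_emb = rng.normal(size=512)
img = normal_emb + 0.1 * rng.normal(size=512)  # image feature near 'normal'
print(anomaly_score(img, normal_emb, abnormal_emb))  # low score
```

Because the prompts encode only normality vs. abnormality, nothing in this scoring depends on the object class, which is what lets one prompt pair transfer across datasets.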
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly Detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con2, which addresses this problem by placing normal training data into distinct contexts.
Our approach achieves state-of-the-art performance on various benchmarks while exhibiting superior performance in a more realistic healthcare setting.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - FiLo: Zero-Shot Anomaly Detection by Fine-Grained Description and High-Quality Localization [31.854923603517264]
We propose a novel zero-shot anomaly detection (ZSAD) method called FiLo.
FG-Des introduces fine-grained anomaly descriptions for each category using Large Language Models (LLMs)
HQ-Loc, utilizing Grounding DINO for preliminary localization together with position-enhanced text prompts, facilitates more accurate localization of anomalies of different sizes and shapes.
arXiv Detail & Related papers (2024-04-21T14:22:04Z) - Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train one single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
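The in-context residual idea summarized above, scoring a query image by its residual to a handful of normal sample prompts from the target dataset, might be sketched as a toy nearest-neighbor version (not InCTRL's learned model; the features here are random stand-ins):

```python
import numpy as np

def residual_score(query_feat, normal_prompts):
    """Anomaly score as the smallest residual between the query
    feature and the few-shot normal sample features."""
    q = query_feat / np.linalg.norm(query_feat)
    p = normal_prompts / np.linalg.norm(normal_prompts, axis=1, keepdims=True)
    residuals = np.linalg.norm(p - q, axis=1)
    return residuals.min()

rng = np.random.default_rng(1)
normals = rng.normal(size=(4, 128))            # few-shot normal features
normal_query = normals[0] + 0.05 * rng.normal(size=128)
anomalous_query = rng.normal(size=128)         # unrelated feature
print(residual_score(normal_query, normals),
      residual_score(anomalous_query, normals))
```

A query resembling one of the normal prompts yields a small residual, while an unrelated query does not; InCTRL learns this residual mapping rather than using a fixed distance.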
arXiv Detail & Related papers (2024-03-11T08:07:46Z) - Learn Suspected Anomalies from Event Prompts for Video Anomaly Detection [49.91075101563298]
A novel framework is proposed to guide the learning of suspected anomalies from event prompts.
It enables a new multi-prompt learning process to constrain the visual-semantic features across all videos.
Our proposed model outperforms most state-of-the-art methods in terms of AP or AUC.
arXiv Detail & Related papers (2024-03-02T10:42:47Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z) - PAD: A Dataset and Benchmark for Pose-agnostic Anomaly Detection [28.973078719467516]
We develop the Multi-pose Anomaly Detection (MAD) dataset and the Pose-agnostic Anomaly Detection benchmark.
Specifically, we build MAD using 20 complex-shaped LEGO toys with various poses, and high-quality and diverse 3D anomalies in both simulated and real environments.
We also propose a novel method OmniposeAD, trained using MAD, specifically designed for pose-agnostic anomaly detection.
arXiv Detail & Related papers (2023-10-11T17:59:56Z) - Domain-Generalized Textured Surface Anomaly Detection [29.88664324332402]
Anomaly detection aims to identify abnormal data that deviates from the normal ones, while requiring a sufficient amount of normal data to train the model for performing this task.
In this paper, we address the task of domain-generalized textured surface anomaly detection.
Our model is expected to generalize to an unseen textured surface of interest, in which only a small number of normal data can be observed during testing.
arXiv Detail & Related papers (2022-03-23T10:01:35Z) - A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video [120.18562044084678]
Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
arXiv Detail & Related papers (2020-08-27T18:39:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.