AnoPLe: Few-Shot Anomaly Detection via Bi-directional Prompt Learning with Only Normal Samples
- URL: http://arxiv.org/abs/2408.13516v1
- Date: Sat, 24 Aug 2024 08:41:19 GMT
- Title: AnoPLe: Few-Shot Anomaly Detection via Bi-directional Prompt Learning with Only Normal Samples
- Authors: Yujin Lee, Seoyoon Jang, Hyunsoo Yoon
- Abstract summary: AnoPLe is a multi-modal prompt learning method designed for anomaly detection without prior knowledge of anomalies.
The experimental results demonstrate that AnoPLe achieves strong FAD performance, recording 94.1% and 86.2% Image AUROC on MVTec-AD and VisA respectively.
- Score: 6.260747047974035
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot Anomaly Detection (FAD) poses significant challenges due to the limited availability of training samples and the frequent absence of abnormal samples. Previous approaches often rely on annotations or true abnormal samples to improve detection, but such textual or visual cues are not always accessible. To address this, we introduce AnoPLe, a multi-modal prompt learning method designed for anomaly detection without prior knowledge of anomalies. AnoPLe simulates anomalies and employs bidirectional coupling of textual and visual prompts to facilitate deep interaction between the two modalities. Additionally, we integrate a lightweight decoder with a learnable multi-view signal, trained on multi-scale images to enhance local semantic comprehension. To further improve performance, we align global and local semantics, enriching the image-level understanding of anomalies. The experimental results demonstrate that AnoPLe achieves strong FAD performance, recording 94.1% and 86.2% Image AUROC on MVTec-AD and VisA respectively, with only around a 1% gap compared to the SoTA, despite not being exposed to true anomalies. Code is available at https://github.com/YoojLee/AnoPLe.
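To make the abstract's notion of "bidirectional coupling of textual and visual prompts" more concrete, the PyTorch sketch below illustrates the general idea of coupling learnable prompt tokens across modalities and scoring an image against normal/abnormal text embeddings. The class names, prompt counts, embedding dimensions, and scoring rule are illustrative assumptions, not the authors' implementation; the official code is at the linked repository.

```python
# Minimal sketch (assumed design, not the authors' code) of bi-directional
# prompt coupling and prompt-based anomaly scoring in a CLIP-like setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalPromptCoupler(nn.Module):
    """Learnable textual and visual prompt tokens that condition each other."""

    def __init__(self, n_prompts: int = 4, text_dim: int = 512, vis_dim: int = 768):
        super().__init__()
        # Learnable prompt tokens for each modality (CLIP-like dimensions assumed).
        self.text_prompts = nn.Parameter(0.02 * torch.randn(n_prompts, text_dim))
        self.visual_prompts = nn.Parameter(0.02 * torch.randn(n_prompts, vis_dim))
        # Lightweight linear couplers exchange information between the modalities.
        self.text_to_vis = nn.Linear(text_dim, vis_dim)
        self.vis_to_text = nn.Linear(vis_dim, text_dim)

    def forward(self):
        # Each modality's effective prompts mix its own tokens with a projection
        # of the other modality's tokens (the bi-directional coupling).
        coupled_text = self.text_prompts + self.vis_to_text(self.visual_prompts)
        coupled_vis = self.visual_prompts + self.text_to_vis(self.text_prompts)
        return coupled_text, coupled_vis  # to be prepended to the encoder inputs


def anomaly_score(image_feat, normal_text_feat, abnormal_text_feat, temperature=0.07):
    """Score an image by its similarity to 'normal' vs. 'abnormal' text embeddings."""
    text_feats = torch.stack([normal_text_feat, abnormal_text_feat])         # (2, D)
    sims = F.cosine_similarity(image_feat.unsqueeze(0), text_feats, dim=-1)  # (2,)
    probs = torch.softmax(sims / temperature, dim=0)
    return probs[1]  # probability mass assigned to the "abnormal" prompt


if __name__ == "__main__":
    coupler = BidirectionalPromptCoupler()
    text_p, vis_p = coupler()
    print(text_p.shape, vis_p.shape)  # torch.Size([4, 512]) torch.Size([4, 768])
    score = anomaly_score(torch.randn(512), torch.randn(512), torch.randn(512))
    print(f"anomaly score: {score.item():.3f}")
```

In the actual method the coupled prompts condition frozen encoders, anomalies are simulated rather than drawn from real defects, and a lightweight decoder with a multi-view signal aligns local and global semantics; none of those components are reproduced in this sketch.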
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z) - Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con2, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection [124.52227588930543]
Unsupervised anomaly detection (UAD) attracts a lot of research interest and drives widespread applications.
Normalizing flows, an inconspicuous yet powerful class of statistical models, are well suited to unsupervised anomaly detection and localization.
We propose a novel Multi-Scale Flow-based framework dubbed MSFlow composed of asymmetrical parallel flows followed by a fusion flow.
Our MSFlow achieves a new state-of-the-art with a detection AUROC score of up to 99.7%, a localization AUROC score of 98.8%, and a PRO score of 97.1%.
arXiv Detail & Related papers (2023-08-29T13:38:35Z) - Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - Unsupervised Two-Stage Anomaly Detection [18.045265572566276]
Anomaly detection from a single image is challenging since anomalous data is rare and can be of highly unpredictable types.
We propose a two-stage approach, which generates high-fidelity yet anomaly-free reconstructions.
Our method outperforms state-of-the-art methods on four anomaly detection datasets.
arXiv Detail & Related papers (2021-03-22T08:57:27Z) - Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images [23.79184121052212]
Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images.
We propose a novel self-supervised representation learning method, called Constrained Contrastive Distribution learning for anomaly detection (CCD)
Our method outperforms current state-of-the-art UAD approaches on three different colonoscopy and fundus screening datasets.
arXiv Detail & Related papers (2021-03-05T01:56:58Z) - Multiresolution Knowledge Distillation for Anomaly Detection [10.799350080453982]
Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images.
The sample size is often not large enough to learn a rich, generalizable representation through conventional techniques.
Here, we propose to use the "distillation" of features at various layers of an expert network, pre-trained on ImageNet, into a simpler cloner network to tackle both issues.
arXiv Detail & Related papers (2020-11-22T21:16:35Z) - OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning [23.48763375455514]
We propose a One-for-all Image Anomaly Detection system based on disentangled learning using only clean samples.
Our experiments with three datasets show that OIAD can detect over 90% of anomalies while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-01-18T09:57:37Z)