FADE: Few-shot/zero-shot Anomaly Detection Engine using Large Vision-Language Model
- URL: http://arxiv.org/abs/2409.00556v1
- Date: Sat, 31 Aug 2024 23:05:56 GMT
- Title: FADE: Few-shot/zero-shot Anomaly Detection Engine using Large Vision-Language Model
- Authors: Yuanwei Li, Elizaveta Ivanova, Martins Bruveris
- Abstract summary: Few-shot/zero-shot anomaly detection is important for quality inspection in the manufacturing industry.
We propose the Few-shot/zero-shot Anomaly Detection Engine (FADE), which leverages the vision-language CLIP model and adapts it for the purpose of anomaly detection.
FADE outperforms other state-of-the-art methods in anomaly segmentation with pixel-AUROC of 89.6% (91.5%) in zero-shot and 95.4% (97.5%) in 1-normal-shot.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic image anomaly detection is important for quality inspection in the manufacturing industry. The usual unsupervised anomaly detection approach is to train a model for each object class using a dataset of normal samples. However, a more realistic problem is zero-/few-shot anomaly detection where zero or only a few normal samples are available. This makes the training of object-specific models challenging. Recently, large foundation vision-language models have shown strong zero-shot performance in various downstream tasks. While these models have learned complex relationships between vision and language, they are not specifically designed for the tasks of anomaly detection. In this paper, we propose the Few-shot/zero-shot Anomaly Detection Engine (FADE) which leverages the vision-language CLIP model and adjusts it for the purpose of industrial anomaly detection. Specifically, we improve language-guided anomaly segmentation 1) by adapting CLIP to extract multi-scale image patch embeddings that are better aligned with language and 2) by automatically generating an ensemble of text prompts related to industrial anomaly detection. 3) We use additional vision-based guidance from the query and reference images to further improve both zero-shot and few-shot anomaly detection. On the MVTec-AD (and VisA) dataset, FADE outperforms other state-of-the-art methods in anomaly segmentation with pixel-AUROC of 89.6% (91.5%) in zero-shot and 95.4% (97.5%) in 1-normal-shot. Code is available at https://github.com/BMVC-FADE/BMVC-FADE.
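The abstract describes two scoring signals: language-guided segmentation, where multi-scale patch embeddings are compared against an ensemble of "normal"/"anomalous" text prompts, and vision-based guidance, where query patches are compared against patches from reference images. A minimal sketch of how such scores could be computed is below; this is an illustration of the general CLIP-style scoring idea, not the authors' implementation, and the function names, the two-class softmax formulation, and the random vectors standing in for real CLIP embeddings are all assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a (P, D) and b (K, D)."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def language_guided_scores(patch_emb, normal_txt, anomalous_txt, temperature=0.07):
    """Per-patch anomaly probability from text-prompt similarities.

    patch_emb:     (P, D) image patch embeddings (stand-ins for CLIP features)
    normal_txt:    (K, D) embeddings of a "normal" prompt ensemble
    anomalous_txt: (K, D) embeddings of an "anomalous" prompt ensemble
    """
    sim_normal = cosine_sim(patch_emb, normal_txt).mean(axis=1)
    sim_anomal = cosine_sim(patch_emb, anomalous_txt).mean(axis=1)
    # Two-class softmax over (normal, anomalous) similarities per patch.
    logits = np.stack([sim_normal, sim_anomal], axis=1) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs[:, 1]  # probability of "anomalous" for each patch

def vision_guided_scores(patch_emb, ref_patch_emb):
    """Few-shot signal: distance of each query patch to its nearest
    patch from the normal reference image(s)."""
    return 1.0 - cosine_sim(patch_emb, ref_patch_emb).max(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(4, 8))      # 4 query patches, dim 8
    normal_prompts = rng.normal(size=(3, 8))
    anomalous_prompts = rng.normal(size=(3, 8))
    reference = rng.normal(size=(5, 8))    # patches from a 1-shot normal image
    lang = language_guided_scores(patches, normal_prompts, anomalous_prompts)
    vis = vision_guided_scores(patches, reference)
    combined = 0.5 * (lang + vis)          # one simple way to fuse the two cues
    print(lang.shape, vis.shape, combined.shape)
```

In zero-shot mode only the language-guided score is available; the vision-guided term requires at least one normal reference image, matching the paper's 1-normal-shot setting.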