Few-Shot Anomaly Detection via Category-Agnostic Registration Learning
- URL: http://arxiv.org/abs/2406.08810v2
- Date: Tue, 08 Oct 2024 07:48:31 GMT
- Title: Few-Shot Anomaly Detection via Category-Agnostic Registration Learning
- Authors: Chaoqin Huang, Haoyan Guan, Aofan Jiang, Ya Zhang, Michael Spratling, Xinchao Wang, Yanfeng Wang
- Abstract summary: Most existing anomaly detection methods require a dedicated model for each category.
This article proposes a novel few-shot AD (FSAD) framework.
It is the first FSAD method that requires no model fine-tuning for novel categories.
- Abstract: Most existing anomaly detection (AD) methods require a dedicated model for each category. Such a paradigm, despite its promising results, is computationally expensive and inefficient, thereby failing to meet the requirements for real-world applications. Inspired by how humans detect anomalies, by comparing a query image to known normal ones, this article proposes a novel few-shot AD (FSAD) framework. Using a training set of normal images from various categories, registration, aiming to align normal images of the same categories, is leveraged as the proxy task for self-supervised category-agnostic representation learning. At test time, an image and its corresponding support set, consisting of a few normal images from the same category, are supplied, and anomalies are identified by comparing the registered features of the test image to its corresponding support image features. Such a setup enables the model to generalize to novel test categories. It is, to our best knowledge, the first FSAD method that requires no model fine-tuning for novel categories: enabling a single model to be applied to all categories. Extensive experiments demonstrate the effectiveness of the proposed method. Particularly, it improves the current state-of-the-art (SOTA) for FSAD by 11.3% and 8.3% on the MVTec and MPDD benchmarks, respectively. The source code is available at https://github.com/Haoyan-Guan/CAReg.
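The test-time comparison described in the abstract, scoring a query image by how far its registered features deviate from the support set's normal features, can be sketched as follows. This is an illustrative simplification, not the paper's actual implementation; the function name, feature shapes, and the use of a per-location mean prototype with a Euclidean distance are assumptions for the sketch.

```python
import numpy as np

def anomaly_map(test_feats, support_feats):
    """Score each spatial location of a query by its deviation from the support set.

    test_feats:    (C, H, W) registered features of the query image.
    support_feats: (K, C, H, W) registered features of K normal support images
                   from the same category.
    Returns an (H, W) map; larger values indicate more anomalous regions.
    """
    # Per-location "normal" prototype: average over the K support images.
    prototype = support_feats.mean(axis=0)                 # (C, H, W)
    # Euclidean distance in feature space at every spatial position.
    return np.linalg.norm(test_feats - prototype, axis=0)  # (H, W)

# Toy example with random features (C=8, H=W=4, K=2 support images).
feats_query = np.random.rand(8, 4, 4)
feats_support = np.random.rand(2, 8, 4, 4)
amap = anomaly_map(feats_query, feats_support)
image_score = amap.max()  # image-level score: largest local deviation
```

Because the registration network is category-agnostic, the same scoring routine applies unchanged to a novel category: only the few-shot support set changes, with no fine-tuning of the model.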
Related papers
- Absolute-Unified Multi-Class Anomaly Detection via Class-Agnostic Distribution Alignment [27.375917265177847]
Unsupervised anomaly detection (UAD) methods build separate models for each object category.
Recent studies have proposed to train a unified model for multiple classes, namely model-unified UAD.
We present a simple yet powerful method to address multi-class anomaly detection without any class information, namely absolute-unified UAD.
arXiv Detail & Related papers (2024-03-31T15:50:52Z) - Diversified in-domain synthesis with efficient fine-tuning for few-shot classification [64.86872227580866]
Few-shot image classification aims to learn an image classifier using only a small set of labeled examples per class.
We propose DISEF, a novel approach which addresses the generalization challenge in few-shot learning using synthetic data.
We validate our method in ten different benchmarks, consistently outperforming baselines and establishing a new state-of-the-art for few-shot classification.
arXiv Detail & Related papers (2023-12-05T17:18:09Z) - Incremental Generalized Category Discovery [26.028970894707204]
We explore the problem of Incremental Generalized Category Discovery (IGCD).
This is a challenging category incremental learning setting where the goal is to develop models that can correctly categorize images from previously seen categories, in addition to discovering novel ones.
We present a new method for IGCD which combines non-parametric categorization with efficient image sampling to mitigate catastrophic forgetting.
arXiv Detail & Related papers (2023-04-27T16:27:11Z) - Zero-shot Model Diagnosis [80.36063332820568]
A common approach to evaluate deep learning models is to build a labeled test set with attributes of interest and assess how well it performs.
This paper argues the case that Zero-shot Model Diagnosis (ZOOM) is possible without the need for a test set or labeling.
arXiv Detail & Related papers (2023-03-27T17:59:33Z) - Registration based Few-Shot Anomaly Detection [19.46397954621789]
This paper considers few-shot anomaly detection (FSAD), a practical yet under-studied setting for anomaly detection (AD).
Existing FSAD studies follow the one-model-per-category learning paradigm used for standard AD.
Inspired by how humans detect anomalies, we here leverage registration, an image alignment task that is inherently generalizable across categories.
During testing, the anomalies are identified by comparing the registered features of the test image and its corresponding support (normal) images.
arXiv Detail & Related papers (2022-07-15T09:20:13Z) - Query Adaptive Few-Shot Object Detection with Heterogeneous Graph Convolutional Networks [33.446875089255876]
Few-shot object detection (FSOD) aims to detect never-seen objects using few examples.
We propose a novel FSOD model using heterogeneous graph convolutional networks.
arXiv Detail & Related papers (2021-12-17T22:08:15Z) - Background Splitting: Finding Rare Classes in a Sea of Background [55.03789745276442]
We focus on the real-world problem of training accurate deep models for image classification of a small number of rare categories.
In these scenarios, almost all images belong to the background category in the dataset (>95% of the dataset is background).
We demonstrate that both standard fine-tuning approaches and state-of-the-art approaches for training on imbalanced datasets do not produce accurate deep models in the presence of this extreme imbalance.
arXiv Detail & Related papers (2020-08-28T23:05:15Z) - Diverse Image Generation via Self-Conditioned GANs [56.91974064348137]
We train a class-conditional GAN model without using manually annotated class labels.
Instead, our model is conditional on labels automatically derived from clustering in the discriminator's feature space.
Our clustering step automatically discovers diverse modes, and explicitly requires the generator to cover them.
arXiv Detail & Related papers (2020-06-18T17:56:03Z) - I Am Going MAD: Maximum Discrepancy Competition for Comparing
Classifiers Adaptively [135.7695909882746]
We introduce the MAximum Discrepancy (MAD) competition.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
arXiv Detail & Related papers (2020-02-25T03:32:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.