A Unified Model for Multi-class Anomaly Detection
- URL: http://arxiv.org/abs/2206.03687v1
- Date: Wed, 8 Jun 2022 06:05:09 GMT
- Title: A Unified Model for Multi-class Anomaly Detection
- Authors: Zhiyuan You, Lei Cui, Yujun Shen, Kai Yang, Xin Lu, Yu Zheng, Xinyi Le
- Abstract summary: UniAD accomplishes anomaly detection for multiple classes with a unified framework.
We evaluate our algorithm on MVTec-AD and CIFAR-10 datasets.
- Score: 33.534990722449066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the rapid advance of unsupervised anomaly detection, existing methods
require training separate models for different objects. In this work, we
present UniAD that accomplishes anomaly detection for multiple classes with a
unified framework. Under such a challenging setting, popular reconstruction
networks may fall into an "identical shortcut", where both normal and anomalous
samples can be well recovered, and hence fail to spot outliers. To tackle this
obstacle, we make three improvements. First, we revisit the formulations of the
fully-connected, convolutional, and attention layers, and confirm the important
role of the query embedding (i.e., within the attention layer) in preventing
the network from learning the shortcut. We therefore come up with a
layer-wise query decoder to help model the multi-class distribution. Second, we
employ a neighbor masked attention module to further avoid the information leak
from the input feature to the reconstructed output feature. Third, we propose a
feature jittering strategy that urges the model to recover the correct message
even with noisy inputs. We evaluate our algorithm on MVTec-AD and CIFAR-10
datasets, where we surpass the state-of-the-art alternatives by a clear
margin. For example, when learning a unified model for 15 categories in
MVTec-AD, we surpass the second competitor on the tasks of both anomaly
detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%).
Code will be made publicly available.
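
The abstract only names the three components; as a rough illustration of two of them, the sketch below shows how feature jittering and a neighbor-masked attention mask could look in PyTorch. Function names, hyper-parameters, and the exact noise and mask definitions are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: names, hyper-parameters, and the exact noise/mask
# definitions are assumptions, not taken from the UniAD paper or its code.
import torch


def feature_jitter(tokens: torch.Tensor, jitter_scale: float = 1.0) -> torch.Tensor:
    """Add Gaussian noise to feature tokens so the decoder must denoise its
    input rather than copy it (the 'identical shortcut')."""
    # tokens: (batch, num_tokens, channels); noise scaled by each token's norm.
    scale = jitter_scale * tokens.norm(dim=-1, keepdim=True) / tokens.shape[-1]
    return tokens + torch.randn_like(tokens) * scale


def neighbor_mask(height: int, width: int, neighbor_size: int = 7) -> torch.Tensor:
    """Boolean attention mask over an H*W token grid: each query is blocked
    from attending to itself and its spatial neighbors, limiting how much of
    the input feature can leak directly into the reconstructed feature."""
    ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)       # (H*W, 2)
    dist = (coords[:, None, :] - coords[None, :, :]).abs()      # (HW, HW, 2)
    return (dist <= neighbor_size // 2).all(dim=-1)             # True = masked out


# Usage with PyTorch's built-in multi-head attention.
tokens = torch.randn(2, 14 * 14, 256)        # e.g. a 14x14, 256-channel feature map
attn = torch.nn.MultiheadAttention(256, num_heads=8, batch_first=True)
out, _ = attn(feature_jitter(tokens), tokens, tokens, attn_mask=neighbor_mask(14, 14))
```

The masked attention keeps each reconstructed token from trivially copying its own (or nearby) input tokens, while the jittered queries force the decoder to behave as a denoiser rather than an identity map.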
Related papers
- Attend, Distill, Detect: Attention-aware Entropy Distillation for Anomaly Detection [4.0679780034913335]
Knowledge-distillation-based multi-class anomaly detection promises low latency and reasonably good performance, but with a significant drop compared to the one-class version.
We propose a DCAM (Distributed Convolutional Attention Module) which improves the distillation process between teacher and student networks.
arXiv Detail & Related papers (2024-05-10T13:25:39Z) - Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference [67.36605226797887]
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD).
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
arXiv Detail & Related papers (2024-03-21T08:08:31Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Locate and Verify: A Two-Stream Network for Improved Deepfake Detection [33.50963446256726]
Current deepfake detection methods typically generalize poorly.
We propose an innovative two-stream network that effectively enlarges the potential regions from which the model extracts evidence.
We also propose a Semi-supervised Patch Similarity Learning strategy to estimate patch-level forged location annotations.
arXiv Detail & Related papers (2023-09-20T08:25:19Z) - LafitE: Latent Diffusion Model with Feature Editing for Unsupervised
Multi-class Anomaly Detection [12.596635603629725]
We develop a unified model to detect anomalies from objects belonging to multiple classes when only normal data is accessible.
We first explore the generative-based approach and investigate latent diffusion models for reconstruction.
We introduce a feature editing strategy that modifies the input feature space of the diffusion model to further alleviate "identity shortcuts".
arXiv Detail & Related papers (2023-07-16T14:41:22Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Anomaly Detection via Multi-Scale Contrasted Memory [3.0170109896527086]
We introduce a new two-stage anomaly detector which memorizes multi-scale normal prototypes during training to compute an anomaly deviation score.
Our model substantially improves state-of-the-art performance on a wide range of object, style, and local anomalies, with up to a 35% relative error improvement on CIFAR-10.
arXiv Detail & Related papers (2022-11-16T16:58:04Z) - Self-Supervised Predictive Convolutional Attentive Block for Anomaly
Detection [97.93062818228015]
We propose to integrate the reconstruction-based functionality into a novel self-supervised predictive architectural building block.
Our block is equipped with a loss that minimizes the reconstruction error with respect to the masked area in the receptive field.
We demonstrate the generality of our block by integrating it into several state-of-the-art frameworks for anomaly detection on image and video.
arXiv Detail & Related papers (2021-11-17T13:30:31Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and
Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection [16.914663209964697]
We propose a deep learning approach to the anomaly detection problem named Multi-Layer One-Class Classification (MOCCA).
We explicitly leverage the piece-wise nature of deep neural networks by exploiting information extracted at different depths to detect abnormal data instances.
We show that our method achieves superior performance compared to state-of-the-art approaches in the literature.
arXiv Detail & Related papers (2020-12-09T08:32:56Z)
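
The MOCCA summary above only hints at the multi-depth idea. As a loose illustration, a multi-layer one-class score can be formed by measuring how far each intermediate feature lies from a per-layer reference center estimated on normal data; the backbone choice, hooked layers, pooling, and mean-feature "centers" below are assumptions for the sketch, not MOCCA's actual training objective.

```python
# Loose illustration of a multi-layer one-class score: per-layer distances of
# pooled intermediate features to centers estimated from normal data are summed.
# Backbone, layer choice, pooling, and centers are assumptions, not MOCCA's code.
import torch
import torchvision

backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
layers = [backbone.layer1, backbone.layer2, backbone.layer3]

feats = {}
for i, layer in enumerate(layers):
    # Capture each hooked layer's output during the forward pass.
    layer.register_forward_hook(lambda m, x, y, i=i: feats.__setitem__(i, y))

def pooled_features(images: torch.Tensor) -> list[torch.Tensor]:
    """Run the backbone and return one globally pooled vector per hooked layer."""
    with torch.no_grad():
        backbone(images)
    return [feats[i].mean(dim=(2, 3)) for i in range(len(layers))]

# "Centers": mean feature of (here random; in practice normal-only) training images.
normal_images = torch.randn(16, 3, 224, 224)
centers = [f.mean(dim=0) for f in pooled_features(normal_images)]

def anomaly_score(images: torch.Tensor) -> torch.Tensor:
    """Sum of per-layer L2 distances to the normal centers; higher = more anomalous."""
    per_layer = pooled_features(images)
    return sum(((f - c) ** 2).sum(dim=1).sqrt() for f, c in zip(per_layer, centers))

print(anomaly_score(torch.randn(4, 3, 224, 224)))  # one score per image
```

Combining evidence from several depths is what distinguishes this family of methods from single-layer one-class classifiers, since shallow and deep features capture different kinds of abnormality.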
This list is automatically generated from the titles and abstracts of the papers on this site.