Unsupervised Two-Stage Anomaly Detection
- URL: http://arxiv.org/abs/2103.11671v1
- Date: Mon, 22 Mar 2021 08:57:27 GMT
- Title: Unsupervised Two-Stage Anomaly Detection
- Authors: Yunfei Liu, Chaoqun Zhuang, Feng Lu
- Abstract summary: Anomaly detection from a single image is challenging since anomaly data is rare and can be of highly unpredictable types.
We propose a two-stage approach, which generates high-fidelity yet anomaly-free reconstructions.
Our method outperforms state-of-the-art methods on four anomaly detection datasets.
- Score: 18.045265572566276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection from a single image is challenging since anomaly data is
always rare and can be of highly unpredictable types. With only anomaly-free
data available, most existing methods train an AutoEncoder to reconstruct the
input image and find the difference between the input and output to identify
the anomalous region. However, such methods face a potential problem: a coarse
reconstruction generates extra image differences while a high-fidelity one may
draw in the anomaly. In this paper, we solve this contradiction by proposing a
two-stage approach, which generates high-fidelity yet anomaly-free
reconstructions. Our Unsupervised Two-stage Anomaly Detection (UTAD) relies on
two technical components, namely the Impression Extractor (IE-Net) and the
Expert-Net. The IE-Net and Expert-Net accomplish the two-stage anomaly-free
image reconstruction task while they also generate intuitive intermediate
results, making the whole UTAD interpretable. Extensive experiments show that
our method outperforms state-of-the-art methods on four anomaly detection datasets
with different types of real-world objects and textures.
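The abstract's core idea, shared by most reconstruction-based detectors, is: fit a model only to anomaly-free data, reconstruct a test image, and flag regions where the reconstruction error is large. A minimal sketch of that principle, using PCA as a stand-in reconstruction model on toy vectorized "images" (this is an illustration of the general paradigm, not the UTAD architecture; all names and data here are made up):

```python
import numpy as np

def fit_pca(normal_data, n_components):
    """Learn a low-rank reconstruction model from anomaly-free data only."""
    mean = normal_data.mean(axis=0)
    centered = normal_data - mean
    # Principal directions via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_map(x, mean, components):
    """Per-element reconstruction error: large values flag anomalous regions."""
    recon = mean + (x - mean) @ components.T @ components
    return np.abs(x - recon)

rng = np.random.default_rng(0)
# Toy "images": 64-dim vectors lying near a 4-dim subspace (the normal manifold).
basis = rng.normal(size=(4, 64))
normal = rng.normal(size=(500, 4)) @ basis + 0.01 * rng.normal(size=(500, 64))
mean, comps = fit_pca(normal, n_components=4)

test_normal = rng.normal(size=4) @ basis      # stays on the normal manifold
test_anom = test_normal.copy()
test_anom[10:15] += 5.0                        # inject a localized defect

score_normal = anomaly_map(test_normal, mean, comps).max()
score_anom = anomaly_map(test_anom, mean, comps).max()
print(score_anom > score_normal)               # the defect yields a larger peak error
```

The tension the paper targets shows up even here: a very low-rank model (coarse reconstruction) inflates errors everywhere, while a full-rank one reconstructs the defect too and hides it. UTAD's two-stage design is one way to escape that trade-off.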
Related papers
- AnoPLe: Few-Shot Anomaly Detection via Bi-directional Prompt Learning with Only Normal Samples [6.260747047974035]
AnoPLe is a multi-modal prompt learning method designed for anomaly detection without prior knowledge of anomalies.
The experimental results demonstrate that AnoPLe achieves strong few-shot anomaly detection (FAD) performance, recording 94.1% and 86.2% Image AUROC on MVTec-AD and VisA respectively.
arXiv Detail & Related papers (2024-08-24T08:41:19Z)
- DualAnoDiff: Dual-Interrelated Diffusion Model for Few-Shot Anomaly Image Generation [40.257604426546216]
The performance of anomaly inspection in industrial manufacturing is constrained by the scarcity of anomaly data.
Existing anomaly generation methods suffer from limited diversity in the generated anomalies.
We propose DualAnoDiff, a novel diffusion-based few-shot anomaly image generation model.
arXiv Detail & Related papers (2024-08-24T08:09:32Z)
- GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features [68.14842693208465]
GeneralAD is an anomaly detection framework designed to operate in semantic, near-distribution, and industrial settings.
We propose a novel self-supervised anomaly generation module that employs straightforward operations like noise addition and shuffling to patch features.
We extensively evaluated our approach on ten datasets, achieving state-of-the-art results on six and on-par performance on the remaining four.
arXiv Detail & Related papers (2024-07-17T09:27:41Z)
- A Hierarchically Feature Reconstructed Autoencoder for Unsupervised Anomaly Detection [8.512184778338806]
It consists of a well-pre-trained encoder that extracts hierarchical feature representations and a decoder that reconstructs these intermediate features from the encoder.
The anomalies can be detected when the decoder fails to reconstruct features well, and then errors of hierarchical feature reconstruction are aggregated into an anomaly map to achieve anomaly localization.
Experiment results show that the proposed method outperforms the state-of-the-art methods on MNIST, Fashion-MNIST, CIFAR-10, and MVTec Anomaly Detection datasets.
arXiv Detail & Related papers (2024-05-15T07:20:27Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Generating and Reweighting Dense Contrastive Patterns for Unsupervised Anomaly Detection [59.34318192698142]
We introduce a prior-less anomaly generation paradigm and develop an innovative unsupervised anomaly detection framework named GRAD.
PatchDiff effectively exposes various types of anomaly patterns.
Experiments on both MVTec AD and MVTec LOCO datasets also support the aforementioned observation.
arXiv Detail & Related papers (2023-12-26T07:08:06Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PA-generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse them in a common space, either by iterative optimization or by deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Multi-Perspective Anomaly Detection [3.3511723893430476]
We build upon the deep support vector data description algorithm and address multi-perspective anomaly detection.
We employ different augmentation techniques with a denoising process to deal with scarce one-class data.
We evaluate our approach on the new dices dataset using images from two different perspectives and also benchmark on the standard MNIST dataset.
arXiv Detail & Related papers (2021-05-20T17:07:36Z)
- OIAD: One-for-all Image Anomaly Detection with Disentanglement Learning [23.48763375455514]
We propose a One-for-all Image Anomaly Detection system based on disentangled learning using only clean samples.
Our experiments with three datasets show that OIAD can detect over 90% of anomalies while maintaining a low false alarm rate.
arXiv Detail & Related papers (2020-01-18T09:57:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.