AltUB: Alternating Training Method to Update Base Distribution of
Normalizing Flow for Anomaly Detection
- URL: http://arxiv.org/abs/2210.14913v1
- Date: Wed, 26 Oct 2022 16:31:15 GMT
- Title: AltUB: Alternating Training Method to Update Base Distribution of
Normalizing Flow for Anomaly Detection
- Authors: Yeongmin Kim, Huiwon Jang, DongKeon Lee, and Ho-Jin Choi
- Abstract summary: Unsupervised anomaly detection is coming into the spotlight these days in various practical domains.
One of the major approaches to it is the normalizing flow, which pursues an invertible transformation of a complex distribution, such as images, into a simple distribution such as N(0, I).
- Score: 1.3999481573773072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised anomaly detection is coming into the spotlight these days in
various practical domains due to the limited amount of anomaly data. One of the
major approaches to it is the normalizing flow, which pursues an invertible
transformation of a complex distribution, such as images, into a simple
distribution such as N(0, I). In fact, algorithms based on normalizing flows,
such as FastFlow and CFLOW-AD, establish state-of-the-art performance on
unsupervised anomaly detection tasks. Nevertheless, we find that these
algorithms convert normal images not into N(0, I), their intended destination,
but into an arbitrary normal distribution. Moreover, their performance is often
unstable, which is highly critical for unsupervised tasks because no data for
validation are provided. Motivated by these observations, we propose a simple
solution, AltUB, which introduces alternating training to update the base
distribution of the normalizing flow for anomaly detection. AltUB effectively
improves the stability of the performance of normalizing flows. Furthermore,
our method achieves new state-of-the-art performance on the anomaly
segmentation task of the MVTec AD dataset with 98.8% AUROC.
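The alternating scheme described in the abstract can be illustrated in a toy 1-D setting: on even steps the flow parameters are updated by gradient descent with the base distribution frozen, and on odd steps the base N(mu, sigma^2) is refit to the current latents. This is a minimal sketch under stated assumptions, the affine "flow", the closed-form base update, and all names are illustrative, not the paper's implementation.

```python
import math
import random

# Toy 1-D sketch of alternating training in the spirit of AltUB
# (illustrative, not the paper's implementation): a single affine
# "flow" z = a*x + b with a learnable Gaussian base N(mu, sigma^2).
# Flow and base parameters are updated on alternating steps.

def base_log_prob(z, mu, sigma):
    # log density of N(mu, sigma^2)
    return -0.5 * math.log(2 * math.pi) - math.log(sigma) - 0.5 * ((z - mu) / sigma) ** 2

def nll(xs, a, b, mu, sigma):
    # mean negative log-likelihood under flow + base; log|det J| = log|a|
    return -sum(base_log_prob(a * x + b, mu, sigma) + math.log(abs(a)) for x in xs) / len(xs)

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(500)]  # "normal" training samples

a, b = 1.0, 0.0        # flow parameters
mu, sigma = 0.0, 1.0   # base-distribution parameters
lr, eps = 0.05, 1e-4

for step in range(100):
    if step % 2 == 0:
        # flow step: finite-difference gradients on (a, b), base frozen
        ga = (nll(data, a + eps, b, mu, sigma) - nll(data, a - eps, b, mu, sigma)) / (2 * eps)
        gb = (nll(data, a, b + eps, mu, sigma) - nll(data, a, b - eps, mu, sigma)) / (2 * eps)
        a, b = a - lr * ga, b - lr * gb
    else:
        # base step: refit N(mu, sigma^2) to the current latents
        zs = [a * x + b for x in data]
        mu = sum(zs) / len(zs)
        sigma = max((sum((z - mu) ** 2 for z in zs) / len(zs)) ** 0.5, 1e-3)

# anomaly score = negative log-likelihood of a test point under flow + base
def score(x):
    return -(base_log_prob(a * x + b, mu, sigma) + math.log(abs(a)))

score_normal, score_anomalous = score(5.0), score(50.0)
```

Because the base is refit to the latents rather than pinned at N(0, I), a point resembling the training data scores low while an out-of-distribution point scores high, regardless of where the flow actually places the latents.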
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noises added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacturing.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z)
- MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection [124.52227588930543]
Unsupervised anomaly detection (UAD) attracts a lot of research interest and drives widespread applications.
An inconspicuous yet powerful statistical model, the normalizing flow, is well suited to anomaly detection and localization in an unsupervised fashion.
We propose a novel Multi-Scale Flow-based framework dubbed MSFlow composed of asymmetrical parallel flows followed by a fusion flow.
Our MSFlow achieves a new state-of-the-art with a detection AUROC score of up to 99.7%, a localization AUROC score of 98.8%, and a PRO score of 97.1%.
arXiv Detail & Related papers (2023-08-29T13:38:35Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows [18.062328700407726]
We propose FastFlow as a plug-in module for arbitrary deep feature extractors such as ResNet and vision transformer.
In the training phase, FastFlow learns to transform the input visual features into a tractable distribution; in the inference phase, it uses the likelihood to recognize anomalies.
Our approach achieves 99.4% AUC in anomaly detection with high inference efficiency.
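The inference-time scoring rule shared by flow-based detectors such as FastFlow can be sketched as follows; the function name and the toy latents are illustrative assumptions, not FastFlow's actual code.

```python
import math

# Minimal sketch of likelihood-based anomaly scoring with a normalizing
# flow: given a latent z = f(x) and the flow's log|det J|, the anomaly
# score is the negative log-likelihood under the standard normal base
# N(0, I). (Illustrative only; not FastFlow's implementation.)

def anomaly_score(z, log_det_jac):
    d = len(z)
    # log density of N(0, I) at z
    log_pz = -0.5 * d * math.log(2 * math.pi) - 0.5 * sum(v * v for v in z)
    return -(log_pz + log_det_jac)

# latents near the origin (typical normal features) score low;
# latents far from the origin (anomalous features) score high
low = anomaly_score([0.1, -0.2], 0.0)
high = anomaly_score([4.0, 3.5], 0.0)
```

AltUB's observation is precisely that this rule assumes the latents of normal data actually concentrate around N(0, I), which the fixed-base training of FastFlow and CFLOW-AD does not guarantee.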
arXiv Detail & Related papers (2021-11-15T11:15:02Z)
- Distribution Mismatch Correction for Improved Robustness in Deep Neural Networks [86.42889611784855]
Normalization methods increase a network's vulnerability to noise and input corruptions.
We propose an unsupervised non-parametric distribution correction method that adapts the activation distribution of each layer.
In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions.
arXiv Detail & Related papers (2021-10-05T11:36:25Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed papers) and is not responsible for any consequences of its use.