P-WAE: Generalized Patch-Wasserstein Autoencoder for Anomaly Screening
- URL: http://arxiv.org/abs/2108.03815v1
- Date: Mon, 9 Aug 2021 05:31:45 GMT
- Title: P-WAE: Generalized Patch-Wasserstein Autoencoder for Anomaly Screening
- Authors: Yurong Chen
- Abstract summary: We propose a novel Patch-wise Wasserstein AutoEncoder (P-WAE) architecture to alleviate those challenges.
In particular, a patch-wise variational inference model coupled with solving the jigsaw puzzle is designed.
Comprehensive experiments, conducted on the MVTec AD dataset, demonstrate the superior performance of our proposed method.
- Score: 17.24628770042803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To mitigate the inspector's workload and improve the quality of the product,
computer vision-based anomaly detection (AD) techniques are gradually deployed
in real-world industrial scenarios. Recent anomaly analysis benchmarks have
progressed toward generative models, which aim to model the defect-free
distribution so that anomalies can be classified as out-of-distribution
samples. Nevertheless, two troubling factors demand attention from researchers
and deployers: (i) a simplistic prior over the latent distribution, which
limits expressive capability; and (ii) collapsed, mutually dependent features,
which result in poor generalization. In this paper, we propose a novel
Patch-wise Wasserstein AutoEncoder (P-WAE) architecture to alleviate these
challenges. In particular, we design a patch-wise variational inference model
coupled with solving a jigsaw puzzle, a simple yet effective way to increase
the expressiveness and complexity of the latent manifold; this alleviates the
blurry-reconstruction problem. In addition, the Hilbert-Schmidt Independence
Criterion (HSIC) bottleneck is introduced to constrain over-regularized
representations. Comprehensive experiments conducted on the MVTec AD dataset
demonstrate the superior performance of our proposed method.
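As a rough illustration of the HSIC bottleneck idea mentioned in the abstract, the sketch below computes a biased empirical HSIC estimate between two latent feature blocks with RBF kernels. The kernel bandwidth, tensor shapes, and variable names are assumptions for this example, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): biased empirical HSIC with RBF
# kernels, as it might appear inside an HSIC-bottleneck-style regularizer.
import torch

def rbf_kernel(x: torch.Tensor, sigma: float) -> torch.Tensor:
    # Pairwise squared Euclidean distances between rows of x (n x d).
    sq_dists = torch.cdist(x, x) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def hsic_biased(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Biased empirical HSIC: trace(K H L H) / (n - 1)^2, with H the centering matrix.
    n = x.shape[0]
    k = rbf_kernel(x, sigma)
    l = rbf_kernel(y, sigma)
    h = torch.eye(n, device=x.device) - torch.full((n, n), 1.0 / n, device=x.device)
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2

# Usage: penalize statistical dependence between two latent feature blocks.
z1, z2 = torch.randn(32, 64), torch.randn(32, 64)
dependence_penalty = hsic_biased(z1, z2)
```

Adding such a term to a training objective discourages dependence between the two feature blocks; the weight on the penalty would be a tuning choice.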
Related papers
- Feature Attenuation of Defective Representation Can Resolve Incomplete Masking on Anomaly Detection [1.0358639819750703]
In unsupervised anomaly detection (UAD) research, it is necessary to develop a computationally efficient and scalable solution.
We revisit the reconstruction-by-inpainting approach and rethink how to improve it by analyzing its strengths and weaknesses.
We propose Feature Attenuation of Defective Representation (FADeR), which employs only two layers to attenuate the feature information of anomaly reconstructions.
arXiv Detail & Related papers (2024-07-05T15:44:53Z)
- A Contrastive Variational Graph Auto-Encoder for Node Clustering [10.52321770126932]
State-of-the-art clustering methods face numerous challenges.
Existing VGAEs do not account for the discrepancy between the inference and generative models.
Our solution has two mechanisms to control the trade-off between Feature Randomness and Feature Drift.
arXiv Detail & Related papers (2023-12-28T05:07:57Z)
- How to train your VAE [0.0]
Variational Autoencoders (VAEs) have become a cornerstone in generative modeling and representation learning within machine learning.
This paper explores interpreting the Kullback-Leibler (KL) Divergence, a critical component within the Evidence Lower Bound (ELBO).
The proposed method redefines the ELBO with a mixture of Gaussians for the posterior probability, introduces a regularization term, and employs a PatchGAN discriminator to enhance texture realism.
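For context, the KL term this entry discusses is the regularization part of the standard textbook ELBO shown below; the paper's mixture-of-Gaussians posterior and PatchGAN term modify this baseline form.

```latex
\mathcal{L}_{\mathrm{ELBO}}(x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```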
arXiv Detail & Related papers (2023-09-22T19:52:28Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- LafitE: Latent Diffusion Model with Feature Editing for Unsupervised Multi-class Anomaly Detection [12.596635603629725]
We develop a unified model to detect anomalies from objects belonging to multiple classes when only normal data is accessible.
We first explore the generative-based approach and investigate latent diffusion models for reconstruction.
We introduce a feature editing strategy that modifies the input feature space of the diffusion model to further alleviate "identity shortcuts".
arXiv Detail & Related papers (2023-07-16T14:41:22Z)
- Single-Model Attribution of Generative Models Through Final-Layer Inversion [16.506531590300806]
We propose a new approach for single-model attribution in the open-world setting based on final-layer inversion and anomaly detection.
We show that the utilized final-layer inversion can be reduced to a convex lasso optimization problem, making our approach theoretically sound and computationally efficient.
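In generic form, reducing final-layer inversion to a lasso problem means recovering the final-layer input z from an observed output x through the linear map W under an l1 penalty, as sketched below; the symbols are illustrative and not the paper's notation.

```latex
\hat{z} = \arg\min_{z} \tfrac{1}{2}\,\lVert x - W z \rVert_2^2 + \lambda \lVert z \rVert_1
```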
arXiv Detail & Related papers (2023-05-26T13:06:38Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address it.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
- Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
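As a minimal sketch of the second regularizer described in the entry above, the snippet below penalizes the minibatch variance of encoder embeddings only when it falls below a margin; the margin value and exact functional form are assumptions here and may differ from the paper.

```python
# Illustrative sketch only: one plausible way to penalize a too-small
# minibatch variance of the embeddings, discouraging hypersphere collapse.
import torch

def variance_penalty(embeddings: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # embeddings: (batch, dim) encoder outputs for one minibatch.
    # Penalize only when the mean per-dimension variance drops below the margin.
    var = embeddings.var(dim=0, unbiased=True).mean()
    return torch.clamp(margin - var, min=0.0)

# Usage: add to the deep one-class objective with a small weight.
z = torch.randn(64, 128, requires_grad=True)
loss = variance_penalty(z)
```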
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.