What do we learn? Debunking the Myth of Unsupervised Outlier Detection
- URL: http://arxiv.org/abs/2206.03698v1
- Date: Wed, 8 Jun 2022 06:36:16 GMT
- Title: What do we learn? Debunking the Myth of Unsupervised Outlier Detection
- Authors: Cosmin I. Bercea, Daniel Rueckert, Julia A. Schnabel
- Abstract summary: We investigate what auto-encoders actually learn when they are posed to solve two different tasks.
We show that state-of-the-art (SOTA) AEs either fail to constrain the latent manifold, allowing reconstruction of abnormal patterns, or fail to accurately restore the inputs from their latent distribution.
We propose novel deformable auto-encoders (MorphAEus) to learn perceptually aware global image priors and locally adapt their morphometry.
- Score: 9.599183039166284
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Even though auto-encoders (AEs) have the desirable property of learning
compact representations without labels and have been widely applied to
out-of-distribution (OoD) detection, they are generally still poorly understood
and are used incorrectly in detecting outliers where the normal and abnormal
distributions are strongly overlapping. In general, the learned manifold is
assumed to contain key information that is only important for describing
samples within the training distribution, and that the reconstruction of
outliers leads to high residual errors. However, recent work suggests that AEs
are likely to be even better at reconstructing some types of OoD samples. In
this work, we challenge this assumption and investigate what auto-encoders
actually learn when they are posed to solve two different tasks. First, we
propose two metrics based on the Fréchet inception distance (FID) and
confidence scores of a trained classifier to assess whether AEs can learn the
training distribution and reliably recognize samples from other domains.
Second, we investigate whether AEs are able to synthesize normal images from
samples with abnormal regions, on a more challenging lung pathology detection
task. We have found that state-of-the-art (SOTA) AEs are either unable to
constrain the latent manifold and allow reconstruction of abnormal patterns, or
they fail to accurately restore the inputs from their latent
distribution, resulting in blurred or misaligned reconstructions. We propose
novel deformable auto-encoders (MorphAEus) to learn perceptually aware global
image priors and locally adapt their morphometry based on estimated dense
deformation fields. We demonstrate superior performance over unsupervised
methods in detecting OoD and pathology.
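As a rough illustration of the first contribution: the two proposed metrics are built on the Fréchet inception distance and on the confidence of a trained classifier. The sketch below computes an FID-style distance between Gaussians fitted to precomputed backbone features of training images and of their reconstructions, plus the mean top-class softmax confidence of an in-distribution classifier; the function names, the assumption that features and logits are computed elsewhere, and the exact pairing of inputs are assumptions of this sketch, not the paper's code.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two (N, D) feature sets,
    e.g. backbone features of training images vs. their AE reconstructions.
    Feature extraction (Inception or similar) is assumed to happen elsewhere."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # tiny imaginary parts can appear numerically
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))

def mean_top_class_confidence(logits: np.ndarray) -> float:
    """Mean softmax probability of the predicted class for a classifier trained
    on the in-distribution data; low values on reconstructions suggest the AE
    has drifted away from the training domain."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    return float(probs.max(axis=1).mean())
```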
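The second contribution, deformable auto-encoders, can be pictured as a standard encoder-decoder whose coarse, globally plausible reconstruction is locally warped by a predicted dense displacement field. The PyTorch sketch below is a minimal, hypothetical rendition of that pattern; the layer sizes, single-channel input, displacement scaling, and all names are assumptions, and it is not the authors' MorphAEus implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAE(nn.Module):
    """Toy auto-encoder whose coarse reconstruction is refined by a predicted
    dense displacement field (illustrative sketch, not MorphAEus itself)."""

    def __init__(self, ch: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(                      # global image prior
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
        )
        # Predicts a 2-channel (dx, dy) displacement field from input + coarse output.
        self.deform_head = nn.Conv2d(2, 2, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor):
        coarse = self.decoder(self.encoder(x))             # H, W must be divisible by 4
        flow = 0.1 * torch.tanh(self.deform_head(torch.cat([x, coarse], dim=1)))
        b, _, h, w = x.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=x.device),
            torch.linspace(-1.0, 1.0, w, device=x.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1).expand(b, -1, -1, -1)  # identity grid
        grid = base + flow.permute(0, 2, 3, 1)              # locally adapt morphometry
        refined = F.grid_sample(coarse, grid, align_corners=True)
        return coarse, refined
```

Under this reading, the residual between the input and `refined` (rather than `coarse`) would serve as the anomaly map at test time.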
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective [27.6598870874816]
This study provides a theoretical foundation for AE-based reconstruction methods in anomaly detection.
By leveraging information theory, we reveal that the key to improving AEs for anomaly detection lies in minimizing the information entropy of the latent vectors (a toy sketch of such a penalty appears after this list).
This is the first effort to theoretically clarify the principles and design philosophy of AE for anomaly detection.
arXiv Detail & Related papers (2024-03-14T11:51:01Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with existing state-of-the-art PA-generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo-anomaly samples are synthesized from only normal data by embedding random mask tokens, without extra data processing (a toy sketch of this masking step appears after this list).
We also propose a normalcy-consistency training strategy that encourages the AEs to better learn regular patterns from both the normal data and the corresponding pseudo-anomaly data.
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Probabilistic Robust Autoencoders for Anomaly Detection [7.362415721170984]
We propose a new type of autoencoder (AE), which we term the Probabilistic Robust Autoencoder (PRAE).
PRAE is designed to simultaneously remove outliers and identify a low-dimensional representation for the inlier samples.
We prove that the solution to PRAE is equivalent to the solution of RAE and demonstrate, using extensive simulations, that PRAE is on par with state-of-the-art methods for anomaly detection.
arXiv Detail & Related papers (2021-10-01T15:46:38Z)
- Manifolds for Unsupervised Visual Anomaly Detection [79.22051549519989]
Unsupervised learning methods that do not necessarily encounter anomalies in training would be immensely useful.
We develop a novel hyperspherical Variational Auto-Encoder (VAE) via stereographic projections with a gyroplane layer.
We present state-of-the-art results on visual anomaly benchmarks in precision manufacturing and inspection, demonstrating real-world utility in industrial AI scenarios.
arXiv Detail & Related papers (2020-06-19T20:41:58Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
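Regarding the "Rethinking Autoencoders for Medical Anomaly Detection" entry above: one concrete, illustrative way to penalize the information entropy of latent vectors is to fit a Gaussian to a batch of latents and regularize its differential entropy, 0.5 * log((2*pi*e)^D * det(Sigma)). The sketch below is a toy regularizer under that Gaussian assumption, not that paper's formulation.

```python
import math
import torch

def gaussian_latent_entropy(z: torch.Tensor) -> torch.Tensor:
    """Differential entropy of a Gaussian fitted to a batch of latent vectors
    z of shape (N, D); a small ridge keeps the covariance invertible."""
    n, d = z.shape
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.T @ zc / (n - 1) + 1e-5 * torch.eye(d, device=z.device)
    return 0.5 * (d * math.log(2.0 * math.pi * math.e) + torch.logdet(cov))

# Hypothetical use inside an AE training step:
# loss = recon_loss + lambda_entropy * gaussian_latent_entropy(z)
```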
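Similarly, for the "Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection" entry, the masking step can be pictured as replacing a random subset of patch tokens from normal data with a learnable mask token and training the AE to restore the original tokens. The sketch below is a generic, assumed rendition of that idea; the class name, embedding size, and masking ratio are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class PseudoAnomalyMasker(nn.Module):
    """Synthesizes pseudo-anomalous inputs from normal data by swapping a
    random subset of patch tokens for a learnable mask token (toy sketch)."""

    def __init__(self, embed_dim: int = 256, mask_ratio: float = 0.3):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.mask_ratio = mask_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings of a normal image or clip.
        b, n, _ = tokens.shape
        num_mask = max(1, int(self.mask_ratio * n))
        idx = torch.rand(b, n, device=tokens.device).topk(num_mask, dim=1).indices
        mask = torch.zeros(b, n, dtype=torch.bool, device=tokens.device)
        mask.scatter_(1, idx, True)                          # random patches to corrupt
        pseudo = torch.where(mask.unsqueeze(-1), self.mask_token.expand(b, n, -1), tokens)
        return pseudo  # the AE is then trained to reconstruct the original tokens
```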
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of the information on this site is not guaranteed, and the site is not responsible for any consequences arising from its use.