Probabilistic Robust Autoencoders for Anomaly Detection
- URL: http://arxiv.org/abs/2110.00494v1
- Date: Fri, 1 Oct 2021 15:46:38 GMT
- Title: Probabilistic Robust Autoencoders for Anomaly Detection
- Authors: Yariv Aizenbud, Ofir Lindenbaum, Yuval Kluger
- Abstract summary: We propose a new type of autoencoder (AE), which we term the Probabilistic Robust Autoencoder (PRAE).
PRAE is designed to simultaneously remove outliers and identify a low-dimensional representation for the inlier samples.
We prove that the solution to PRAE is equivalent to the solution of RAE and demonstrate using extensive simulations that PRAE is on par with state-of-the-art methods for anomaly detection.
- Score: 7.362415721170984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Empirical observations often consist of anomalies (or outliers) that
contaminate the data. Accurate identification of anomalous samples is crucial
for the success of downstream data analysis tasks. To automatically identify
anomalies, we propose a new type of autoencoder (AE) which we term
Probabilistic Robust autoencoder (PRAE). PRAE is designed to simultaneously
remove outliers and identify a low-dimensional representation for the inlier
samples. We first describe Robust AE (RAE) as a model that aims to split the
data into inlier samples, from which a low-dimensional representation is learned
via an AE, and anomalous (outlier) samples, which are excluded because they do not
fit the low-dimensional representation. Robust AE minimizes the reconstruction
error of the AE while attempting to incorporate as many observations as possible. This
could be realized by subtracting from the reconstruction term an $\ell_0$ norm
counting the number of selected observations. Since the $\ell_0$ norm is not
differentiable, we propose two probabilistic relaxations for the RAE approach
and demonstrate that they can effectively identify anomalies. We prove that the
solution to PRAE is equivalent to the solution of RAE and demonstrate using
extensive simulations that PRAE is on par with state-of-the-art methods for
anomaly detection.
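As a rough illustration of the formulation above, the sketch below shows one way the non-differentiable $\ell_0$ selection term can be relaxed with per-sample stochastic gates so that sample selection and the AE weights are trained jointly by gradient descent. This is a minimal sketch under assumed choices (a binary-concrete-style gate relaxation, a small MLP autoencoder, and an illustrative regularization weight lam); it is not the authors' implementation, which proposes two specific probabilistic relaxations.

    # Minimal sketch (not the authors' code): RAE-style objective
    #   min_{AE, z}  sum_i z_i * ||x_i - AE(x_i)||^2  -  lambda * ||z||_0
    # with the hard 0/1 selection z_i replaced by a differentiable,
    # binary-concrete-style gate so the whole objective is trainable by SGD.
    import torch
    import torch.nn as nn

    class GatedRobustAE(nn.Module):
        def __init__(self, dim_in, dim_latent, n_samples, lam=1.0):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                         nn.Linear(64, dim_latent))
            self.decoder = nn.Sequential(nn.Linear(dim_latent, 64), nn.ReLU(),
                                         nn.Linear(64, dim_in))
            # one learnable gate logit per training sample (an assumed design choice)
            self.gate_logits = nn.Parameter(torch.zeros(n_samples))
            self.lam = lam

        def forward(self, x, idx):
            recon = self.decoder(self.encoder(x))
            per_sample_err = ((recon - x) ** 2).mean(dim=1)
            # relaxed Bernoulli gate: logistic noise added to the logits, then a sigmoid
            u = torch.rand_like(per_sample_err).clamp(1e-6, 1 - 1e-6)
            z = torch.sigmoid(self.gate_logits[idx] + torch.log(u) - torch.log(1 - u))
            # gate the reconstruction error; the -lam * z term rewards keeping samples
            loss = (z * per_sample_err).mean() - self.lam * z.mean()
            return loss, z

After training, samples whose gates stay near zero are the ones the model chose to exclude and can be flagged as anomalies; lam controls the trade-off between excluding a sample and paying its reconstruction cost.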
Related papers
- Exploiting Autoencoder's Weakness to Generate Pseudo Anomalies [17.342474659784823]
A typical approach to anomaly detection is to train an autoencoder (AE) with normal data only so that it learns the patterns or representations of the normal data.
We propose creating pseudo anomalies from learned adaptive noise by exploiting a weakness of AEs, i.e., their tendency to reconstruct anomalies too well.
arXiv Detail & Related papers (2024-05-09T16:22:24Z)
- Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other existing state-of-the-art PA generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z)
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo anomaly samples are synthesized from only normal data by embedding random mask tokens without extra data processing.
We also propose a normalcy consistency training strategy that encourages the AE to better learn regular patterns from normal data and the corresponding pseudo anomaly data.
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
- An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots [76.36017224414523]
We consider the problem of building visual anomaly detection systems for mobile robots.
Standard anomaly detection models are trained using large datasets composed only of non-anomalous data.
We tackle the problem of exploiting these data to improve the performance of a Real-NVP anomaly detection model.
arXiv Detail & Related papers (2022-09-20T15:18:13Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- What do we learn? Debunking the Myth of Unsupervised Outlier Detection [9.599183039166284]
We investigate what auto-encoders actually learn when they are posed to solve two different tasks.
We show that state-of-the-art (SOTA) AEs are either unable to constrain the latent manifold, allowing reconstruction of abnormal patterns, or they fail to accurately restore the inputs from their latent distribution.
We propose novel deformable auto-encoders (AEMorphus) to learn perceptually aware global image priors and locally adapt their morphometry.
arXiv Detail & Related papers (2022-06-08T06:36:16Z)
- Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection [16.436293069942312]
Autoencoders (AEs) often start reconstructing anomalies as well, which degrades their anomaly detection performance.
We propose a temporal pseudo anomaly synthesizer that generates fake-anomalies using only normal data.
An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing this loss on normal data (a margin-based sketch of this kind of objective appears after this list).
arXiv Detail & Related papers (2021-10-19T07:08:44Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection [9.19194451963411]
Semi-supervised anomaly detection aims to distinguish anomalies from normal samples using a model that is trained only on normal data.
We propose a method, DASVDD, that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hyper-sphere on its latent representation.
arXiv Detail & Related papers (2021-06-09T21:57:41Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
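As referenced in the Synthetic Temporal Anomaly Guided entry above, a recurring pattern in the pseudo-anomaly papers listed here is to minimize reconstruction error on normal data while preventing the AE from also reconstructing synthesized pseudo anomalies. The sketch below shows one plausible margin-based realization of that idea; the hinge form and the margin hyperparameter are assumptions for illustration, not the exact losses used in those papers.

    # Hedged sketch of a pseudo-anomaly training objective (illustrative only):
    # minimize reconstruction error on normal inputs and push the error on
    # synthesized pseudo anomalies above an assumed margin.
    import torch

    def pseudo_anomaly_loss(ae, x_normal, x_pseudo, margin=1.0):
        err_normal = ((ae(x_normal) - x_normal) ** 2).mean()
        err_pseudo = ((ae(x_pseudo) - x_pseudo) ** 2).mean()
        # hinge term vanishes once pseudo anomalies are reconstructed badly enough
        return err_normal + torch.clamp(margin - err_pseudo, min=0.0)

Here x_pseudo would be generated from x_normal, e.g., by masking, inpainting, or temporal shuffling, as the entries above describe.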
This list is automatically generated from the titles and abstracts of the papers in this site.