Do autoencoders need a bottleneck for anomaly detection?
- URL: http://arxiv.org/abs/2202.12637v1
- Date: Fri, 25 Feb 2022 11:57:58 GMT
- Title: Do autoencoders need a bottleneck for anomaly detection?
- Authors: Bang Xiang Yong, Alexandra Brintrup
- Abstract summary: Learning the identity function renders the AEs useless for anomaly detection.
In this work, we investigate the value of non-bottlenecked AEs.
We propose the infinitely-wide AEs as an extreme example of non-bottlenecked AEs.
- Score: 78.24964622317634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common belief in designing deep autoencoders (AEs), a type of unsupervised
neural network, is that a bottleneck is required to prevent learning the
identity function. Learning the identity function renders the AEs useless for
anomaly detection. In this work, we challenge this limiting belief and
investigate the value of non-bottlenecked AEs.
The bottleneck can be removed in two ways: (1) overparameterising the latent
layer, and (2) introducing skip connections. However, few works have
reported on either approach. For the first time, we carry out
extensive experiments covering various combinations of bottleneck removal
schemes, types of AEs and datasets. In addition, we propose the infinitely-wide
AEs as an extreme example of non-bottlenecked AEs.
Their improvement over the baseline implies that learning the identity function
is not as trivial as previously assumed. Moreover, we find that non-bottlenecked
architectures (highest AUROC=0.857) can outperform their bottlenecked
counterparts (highest AUROC=0.696) on the popular task of CIFAR (inliers) vs
SVHN (anomalies), among other tasks, shedding light on the potential of
developing non-bottlenecked AEs for improving anomaly detection.
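To make the evaluation above concrete: an AE flags anomalies when its reconstruction error is high, and AUROC (the metric quoted above) measures how well those errors separate inliers from anomalies. The sketch below is illustrative only and assumes a generic trained AE; `mse` and `auroc` are toy helpers, not code from the paper.

```python
def mse(x, x_hat):
    # Reconstruction error of one sample: mean squared difference
    # between the input x and the AE's reconstruction x_hat.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def auroc(inlier_scores, anomaly_scores):
    # AUROC equals the probability that a randomly chosen anomaly
    # scores higher than a randomly chosen inlier (ties count 0.5).
    # Higher reconstruction error means "more anomalous".
    wins = 0.0
    for a in anomaly_scores:
        for i in inlier_scores:
            wins += 1.0 if a > i else (0.5 if a == i else 0.0)
    return wins / (len(anomaly_scores) * len(inlier_scores))
```

With perfectly separated errors (e.g. inliers `[0.1, 0.2, 0.15]`, anomalies `[0.9, 0.3]`) the score is 1.0; an AE that learned the identity function would reconstruct anomalies just as well as inliers, pushing the score toward 0.5 (chance).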
Related papers
- Open-Vocabulary Video Anomaly Detection [57.552523669351636]
Video anomaly detection (VAD) with weak supervision has achieved remarkable performance in utilizing video-level labels to discriminate whether a video frame is normal or abnormal.
Recent studies attempt to tackle a more realistic setting, open-set VAD, which aims to detect unseen anomalies given seen anomalies and normal videos.
This paper takes a step further and explores open-vocabulary video anomaly detection (OVVAD), in which we aim to leverage pre-trained large models to detect and categorize seen and unseen anomalies.
arXiv Detail & Related papers (2023-11-13T02:54:17Z)
- Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly Detectors [117.61449210940955]
We propose an efficient abnormal event detection model based on a lightweight masked auto-encoder (AE) applied at the video frame level.
We introduce an approach to weight tokens based on motion gradients, thus shifting the focus from the static background scene to the foreground objects.
We generate synthetic abnormal events to augment the training videos, and task the masked AE model to jointly reconstruct the original frames.
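The motion-gradient token weighting described in the summary above can be sketched as follows. This is a toy, pixel-level version under stated assumptions (the actual model operates on patch tokens); the function name is illustrative.

```python
def motion_gradient_weights(prev_frame, frame):
    # Weight each token (a pixel, in this toy version) by the magnitude
    # of its temporal gradient, emphasising moving foreground objects
    # over the static background scene.
    grads = [abs(a - b) for a, b in zip(frame, prev_frame)]
    total = sum(grads) or 1.0  # avoid division by zero on static frames
    return [g / total for g in grads]
```

Static regions receive weight 0, so the masked AE's reconstruction objective concentrates on the regions where anomalous events can actually appear.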
arXiv Detail & Related papers (2023-06-21T06:18:05Z)
- Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo anomaly samples are synthesized from only normal data by embedding random mask tokens without extra data processing.
We also propose a normalcy consistency training strategy that encourages the AEs to better learn the regular knowledge from normal and corresponding pseudo anomaly data.
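The pseudo-anomaly synthesis named above — embedding random mask tokens into normal data — can be sketched as below. All names and the mask ratio are illustrative assumptions, not the paper's actual implementation.

```python
import random

def synthesize_pseudo_anomaly(frame_tokens, mask_token, mask_ratio=0.3, rng=None):
    # Create a pseudo-anomalous sample from a normal one by replacing a
    # random subset of its tokens with a mask token, with no extra data
    # processing (a sketch of the idea in the paper above).
    rng = rng or random.Random(0)
    n_mask = max(1, int(len(frame_tokens) * mask_ratio))
    idx = rng.sample(range(len(frame_tokens)), n_mask)
    out = list(frame_tokens)  # leave the normal sample untouched
    for i in idx:
        out[i] = mask_token
    return out
```

Training the AE to reconstruct normal data well while treating these corrupted samples as anomalous is what lets the framework work without any real anomaly data.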
arXiv Detail & Related papers (2023-03-09T08:33:38Z)
- Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel framework for detecting adversarial examples (AEs) so that predictions can be trusted.
It performs detection by distinguishing an AE's abnormal relations with its augmented versions.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
arXiv Detail & Related papers (2022-08-31T08:18:44Z)
- Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations [15.669678743693947]
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), which are maliciously designed to fool target models.
We propose a model-free AEs detection method, the whole process of which is free from querying the victim model.
arXiv Detail & Related papers (2022-06-30T08:48:28Z)
- What do we learn? Debunking the Myth of Unsupervised Outlier Detection [9.599183039166284]
We investigate what auto-encoders actually learn when they are posed to solve two different tasks.
We show that state-of-the-art (SOTA) AEs are either unable to constrain the latent manifold and allow reconstruction of abnormal patterns, or they are failing to accurately restore the inputs from their latent distribution.
We propose novel deformable auto-encoders (AEMorphus) to learn perceptually aware global image priors and locally adapt their morphometry.
arXiv Detail & Related papers (2022-06-08T06:36:16Z)
- Probabilistic Robust Autoencoders for Anomaly Detection [7.362415721170984]
We propose a new type of autoencoder (AE), which we term the Probabilistic Robust AutoEncoder (PRAE).
PRAE is designed to simultaneously remove outliers and identify a low-dimensional representation for the inlier samples.
We prove that the solution to PRAE is equivalent to the solution of RAE and demonstrate using extensive simulations that PRAE is on par with state-of-the-art methods for anomaly detection.
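The "simultaneously remove outliers" idea behind robust AEs like the one summarised above can be sketched as trimming the highest-reconstruction-error samples before each training step. This is a generic robust-AE sketch, not PRAE's actual algorithm; the function name and outlier fraction are illustrative assumptions.

```python
def trimmed_indices(recon_errors, outlier_fraction=0.1):
    # Keep the samples with the lowest reconstruction error, treating the
    # top fraction as presumed outliers to exclude from the next training
    # step, so the AE learns its low-dimensional representation from
    # inliers only.
    n_keep = len(recon_errors) - int(len(recon_errors) * outlier_fraction)
    order = sorted(range(len(recon_errors)), key=lambda i: recon_errors[i])
    return sorted(order[:n_keep])
```

Alternating this selection with ordinary AE training steps is the standard robust-AE loop; PRAE's contribution, per the summary, is a probabilistic relaxation whose solution provably matches it.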
arXiv Detail & Related papers (2021-10-01T15:46:38Z)
- Efficient Person Search: An Anchor-Free Approach [86.45858994806471]
Person search aims to simultaneously localize and identify a query person from realistic, uncropped images.
To achieve this goal, state-of-the-art models typically add a re-id branch upon two-stage detectors like Faster R-CNN.
In this work, we present an anchor-free approach to efficiently tackling this challenging task, by introducing the following dedicated designs.
arXiv Detail & Related papers (2021-09-01T07:01:33Z)
- Dual Adversarial Auto-Encoders for Clustering [152.84443014554745]
We propose Dual Adversarial Auto-encoder (Dual-AAE) for unsupervised clustering.
By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss which can be optimized by training a pair of Auto-encoders.
Experiments on four benchmarks show that Dual-AAE achieves superior performance over state-of-the-art clustering methods.
arXiv Detail & Related papers (2020-08-23T13:16:34Z)
- ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection [6.992807725367106]
Autoencoders (AE) have been widely employed to approach the novelty detection problem.
We propose a novel AE that can learn more semantically meaningful features.
We show that despite using a much simpler architecture, the proposed AE outperforms or is competitive to state-of-the-art on three benchmark datasets.
arXiv Detail & Related papers (2020-03-12T09:06:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.