ARAE: Adversarially Robust Training of Autoencoders Improves Novelty
Detection
- URL: http://arxiv.org/abs/2003.05669v2
- Date: Sat, 24 Oct 2020 19:42:01 GMT
- Title: ARAE: Adversarially Robust Training of Autoencoders Improves Novelty
Detection
- Authors: Mohammadreza Salehi, Atrin Arya, Barbod Pajoum, Mohammad Otoofi,
Amirreza Shaeiri, Mohammad Hossein Rohban, Hamid R. Rabiee
- Abstract summary: Autoencoders (AE) have been widely employed to approach the novelty detection problem.
We propose a novel AE that can learn more semantically meaningful features.
We show that despite using a much simpler architecture, the proposed AE outperforms or is competitive with the state of the art on three benchmark datasets.
- Score: 6.992807725367106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoencoders (AE) have recently been widely employed to approach the novelty
detection problem. Trained only on the normal data, the AE is expected to
reconstruct the normal data effectively while failing to regenerate the
anomalous data, a property that can be exploited for novelty detection.
However, in this paper, it is demonstrated that this does not always hold:
the AE often generalizes so well that it can also reconstruct the anomalous
data. To address this problem, we propose a novel AE that learns more
semantically meaningful features. Specifically, we exploit the fact that
adversarial robustness promotes the learning of meaningful features, and we
force the AE to learn such features by penalizing networks whose bottleneck
layer is unstable against adversarial perturbations. We show that despite
using a much simpler architecture than prior methods, the proposed AE
outperforms or is competitive with the state of the art on three benchmark
datasets.
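The abstract's core idea can be sketched in a few lines: craft a perturbation that maximizes the latent-space discrepancy, then train against both the reconstruction error on the perturbed input and a latent-stability penalty. The sketch below uses a tiny linear autoencoder in numpy; the function names (`latent_attack`, `arae_style_loss`), the FGSM-style attack, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear autoencoder: encoder W, tied decoder W.T.
d, k = 8, 3          # input and bottleneck dimensions (illustrative)
W = rng.normal(scale=0.5, size=(k, d))

def encode(x):
    return W @ x

def decode(z):
    return W.T @ z

def latent_attack(x, eps=0.1, steps=5, lr=0.05):
    """Craft a perturbation delta maximizing the latent discrepancy
    ||encode(x + delta) - encode(x)||^2, kept inside an eps-ball.
    For a linear encoder that discrepancy is ||W @ delta||^2, whose
    gradient w.r.t. delta is 2 * W.T @ W @ delta."""
    delta = rng.normal(scale=1e-3, size=x.shape)
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ delta)       # ascend the discrepancy
        delta = delta + lr * np.sign(grad)   # FGSM-style signed step
        delta = np.clip(delta, -eps, eps)    # project back to the ball
    return delta

def arae_style_loss(x, lam=1.0):
    """Reconstruction loss of the clean target from the perturbed input,
    plus a latent-stability penalty (weighting lam is assumed)."""
    x_adv = x + latent_attack(x)
    rec = np.sum((decode(encode(x_adv)) - x) ** 2)
    stab = np.sum((encode(x_adv) - encode(x)) ** 2)
    return rec + lam * stab

x = rng.normal(size=d)
print(arae_style_loss(x))
```

In a real network the attack gradient would come from automatic differentiation rather than the closed form used here; the structure of the objective is the same.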
Related papers
- Constricting Normal Latent Space for Anomaly Detection with Normal-only Training Data [11.237938539765825]
Autoencoder (AE) is typically trained to reconstruct the data.
During test time, since AE is not trained using real anomalies, it is expected to poorly reconstruct the anomalous data.
We propose to limit the reconstruction capability of AE by introducing a novel latent constriction loss.
arXiv Detail & Related papers (2024-03-24T19:22:15Z) - Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation : A Unified Approach [49.995833831087175]
This work proposes a novel method for generating generic spatio-temporal pseudo-anomalies (PAs) by inpainting a masked-out region of an image.
In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting.
Our method performs on par with other state-of-the-art PA-generation and reconstruction-based methods under the OCC setting.
arXiv Detail & Related papers (2023-11-27T13:14:06Z) - Unraveling the "Anomaly" in Time Series Anomaly Detection: A
Self-supervised Tri-domain Solution [89.16750999704969]
Anomaly labels hinder traditional supervised models in time series anomaly detection.
Various SOTA deep learning techniques, such as self-supervised learning, have been introduced to tackle this issue.
We propose a novel self-supervised learning based Tri-domain Anomaly Detector (TriAD).
arXiv Detail & Related papers (2023-11-19T05:37:18Z) - Patch-wise Auto-Encoder for Visual Anomaly Detection [1.7546477549938133]
We propose a novel patch-wise auto-encoder framework, which aims at enhancing the reconstruction ability of AE to anomalies instead of weakening it.
Our method is simple and efficient. It advances the state-of-the-art performance on the MVTec AD benchmark, which proves the effectiveness of our model.
arXiv Detail & Related papers (2023-08-01T10:15:15Z) - Synthetic Pseudo Anomalies for Unsupervised Video Anomaly Detection: A
Simple yet Efficient Framework based on Masked Autoencoder [1.9511777443446219]
We propose a simple yet efficient framework for video anomaly detection.
The pseudo anomaly samples are synthesized from only normal data by embedding random mask tokens without extra data processing.
We also propose a normalcy consistency training strategy that encourages the AEs to better learn the regular knowledge from normal and corresponding pseudo anomaly data.
arXiv Detail & Related papers (2023-03-09T08:33:38Z) - Be Your Own Neighborhood: Detecting Adversarial Example by the
Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel detection framework for adversarial examples (here, AE refers to adversarial examples rather than autoencoders), aimed at making predictions trustworthy.
It performs the detection by distinguishing an adversarial example's abnormal relations with its augmented versions.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
arXiv Detail & Related papers (2022-08-31T08:18:44Z) - Do autoencoders need a bottleneck for anomaly detection? [78.24964622317634]
Learning the identity function renders the AEs useless for anomaly detection.
In this work, we investigate the value of non-bottlenecked AEs.
We propose the infinitely-wide AEs as an extreme example of non-bottlenecked AEs.
arXiv Detail & Related papers (2022-02-25T11:57:58Z) - Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent
Space Distribution Matching in WAE [51.09507030387935]
Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.
We propose to use the contrastive learning framework that has been shown to be effective for self-supervised representation learning, as a means to resolve this problem.
We show that using the contrastive learning framework to optimize the WAE loss achieves faster convergence and more stable optimization compared with existing popular algorithms for WAE.
arXiv Detail & Related papers (2021-10-19T22:55:47Z) - Synthetic Temporal Anomaly Guided End-to-End Video Anomaly Detection [16.436293069942312]
Autoencoders (AEs) often start reconstructing anomalies as well, which degrades their anomaly detection performance.
We propose a temporal pseudo anomaly synthesizer that generates fake-anomalies using only normal data.
An AE is then trained to maximize the reconstruction loss on pseudo anomalies while minimizing this loss on normal data.
arXiv Detail & Related papers (2021-10-19T07:08:44Z) - Learning Not to Reconstruct Anomalies [14.632592282260363]
An autoencoder (AE) is trained to reconstruct its input using a training set consisting only of normal data.
AE is then expected to well reconstruct the normal data while poorly reconstructing the anomalous data.
We propose a novel methodology to train AEs with the objective of reconstructing only normal data, regardless of the input.
arXiv Detail & Related papers (2021-10-19T05:22:38Z) - Unsupervised and self-adaptative techniques for cross-domain person
re-identification [82.54691433502335]
Person Re-Identification (ReID) across non-overlapping cameras is a challenging task.
Unsupervised Domain Adaptation (UDA) is a promising alternative, as it performs feature-learning adaptation from a model trained on a source to a target domain without identity-label annotation.
In this paper, we propose a novel UDA-based ReID method that takes advantage of triplets of samples created by a new offline strategy.
arXiv Detail & Related papers (2021-03-21T23:58:39Z)
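Most of the entries above rely on the same scoring recipe: train on normal data only, then flag inputs whose reconstruction error exceeds a threshold chosen on normal data. A minimal sketch, using a PCA projection as a stand-in for a trained autoencoder (the bottleneck size, threshold percentile, and synthetic data are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal(size=(200, 10))     # normal-only training data

# Principal subspace as a linear encoder/decoder pair (3-dim "bottleneck").
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]

def reconstruct(x):
    z = (x - mean) @ components.T       # encode into the bottleneck
    return z @ components + mean        # decode back to input space

def score(x):
    # Novelty score: squared reconstruction error per sample.
    return np.sum((x - reconstruct(x)) ** 2, axis=-1)

# Threshold at, e.g., the 95th percentile of the normal scores.
tau = np.percentile(score(normal), 95)

anomaly = rng.normal(loc=5.0, size=10)  # mean-shifted, "novel" input
print(score(anomaly) > tau)
```

The ARAE paper's observation is precisely that a powerful AE can shrink `score(anomaly)` below `tau` by reconstructing anomalies too well, which is what the latent-stability penalty is meant to prevent.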
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.