Improving auto-encoder novelty detection using channel attention and
entropy minimization
- URL: http://arxiv.org/abs/2007.01682v2
- Date: Mon, 10 May 2021 05:36:11 GMT
- Title: Improving auto-encoder novelty detection using channel attention and
entropy minimization
- Authors: Miao Tian, Dongyan Guo, Ying Cui, Xiang Pan, Shengyong Chen
- Abstract summary: We introduce an attention mechanism to improve the performance of the auto-encoder for novelty detection.
Under the attention mechanism, the auto-encoder pays more attention to the representation of inlier samples through adversarial training.
We apply information entropy to the latent layer to make it sparse and to constrain the expression of diversity.
- Score: 36.58514518563204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novelty detection is an important research area that mainly addresses
the problem of separating inliers, which usually consist of normal samples,
from outliers composed of abnormal samples. Auto-encoders are often used for
novelty detection. However, the generalization ability of the auto-encoder may
cause undesirable reconstruction of abnormal elements and reduce the
discriminative ability of the model. To solve this problem, we focus on better
reconstructing the normal samples while retaining their unique information, in
order to improve the performance of the auto-encoder for novelty detection.
First, we introduce an attention mechanism into the task: under its action, the
auto-encoder pays more attention to the representation of inlier samples
through adversarial training. Second, we apply information entropy to the
latent layer to make it sparse and to constrain the expression of diversity.
Experimental results on three public datasets show that the proposed method
achieves performance comparable with previous popular approaches.
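The two ideas in the abstract can be sketched in a few lines: a channel-attention gate that reweights feature channels, and an entropy penalty on the latent code that pushes it toward sparsity. This is a minimal, hedged illustration only; the function names, the squeeze-and-excitation form of the attention, the normalization of the latent code, and the weight `lam` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (one common form of
    channel attention; the paper's module may differ).
    feat: (C, H, W) feature map; w1: (C, C_r); w2: (C_r, C)."""
    squeeze = feat.mean(axis=(1, 2))             # global average pool -> (C,)
    excite = np.maximum(squeeze @ w1, 0.0)       # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(excite @ w2)))  # sigmoid gates in (0, 1)
    return feat * gate[:, None, None]            # reweight each channel

def latent_entropy(z, eps=1e-12):
    """Entropy of the latent code, treating normalized absolute activations
    as a per-sample distribution; low entropy means a sparse code."""
    p = np.abs(z) / (np.abs(z).sum(axis=1, keepdims=True) + eps)
    return -np.sum(p * np.log(p + eps), axis=1).mean()

def total_loss(x, x_hat, z, lam=0.1):
    """Reconstruction error plus an entropy-minimization term on the latent
    layer, with an assumed trade-off weight lam."""
    recon = np.mean((x - x_hat) ** 2)
    return recon + lam * latent_entropy(z)
```

A one-hot latent code like `[1, 0, 0]` has entropy near zero, while a uniform code like `[1, 1, 1]` has entropy `log(3)`, so minimizing the entropy term drives the latent layer toward sparse, concentrated representations, as the abstract intends.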
Related papers
- Targeted collapse regularized autoencoder for anomaly detection: black hole at the center [3.924781781769534]
Autoencoders can generalize beyond the normal class and achieve a small reconstruction error on some anomalous samples.
We propose a remarkably straightforward alternative: instead of adding neural network components, involved computations, and cumbersome training, we complement the reconstruction loss with a computationally light term.
This mitigates the black-box nature of autoencoder-based anomaly detection algorithms and offers an avenue for further investigation of advantages, fail cases, and potential new directions.
arXiv Detail & Related papers (2023-06-22T01:33:47Z) - Anomaly Detection with Adversarially Learned Perturbations of Latent
Space [9.473040033926264]
Anomaly detection aims to identify samples that do not conform to the distribution of the normal data.
In this work, we have designed an adversarial framework consisting of two competing components: an Adversarial Distorter and an Autoencoder.
The proposed method outperforms the existing state-of-the-art methods in anomaly detection on image and video datasets.
arXiv Detail & Related papers (2022-07-03T19:32:00Z) - Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z) - Discriminative Feature Learning Framework with Gradient Preference for
Anomaly Detection [6.026443496519457]
We propose a novel discriminative feature learning framework with gradient preference for anomaly detection.
Specifically, we design a gradient preference based selector to store powerful feature points in space and then construct a feature repository.
Our method outperforms the state-of-the-art in few-shot anomaly detection.
arXiv Detail & Related papers (2022-04-23T08:05:15Z) - SLA$^2$P: Self-supervised Anomaly Detection with Adversarial
Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - ESAD: End-to-end Deep Semi-supervised Anomaly Detection [85.81138474858197]
We propose a new objective function that measures the KL-divergence between normal and anomalous data.
The proposed method significantly outperforms several state-of-the-art methods on multiple benchmark datasets.
arXiv Detail & Related papers (2020-12-09T08:16:35Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.