Deep Visual Anomaly detection with Negative Learning
- URL: http://arxiv.org/abs/2105.11058v1
- Date: Mon, 24 May 2021 01:48:44 GMT
- Title: Deep Visual Anomaly detection with Negative Learning
- Authors: Jin-Ha Lee, Marcella Astrid, Muhammad Zaigham Zaheer, Seung-Ik Lee
- Abstract summary: In this paper, we propose anomaly detection with negative learning (ADNL), which employs the negative learning concept for the enhancement of anomaly detection.
The idea is to limit the reconstruction capability of a generative model using a given small amount of anomaly examples.
This way, the network not only learns to reconstruct normal data but also encloses the normal distribution far from the possible distribution of anomalies.
- Score: 18.79849041106952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increase in the learning capability of deep convolution-based
architectures, various applications of such models have been proposed over
time. In the field of anomaly detection, improvements in deep learning opened
new prospects of exploration for researchers who tried to automate the
labor-intensive features of data collection. First, in terms of data
collection, it is impossible to anticipate all the anomalies that might exist
in a given environment. Second, assuming we limit the possibilities of
anomalies, it will still be hard to record all these scenarios for the sake of
training a model. Third, even if we manage to record a significant amount of
abnormal data, it is laborious to annotate this data at the pixel or even frame
level. Various approaches address the problem by proposing one-class
classification using generative models trained on only normal data. In such
methods, only the normal data is used, which is abundantly available and
doesn't require significant human input. However, because these models are trained
with only normal data, they may, when given abnormal data as input at test time,
still generate normal-looking output. This happens due to the hallucination
characteristic of generative models. Moreover, these systems are designed not to
use abnormal examples during training. In this paper, we propose anomaly
detection with negative learning (ADNL), which employs the negative learning
concept for the enhancement of anomaly detection by utilizing a very small
number of labeled anomaly data as compared with the normal data during
training. The idea is to limit the reconstruction capability of a generative
model using a given small amount of anomaly examples. This way, the network
not only learns to reconstruct normal data but also encloses the normal
distribution far from the possible distribution of anomalies.
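To make the negative-learning idea concrete, the following is a minimal PyTorch-style sketch of a reconstruction objective that rewards accurate reconstruction of normal samples while pushing the reconstruction error of the few labeled anomalies above a margin. The function name, the margin value, and the hinge formulation are illustrative assumptions, not the paper's exact ADNL loss.

```python
import torch
import torch.nn.functional as F

def negative_learning_recon_loss(recon, target, is_anomaly, margin=1.0):
    """Illustrative negative-learning reconstruction objective (a sketch, not the exact ADNL loss).

    recon, target : tensors of shape (batch, ...) from a generative model (e.g. an autoencoder).
    is_anomaly    : bool tensor of shape (batch,), True for the few labeled anomaly examples.
    """
    # Per-sample mean squared reconstruction error.
    err = ((recon - target) ** 2).flatten(start_dim=1).mean(dim=1)

    normal_err = err[~is_anomaly]
    anomaly_err = err[is_anomaly]

    # Normal samples: minimize reconstruction error as usual.
    loss_normal = normal_err.mean() if normal_err.numel() > 0 else err.new_zeros(())
    # Labeled anomalies: penalize *good* reconstructions, i.e. push their error
    # above the margin so the model cannot reconstruct anomalies well.
    loss_anomaly = F.relu(margin - anomaly_err).mean() if anomaly_err.numel() > 0 else err.new_zeros(())

    return loss_normal + loss_anomaly
```

In training, such a term would be added to (or substituted for) the usual reconstruction loss, with the anomaly portion of each batch kept much smaller than the normal portion, mirroring the assumption that only a handful of labeled anomalies are available.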
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - LARA: A Light and Anti-overfitting Retraining Approach for Unsupervised
Time Series Anomaly Detection [49.52429991848581]
We propose a Light and Anti-overfitting Retraining Approach (LARA) for deep variational auto-encoder (VAE) based time series anomaly detection methods.
This work makes three novel contributions: 1) the retraining process is formulated as a convex problem, so it converges quickly and prevents overfitting; 2) a ruminate block is designed that leverages historical data without needing to store it; and 3) it is mathematically proven that, when fine-tuning the latent vector and reconstructed data, linear formations achieve the least adjusting errors between the ground truths and the fine-tuned ones.
arXiv Detail & Related papers (2023-10-09T12:36:16Z) - Few-shot Anomaly Detection in Text with Deviation Learning [13.957106119614213]
We introduce FATE, a framework that learns anomaly scores explicitly in an end-to-end manner using deviation learning (see the deviation-loss sketch after this list).
Our model is optimized to learn the distinct behavior of anomalies by utilizing a multi-head self-attention layer and multiple instance learning approaches.
arXiv Detail & Related papers (2023-08-22T20:40:21Z) - Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z) - Learning Not to Reconstruct Anomalies [14.632592282260363]
An autoencoder (AE) is trained to reconstruct the input with a training set consisting only of normal data.
The AE is then expected to reconstruct normal data well while reconstructing anomalous data poorly.
We propose a novel methodology to train AEs with the objective of reconstructing only normal data, regardless of the input.
arXiv Detail & Related papers (2021-10-19T05:22:38Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Discriminative-Generative Dual Memory Video Anomaly Detection [81.09977516403411]
Recently, researchers have tried to use a few anomalies for video anomaly detection (VAD) instead of only normal data during training.
We propose a DiscRiminative-gEnerative duAl Memory (DREAM) anomaly detection model to take advantage of a few anomalies and solve data imbalance.
arXiv Detail & Related papers (2021-04-29T15:49:01Z) - MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection [16.914663209964697]
We propose a deep learning approach to the anomaly detection problem named Multi-Layer One-Class Classification (MOCCA).
We explicitly leverage the piece-wise nature of deep neural networks by exploiting information extracted at different depths to detect abnormal data instances.
We show that our method reaches superior performances compared to the state-of-the-art approaches available in the literature.
arXiv Detail & Related papers (2020-12-09T08:32:56Z) - Toward Deep Supervised Anomaly Detection: Reinforcement Learning from
Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
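Several of the listed papers (FATE, the deviation-network framework, PReNet) score anomalies directly via deviation learning from a handful of labeled anomalies. As a rough illustration of that idea, the sketch below assumes a standard-normal reference prior for normal scores and a margin hyperparameter; it is a generic deviation-style loss, not a re-implementation of any specific paper.

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """Generic deviation-style loss sketch (assumptions: standard-normal
    reference prior for normal scores, hinge margin for labeled anomalies).

    scores : tensor of shape (batch,), raw anomaly scores from a network.
    labels : tensor of shape (batch,), 0 for normal, 1 for a labeled anomaly.
    """
    labels = labels.float()

    # Reference statistics for "normal" scores drawn from a prior N(0, 1).
    ref = torch.randn(n_ref, device=scores.device)
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)

    # Normals are pulled toward the reference mean; labeled anomalies are
    # pushed at least `margin` standard deviations above it.
    per_sample = (1 - labels) * dev.abs() + labels * torch.clamp(margin - dev, min=0)
    return per_sample.mean()
```

In this formulation, normal samples get low anomaly scores near the reference distribution, while the few labeled anomalies are driven to statistically significant deviations, which is the same asymmetry the ADNL negative-learning objective exploits on the reconstruction side.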