Explainable Deep Few-shot Anomaly Detection with Deviation Networks
- URL: http://arxiv.org/abs/2108.00462v1
- Date: Sun, 1 Aug 2021 14:33:17 GMT
- Title: Explainable Deep Few-shot Anomaly Detection with Deviation Networks
- Authors: Guansong Pang, Choubo Ding, Chunhua Shen, Anton van den Hengel
- Abstract summary: We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
- Score: 123.46611927225963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing anomaly detection paradigms overwhelmingly focus on training
detection models using exclusively normal data or unlabeled data (mostly normal
samples). One notorious issue with these approaches is that they are weak at
discriminating anomalies from normal samples due to their lack of knowledge
about the anomalies. Here, we study the problem of few-shot anomaly detection,
in which we aim at using a few labeled anomaly examples to train
sample-efficient discriminative detection models. To address this problem, we
introduce a novel weakly-supervised anomaly detection framework that trains
detection models without assuming that the available examples illustrate every
possible class of anomaly.
Specifically, the proposed approach learns discriminative normality
(regularity) by leveraging the labeled anomalies and a prior probability to
enforce expressive representations of normality and unbounded deviated
representations of abnormality. This is achieved by an end-to-end optimization
of anomaly scores with neural deviation learning, in which the anomaly scores
of normal samples are constrained to approximate scalar scores drawn from the
prior, while those of anomaly examples are enforced to show statistically
significant deviations from these sampled scores in the upper tail.
Furthermore, our model
is optimized to learn fine-grained normality and abnormality by top-K
multiple-instance-learning-based feature subspace deviation learning, allowing
more generalized representations. Comprehensive experiments on nine real-world
image anomaly detection benchmarks show that our model is substantially more
sample-efficient and robust, and performs significantly better than
state-of-the-art competing methods in both closed-set and open-set settings.
Our model can also offer explanation capability as a result of its prior-driven
anomaly score learning. Code and datasets are available at:
https://git.io/DevNet.
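The prior-driven score learning described above can be illustrated with a small sketch. This is not the authors' DevNet code: it assumes a standard Gaussian prior, a z-score-style deviation, and a hinge margin of 5 standard deviations, and the top-K multiple-instance aggregation is a simplified stand-in for the paper's feature subspace deviation learning:

```python
import numpy as np

def deviation_loss(scores, labels, n_prior=5000, margin=5.0, seed=0):
    """Deviation loss sketch: pull normal scores toward the mean of a
    Gaussian prior; push labeled-anomaly scores at least `margin`
    standard deviations into the prior's upper tail."""
    rng = np.random.default_rng(seed)
    ref = rng.standard_normal(n_prior)       # scalar scores drawn from the N(0, 1) prior
    dev = (scores - ref.mean()) / ref.std()  # z-score-style deviation per sample
    # labels: 0 = normal, 1 = labeled anomaly
    return np.mean((1 - labels) * np.abs(dev)
                   + labels * np.maximum(0.0, margin - dev))

def topk_mil_score(patch_scores, k=3):
    """Top-K multiple-instance aggregation sketch: score a sample by the
    mean of its K highest patch (feature-subspace) anomaly scores."""
    return float(np.mean(np.sort(patch_scores)[-k:]))
```

Under this loss, a normal sample scoring near the prior mean incurs a near-zero penalty, while a labeled anomaly keeps being penalized until its score exceeds the prior mean by roughly `margin` standard deviations, which is what makes the learned scores directly interpretable as deviations from normality.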
Related papers
- Adaptive Deviation Learning for Visual Anomaly Detection with Data Contamination [20.4008901760593]
We introduce a systematic adaptive method that employs deviation learning to compute anomaly scores end-to-end.
Our proposed method surpasses competing techniques and exhibits both stability and robustness in the presence of data contamination.
arXiv Detail & Related papers (2024-11-14T16:10:15Z)
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- Few-shot Anomaly Detection in Text with Deviation Learning [13.957106119614213]
We introduce FATE, a framework that learns anomaly scores explicitly, end-to-end, using deviation learning.
Our model is optimized to learn the distinct behavior of anomalies by utilizing a multi-head self-attention layer and multiple instance learning approaches.
arXiv Detail & Related papers (2023-08-22T20:40:21Z)
- SaliencyCut: Augmenting Plausible Anomalies for Anomaly Detection [24.43321988051129]
We propose a novel saliency-guided data augmentation method, SaliencyCut, to produce pseudo but more common anomalies.
We then design a novel patch-wise residual module in the anomaly learning head to extract and assess the fine-grained anomaly features from each sample.
arXiv Detail & Related papers (2023-06-14T08:55:36Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
Those anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- Deep Weakly-supervised Anomaly Detection [118.55172352231381]
The Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
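The pairwise-relation idea behind PReNet can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: it pairs the few labeled anomalies (y = 1) with unlabeled data (y = 0) and assigns surrogate ordinal targets (two anomalies > mixed pair > two unlabeled; the constants here are assumptions), leaving the relation-feature network itself out:

```python
import numpy as np
from itertools import combinations

def build_pair_dataset(X, y):
    """Turn per-sample data into pairwise training triples: each pair is
    the concatenated features of two samples, and its target is an
    illustrative ordinal value (2.0 for anomaly+anomaly, 1.0 for a mixed
    pair, 0.0 for unlabeled+unlabeled; not the paper's exact constants)."""
    pairs, targets = [], []
    for i, j in combinations(range(len(X)), 2):
        pairs.append(np.concatenate([X[i], X[j]]))   # relation input
        targets.append(float(y[i] + y[j]))           # surrogate ordinal target
    return np.array(pairs), np.array(targets)
```

A regressor trained on such triples scores a test sample by pairing it with training samples and aggregating the predicted relation scores, which is how an unseen anomaly type can still surface: it only needs to form anomalous-looking pairs.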
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.