Deep Weakly-supervised Anomaly Detection
- URL: http://arxiv.org/abs/1910.13601v4
- Date: Mon, 5 Jun 2023 15:05:13 GMT
- Title: Deep Weakly-supervised Anomaly Detection
- Authors: Guansong Pang, Chunhua Shen, Huidong Jin, Anton van den Hengel
- Abstract summary: Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
- Score: 118.55172352231381
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent semi-supervised anomaly detection methods that are trained using small
labeled anomaly examples and large unlabeled data (mostly normal data) have
shown largely improved performance over unsupervised methods. However, these
methods often focus on fitting abnormalities illustrated by the given anomaly
examples only (i.e., seen anomalies), and consequently they fail to generalize
to those that are not, i.e., new types/classes of anomaly unseen during
training. To detect both seen and unseen anomalies, we introduce a novel deep
weakly-supervised approach, namely Pairwise Relation prediction Network
(PReNet), that learns pairwise relation features and anomaly scores by
predicting the relation of any two randomly sampled training instances, in
which the pairwise relation can be anomaly-anomaly, anomaly-unlabeled, or
unlabeled-unlabeled. Since unlabeled instances are mostly normal, the relation
prediction enforces a joint learning of anomaly-anomaly, anomaly-normal, and
normal-normal pairwise discriminative patterns, respectively. PReNet can then
detect any seen/unseen abnormalities that fit the learned pairwise abnormal
patterns, or deviate from the normal patterns. Further, this pairwise approach
also seamlessly and significantly augments the training anomaly data. Empirical
results on 12 real-world datasets show that PReNet significantly outperforms
nine competing methods in detecting seen and unseen anomalies. We also
theoretically and empirically justify the robustness of our model w.r.t.
anomaly contamination in the unlabeled data. The code is available at
https://github.com/mala-lab/PReNet.
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- Prototypical Residual Networks for Anomaly Detection and Localization [80.5730594002466]
We propose a framework called Prototypical Residual Network (PRN)
PRN learns feature residuals of varying scales and sizes between anomalous and normal patterns to accurately reconstruct the segmentation maps of anomalous regions.
We present a variety of anomaly generation strategies that consider both seen and unseen appearance variance to enlarge and diversify anomalies.
arXiv Detail & Related papers (2022-12-05T05:03:46Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
However, the anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.