SLA$^2$P: Self-supervised Anomaly Detection with Adversarial
Perturbation
- URL: http://arxiv.org/abs/2111.12896v1
- Date: Thu, 25 Nov 2021 03:53:43 GMT
- Title: SLA$^2$P: Self-supervised Anomaly Detection with Adversarial
Perturbation
- Authors: Yizhou Wang, Can Qin, Rongzhe Wei, Yi Xu, Yue Bai and Yun Fu
- Abstract summary: Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
- Score: 77.71161225100927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection is a fundamental yet challenging problem in machine
learning due to the lack of label information. In this work, we propose a novel
and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
After extracting representative embeddings from raw data, we apply random
projections to the features and regard features transformed by different
projections as belonging to distinct pseudo classes. We then train a classifier
network on these transformed features to perform self-supervised learning. Next,
we add adversarial perturbation to the transformed features to decrease their
softmax scores of the predicted labels and design anomaly scores based on the
predictive uncertainties of the classifier on these perturbed features. Our
motivation is that because of the relatively small number and the decentralized
modes of anomalies, 1) the pseudo label classifier's training concentrates more
on learning the semantic information of normal data rather than anomalous data;
2) the transformed features of the normal data are more robust to the
perturbations than those of the anomalies. Consequently, the perturbed
transformed features of anomalies fail to be classified well and accordingly
receive higher anomaly scores than those of the normal samples. Extensive
experiments on image, text and inherently tabular benchmark datasets back up
our findings and indicate that SLA$^2$P achieves state-of-the-art results on
unsupervised anomaly detection tasks consistently.
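The pipeline the abstract describes (random projections as pseudo-classes, a self-supervised classifier, an adversarial perturbation that lowers the predicted label's softmax score, and an uncertainty-based anomaly score) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the linear softmax classifier, the synthetic embeddings, and the values of `K` and `EPS` are assumptions made for the sketch; the paper uses learned deep features and a neural classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, EPS = 16, 4, 0.1  # feature dim, number of random projections, perturbation size

# 1) K random projections; features transformed by projection k form pseudo-class k.
projections = rng.standard_normal((K, D, D)) / np.sqrt(D)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-in "normal" embeddings: a shared mean plus small noise (illustrative only).
mu = 3.0 * rng.standard_normal(D)
X = mu + 0.5 * rng.standard_normal((512, D))

# 2) Self-supervised task: train a linear softmax classifier to predict
#    which projection produced a given transformed feature.
Xt = np.concatenate([X @ P.T for P in projections])
y = np.repeat(np.arange(K), len(X))
W = np.zeros((K, D))
for _ in range(300):                     # plain gradient descent on cross-entropy
    p = softmax(Xt @ W.T)
    p[np.arange(len(y)), y] -= 1.0
    W -= 0.1 * (p.T @ Xt) / len(y)

# 3) Anomaly score: perturb each transformed feature with a signed-gradient
#    (FGSM-style) step that decreases the predicted label's softmax score,
#    then average the classifier's predictive uncertainty over projections.
def anomaly_score(x):
    scores = []
    for P in projections:
        z = (x @ P.T)[None, :]
        probs = softmax(z @ W.T)
        label = probs.argmax()
        # gradient of cross-entropy wrt the input of a linear softmax model
        g = (probs[0] - np.eye(K)[label]) @ W
        z_adv = z + EPS * np.sign(g)     # push away from the predicted label
        scores.append(1.0 - softmax(z_adv @ W.T)[0].max())
    return float(np.mean(scores))
```

One reason the signed-gradient step is a natural choice here: for a model whose loss is convex in the input (as the linear softmax model above is), the step can only increase the cross-entropy of the predicted label, so confidence weakly decreases; features whose pseudo-class the classifier has learned well stay confident, while poorly fit ones lose confidence faster.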
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z)
- AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacture.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z)
- RoSAS: Deep Semi-Supervised Anomaly Detection with Contamination-Resilient
Continuous Supervision [21.393509817509464]
This paper proposes a novel semi-supervised anomaly detection method, which devises contamination-resilient continuous supervisory signals.
Our approach significantly outperforms state-of-the-art competitors by 20%-30% in AUC-PR.
arXiv Detail & Related papers (2023-07-25T04:04:49Z)
- Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from
Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- Deep Weakly-supervised Anomaly Detection [118.55172352231381]
The Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.