SaliencyCut: Augmenting Plausible Anomalies for Anomaly Detection
- URL: http://arxiv.org/abs/2306.08366v2
- Date: Wed, 1 Nov 2023 09:46:25 GMT
- Title: SaliencyCut: Augmenting Plausible Anomalies for Anomaly Detection
- Authors: Jianan Ye, Yijie Hu, Xi Yang, Qiu-Feng Wang, Chao Huang, Kaizhu Huang
- Abstract summary: We propose a novel saliency-guided data augmentation method, SaliencyCut, to produce pseudo but more common anomalies.
We then design a novel patch-wise residual module in the anomaly learning head to extract and assess the fine-grained anomaly features from each sample.
- Score: 24.43321988051129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly detection under open-set scenarios is a challenging task that
requires learning discriminative fine-grained features to detect anomalies,
even those unseen during training. As a cheap yet effective approach, data
augmentation has been widely used to create pseudo anomalies for better
training of such models. Recent augmentation methods focus on generating
random pseudo instances, which may either overlap with seen anomalies or fall
outside the typical range of anomalies. To
address this issue, we propose a novel saliency-guided data augmentation
method, SaliencyCut, to produce pseudo but more common anomalies which tend to
stay in the plausible range of anomalies. Furthermore, we deploy a two-head
learning strategy consisting of normal and anomaly learning heads, to learn the
anomaly score of each sample. Theoretical analyses show that this mechanism
offers a more tractable and tighter lower bound of the data log-likelihood. We
then design a novel patch-wise residual module in the anomaly learning head to
extract and assess the fine-grained anomaly features from each sample,
facilitating the learning of discriminative representations of anomaly
instances. Extensive experiments conducted on six real-world anomaly detection
datasets demonstrate the superiority of our method to competing methods under
various settings.
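To make the augmentation idea concrete, below is a minimal sketch (Python/NumPy) of how a saliency-guided cut-and-paste augmentation could work: a patch from a donor image is pasted into a low-saliency region of a normal sample to form a pseudo anomaly. The function name, the patch-selection heuristic, and the use of a precomputed saliency map are illustrative assumptions, not the exact SaliencyCut procedure from the paper.

```python
import numpy as np

def saliency_guided_cut(image, donor, saliency, patch_size=32, n_proposals=16, rng=None):
    """Sketch of a saliency-guided cut-and-paste augmentation.

    A patch cut from `donor` is pasted into a low-saliency region of `image`,
    producing a pseudo anomaly that avoids overwriting the salient (normal)
    object. Illustrative only; not the paper's exact SaliencyCut procedure.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ph = pw = patch_size

    # Propose a few candidate locations and keep the one whose covered
    # region has the lowest mean saliency.
    best_score, best_yx = np.inf, (0, 0)
    for _ in range(n_proposals):
        y = int(rng.integers(0, h - ph + 1))
        x = int(rng.integers(0, w - pw + 1))
        score = float(saliency[y:y + ph, x:x + pw].mean())
        if score < best_score:
            best_score, best_yx = score, (y, x)

    # Cut a random patch from the donor and paste it at the chosen spot.
    dy = int(rng.integers(0, donor.shape[0] - ph + 1))
    dx = int(rng.integers(0, donor.shape[1] - pw + 1))
    y, x = best_yx
    augmented = image.copy()
    augmented[y:y + ph, x:x + pw] = donor[dy:dy + ph, dx:dx + pw]
    return augmented
```

In training, pseudo anomalies produced this way would be labeled as anomalous and fed to the anomaly learning head alongside any real labeled anomalies; the saliency map could come from any off-the-shelf saliency estimator.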
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con2, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacture.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z) - AGAD: Adversarial Generative Anomaly Detection [12.68966318231776]
Anomaly detection suffers from a lack of anomaly examples, owing to the diversity of abnormalities and the difficulty of obtaining large-scale anomaly data.
We propose Adversarial Generative Anomaly Detection (AGAD), a self-contrast-based anomaly detection paradigm.
Our method generates pseudo-anomaly data for both supervised and semi-supervised anomaly detection scenarios.
arXiv Detail & Related papers (2023-04-09T10:40:02Z) - Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z) - Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
However, the anomalies seen during training often do not cover every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z) - Using Ensemble Classifiers to Detect Incipient Anomalies [12.947364178385637]
Incipient anomalies present milder symptoms than severe ones.
These anomalies can easily be mistaken for normal operating conditions.
We show that ensemble learning methods can give improved performance on incipient anomalies.
arXiv Detail & Related papers (2020-08-20T00:00:39Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)