Explicit Boundary Guided Semi-Push-Pull Contrastive Learning for
Supervised Anomaly Detection
- URL: http://arxiv.org/abs/2207.01463v2
- Date: Fri, 7 Apr 2023 11:31:55 GMT
- Title: Explicit Boundary Guided Semi-Push-Pull Contrastive Learning for
Supervised Anomaly Detection
- Authors: Xincheng Yao and Ruoqi Li and Jing Zhang and Jun Sun and Chongyang
Zhang
- Abstract summary: Most anomaly detection (AD) models are learned using only normal samples in an unsupervised way.
We propose a novel explicit boundary guided semi-push-pull contrastive learning mechanism.
- Score: 14.27685411466415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most anomaly detection (AD) models are learned using only normal samples in
an unsupervised way, which may result in an ambiguous decision boundary and
insufficient discriminability. In fact, a few anomaly samples are often
available in real-world applications, and the valuable knowledge of known
anomalies should also be effectively exploited. However, utilizing a few known
anomalies during training may introduce another issue: the model may be biased
by those known anomalies and fail to generalize to unseen anomalies. In this
paper, we
tackle supervised anomaly detection, i.e., we learn AD models using a few
available anomalies with the objective of detecting both seen and unseen
anomalies. We propose a novel explicit boundary guided semi-push-pull
contrastive learning mechanism, which can enhance the model's discriminability
while mitigating the bias issue. Our approach is based on two core designs:
First, we find an explicit and compact separating boundary as the guidance for
further feature learning. As the boundary only relies on the normal feature
distribution, the bias problem caused by a few known anomalies can be
alleviated. Second, a boundary guided semi-push-pull loss is developed to only
pull the normal features together while pushing the abnormal features apart
from the separating boundary beyond a certain margin region. In this way, our
model can form a more explicit and discriminative decision boundary to
distinguish both known and unseen anomalies from normal samples more
effectively. Code will be available at https://github.com/xcyao00/BGAD.
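The semi-push-pull loss is described above only at a high level; the snippet below is a minimal illustrative PyTorch sketch of the idea, not the authors' BGAD implementation (the linked repository contains that). It assumes each sample is summarized by a scalar anomaly score, sets the boundary from a high quantile of normal-sample scores so that it depends only on the normal distribution, and uses hinge terms to pull normal scores below the boundary while pushing known-anomaly scores beyond the boundary plus a margin.

```python
import torch


def semi_push_pull_loss(scores, labels, boundary, margin=1.0):
    """Illustrative boundary-guided semi-push-pull loss (sketch, not BGAD itself).

    scores   : (N,) per-sample anomaly scores, higher = more anomalous.
    labels   : (N,) 0 = normal, 1 = known anomaly.
    boundary : scalar boundary estimated from normal scores only.
    margin   : width of the margin region anomalies must be pushed beyond.
    """
    normal, abnormal = labels == 0, labels == 1
    zero = scores.new_zeros(())
    # Pull: keep normal scores at or below the boundary.
    pull = torch.relu(scores[normal] - boundary).mean() if normal.any() else zero
    # Semi-push: push known anomalies only until they clear boundary + margin,
    # instead of arbitrarily far, limiting over-fitting to the few labeled anomalies.
    push = torch.relu(boundary + margin - scores[abnormal]).mean() if abnormal.any() else zero
    return pull + push


# Boundary taken as a high quantile of normal-sample scores, so it depends
# only on the normal feature distribution (an assumption of this sketch).
scores = torch.tensor([0.2, 0.4, 0.3, 2.5, 3.0])
labels = torch.tensor([0, 0, 0, 1, 1])
boundary = torch.quantile(scores[labels == 0], 0.95)
print(semi_push_pull_loss(scores, labels, boundary))
```

Because the push term saturates once an anomaly clears the margin region, the few labeled anomalies cannot dominate training, which mirrors the bias-mitigation property the abstract emphasizes.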
Related papers
- Fine-grained Abnormality Prompt Learning for Zero-shot Anomaly Detection [88.34095233600719]
FAPrompt is a novel framework designed to learn Fine-grained Abnormality Prompts for more accurate ZSAD.
It substantially outperforms state-of-the-art methods by at least 3%-5% AUC/AP in both image- and pixel-level ZSAD tasks.
arXiv Detail & Related papers (2024-10-14T08:41:31Z)
- Anomaly Heterogeneity Learning for Open-set Supervised Anomaly Detection [26.08881235151695]
Open-set supervised anomaly detection (OSAD) aims at utilizing a few samples of anomaly classes seen during training to detect unseen anomalies.
We introduce a novel approach, namely Anomaly Heterogeneity Learning (AHL), that simulates a diverse set of heterogeneous anomaly distributions.
We show that AHL can 1) substantially enhance different state-of-the-art OSAD models in detecting seen and unseen anomalies, and 2) effectively generalize to unseen anomalies in new domains.
arXiv Detail & Related papers (2023-10-19T14:47:11Z)
- SaliencyCut: Augmenting Plausible Anomalies for Anomaly Detection [24.43321988051129]
We propose a novel saliency-guided data augmentation method, SaliencyCut, to produce pseudo but more common anomalies.
We then design a novel patch-wise residual module in the anomaly learning head to extract and assess the fine-grained anomaly features from each sample.
arXiv Detail & Related papers (2023-06-14T08:55:36Z)
- AD-MERCS: Modeling Normality and Abnormality in Unsupervised Anomaly Detection [12.070251470948772]
We present AD-MERCS, an unsupervised approach to anomaly detection that explicitly models both normality and abnormality.
AD-MERCS identifies multiple subspaces of the instance space within which patterns exist, and identifies conditions that characterize instances deviating from these patterns.
Experiments show that modeling both normality and abnormality makes the anomaly detector perform well on a wide range of anomaly types.
arXiv Detail & Related papers (2023-05-22T12:09:14Z)
- Prototypical Residual Networks for Anomaly Detection and Localization [80.5730594002466]
We propose a framework called Prototypical Residual Network (PRN).
PRN learns feature residuals of varying scales and sizes between anomalous and normal patterns to accurately reconstruct the segmentation maps of anomalous regions.
We present a variety of anomaly generation strategies that consider both seen and unseen appearance variance to enlarge and diversify anomalies.
arXiv Detail & Related papers (2022-12-05T05:03:46Z)
- Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
Those anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
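For the deviation-networks entry above, one common reading of "leveraging the labeled anomalies and a prior probability" is a deviation-style loss: each predicted score is standardized against reference scores drawn from a Gaussian prior, normal samples are driven toward zero deviation, and labeled anomalies toward deviations beyond a margin. The sketch below is a hedged illustration of that idea; the prior size and margin value are assumptions, not that paper's exact settings.

```python
import torch


def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """Deviation-style loss sketch (hedged; details are assumptions).

    scores : (N,) predicted anomaly scores.
    labels : (N,) 0 = unlabeled/normal, 1 = labeled anomaly.
    """
    ref = torch.randn(n_ref)                          # reference scores from a N(0, 1) prior
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)  # standardized deviation per sample
    normal_term = (1 - labels) * dev.abs()            # drive normal deviations toward zero
    anomaly_term = labels * torch.relu(margin - dev)  # push anomaly deviations past the margin
    return (normal_term + anomaly_term).mean()


# Example call with dummy scores; 0 = normal/unlabeled, 1 = labeled anomaly.
scores = torch.tensor([0.1, -0.2, 4.8])
labels = torch.tensor([0, 0, 1])
print(deviation_loss(scores, labels))
```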
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)