Anomaly Detection with Score Distribution Discrimination
- URL: http://arxiv.org/abs/2306.14403v1
- Date: Mon, 26 Jun 2023 03:32:57 GMT
- Title: Anomaly Detection with Score Distribution Discrimination
- Authors: Minqi Jiang, Songqiao Han, Hailiang Huang
- Abstract summary: We propose to optimize the anomaly scoring function from the view of score distribution.
We design a novel loss function called Overlap loss that minimizes the overlap area between the score distributions of normal and abnormal samples.
- Score: 4.468952886990851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies give more attention to the anomaly detection (AD) methods that
can leverage a handful of labeled anomalies along with abundant unlabeled data.
These existing anomaly-informed AD methods rely on manually predefined score
target(s), e.g., prior constant or margin hyperparameter(s), to realize
discrimination in anomaly scores between normal and abnormal data. However, such methods are vulnerable to anomaly contamination in the unlabeled data and lack adaptability to different data scenarios. In
this paper, we propose to optimize the anomaly scoring function from the view
of score distribution, thus better retaining the diversity and fine-grained information of the input data, especially when the unlabeled data contains anomaly noise, as in more practical AD scenarios. We design a novel loss
function called Overlap loss that minimizes the overlap area between the score
distributions of normal and abnormal samples, which no longer depends on prior
anomaly score targets and thus acquires adaptability to various datasets.
Overlap loss consists of a Score Distribution Estimator and an Overlap Area Calculation, which are introduced to overcome the challenges of estimating arbitrary score distributions and to ensure the boundedness of the training loss. As
a general loss component, Overlap loss can be effectively integrated into
multiple network architectures for constructing AD models. Extensive
experimental results indicate that Overlap loss-based AD models significantly outperform their state-of-the-art counterparts and achieve better performance on different types of anomalies.
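The abstract names two components, a Score Distribution Estimator and an Overlap Area Calculation. The snippet below is a minimal sketch of how such a loss could look, assuming a differentiable Gaussian kernel density estimator for the score distributions and trapezoidal integration of the pointwise minimum of the two densities for the overlap area; the function names, bandwidth, and grid size are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of an Overlap-style loss (illustrative; not the paper's code).
# Assumption: a Gaussian KDE plays the role of the Score Distribution Estimator,
# and trapezoidal integration of min(p_normal, p_anomaly) plays the role of the
# Overlap Area Calculation.
import math
import torch


def kde_density(scores: torch.Tensor, grid: torch.Tensor, bandwidth: float = 0.1) -> torch.Tensor:
    """Differentiable Gaussian KDE of `scores`, evaluated on `grid`."""
    diff = (grid.unsqueeze(1) - scores.unsqueeze(0)) / bandwidth          # (G, N)
    kernel = torch.exp(-0.5 * diff ** 2) / (bandwidth * math.sqrt(2 * math.pi))
    return kernel.mean(dim=1)                                             # (G,)


def overlap_loss(normal_scores: torch.Tensor,
                 anomaly_scores: torch.Tensor,
                 n_grid: int = 256,
                 bandwidth: float = 0.1) -> torch.Tensor:
    """Overlap area between the two estimated score densities (lies in [0, 1])."""
    all_scores = torch.cat([normal_scores, anomaly_scores]).detach()
    lo, hi = all_scores.min() - 3 * bandwidth, all_scores.max() + 3 * bandwidth
    grid = torch.linspace(lo.item(), hi.item(), n_grid, device=normal_scores.device)
    p_normal = kde_density(normal_scores, grid, bandwidth)
    p_anomaly = kde_density(anomaly_scores, grid, bandwidth)
    # Integrate the pointwise minimum of the two densities over the score grid.
    return torch.trapz(torch.minimum(p_normal, p_anomaly), grid)


# Usage: scores = net(x).squeeze(); loss = overlap_loss(scores[y == 0], scores[y == 1])
```

Because the integral of the pointwise minimum of two normalized densities lies in [0, 1], a loss of this form is automatically bounded, which is consistent with the boundedness property the abstract highlights.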
Related papers
- Adaptive Deviation Learning for Visual Anomaly Detection with Data Contamination [20.4008901760593]
We introduce a systematic adaptive method that employs deviation learning to compute anomaly scores end-to-end.
Our proposed method surpasses competing techniques and exhibits both stability and robustness in the presence of data contamination.
arXiv Detail & Related papers (2024-11-14T16:10:15Z)
- MeLIAD: Interpretable Few-Shot Anomaly Detection with Metric Learning and Entropy-based Scoring [2.394081903745099]
We propose MeLIAD, a novel methodology for interpretable anomaly detection.
MeLIAD is based on metric learning and achieves interpretability by design without relying on any prior distribution assumptions of true anomalies.
Experiments on five public benchmark datasets, including quantitative and qualitative evaluation of interpretability, demonstrate that MeLIAD achieves improved anomaly detection and localization performance.
arXiv Detail & Related papers (2024-09-20T16:01:43Z)
- Performance Metric for Multiple Anomaly Score Distributions with Discrete Severity Levels [4.66313002591741]
We propose a weighted sum of the area under the receiver operating characteristic curve (WS-AUROC) for classifying severity levels based on anomaly scores.
We also propose an anomaly detector that achieves clear separation of distributions and outperforms the ablation models on the WS-AUROC and AUROC metrics.
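The abstract does not spell out the exact weighting scheme, so the sketch below is only one plausible reading: it computes an AUROC for each pair of consecutive severity levels (treating the higher level as the positive class) and combines them with user-supplied weights; all names and defaults are assumptions.

```python
# Hedged sketch of a weighted-sum-of-AUROC metric over discrete severity levels.
# The pairing of consecutive levels and the uniform default weights are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score


def ws_auroc(scores: np.ndarray, severity: np.ndarray, weights=None) -> float:
    """Weighted sum of AUROCs between consecutive severity levels (e.g. 0 = normal)."""
    levels = np.sort(np.unique(severity))
    pairs = list(zip(levels[:-1], levels[1:]))
    if weights is None:
        weights = np.full(len(pairs), 1.0 / len(pairs))    # uniform weights by default
    total = 0.0
    for w, (low, high) in zip(weights, pairs):
        mask = np.isin(severity, [low, high])
        # Higher anomaly scores should indicate the higher severity level of each pair.
        total += w * roc_auc_score(severity[mask] == high, scores[mask])
    return total
```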
arXiv Detail & Related papers (2024-08-09T02:17:49Z)
- MSFlow: Multi-Scale Flow-based Framework for Unsupervised Anomaly Detection [124.52227588930543]
Unsupervised anomaly detection (UAD) attracts a lot of research interest and drives widespread applications.
An inconspicuous yet powerful statistical model, the normalizing flow, is well suited to anomaly detection and localization in an unsupervised fashion.
We propose a novel Multi-Scale Flow-based framework dubbed MSFlow composed of asymmetrical parallel flows followed by a fusion flow.
Our MSFlow achieves a new state of the art, with a detection AUROC score of up to 99.7%, a localization AUROC score of 98.8%, and a PRO score of 97.1%.
arXiv Detail & Related papers (2023-08-29T13:38:35Z)
- Few-shot Anomaly Detection in Text with Deviation Learning [13.957106119614213]
We introduce FATE, a framework that learns anomaly scores explicitly in an end-to-end manner using deviation learning.
Our model is optimized to learn the distinct behavior of anomalies by utilizing a multi-head self-attention layer and multiple instance learning approaches.
arXiv Detail & Related papers (2023-08-22T20:40:21Z)
- RoSAS: Deep Semi-Supervised Anomaly Detection with Contamination-Resilient Continuous Supervision [21.393509817509464]
This paper proposes a novel semi-supervised anomaly detection method, which devises contamination-resilient continuous supervisory signals.
Our approach significantly outperforms state-of-the-art competitors by 20%-30% in AUC-PR.
arXiv Detail & Related papers (2023-07-25T04:04:49Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
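For context, deviation-style objectives of this kind are usually described as standardizing the network's anomaly score against reference scores drawn from a prior (commonly a standard normal) and pushing labeled anomalies beyond a margin; the sketch below follows that common description and is not necessarily this paper's exact formulation.

```python
# Sketch of a deviation-style loss (common formulation; hyperparameters are assumptions).
import torch


def deviation_loss(scores: torch.Tensor, labels: torch.Tensor,
                   margin: float = 5.0, n_ref: int = 5000) -> torch.Tensor:
    """scores: (N,) anomaly scores; labels: (N,) float, 0 = unlabeled/normal, 1 = labeled anomaly."""
    # Reference scores from a standard normal prior define the "normal" score scale.
    ref = torch.randn(n_ref, device=scores.device)
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)
    # Pull (presumed) normal samples toward zero deviation; push labeled anomalies past the margin.
    inlier_term = (1.0 - labels) * dev.abs()
    outlier_term = labels * torch.clamp(margin - dev, min=0.0)
    return (inlier_term + outlier_term).mean()
```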
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms the state-of-the-art one-class classification method by 6.3 AUC points and 12.5 average precision points.
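As a rough illustration of refinement-based one-class training (the specific one-class model, drop fraction, and number of rounds below are assumptions, not the paper's method): fit a one-class model, discard the samples it scores as most anomalous, and refit on the cleaned data.

```python
# Hedged sketch of one-class classification with iterative data refinement.
import numpy as np
from sklearn.svm import OneClassSVM


def refine_and_fit(X: np.ndarray, n_rounds: int = 3, drop_frac: float = 0.05) -> OneClassSVM:
    """Repeatedly fit a one-class model and drop the most anomalous-looking samples."""
    data = X.copy()
    model = OneClassSVM(nu=0.1, gamma="scale")
    for _ in range(n_rounds):
        model.fit(data)
        anomaly_scores = -model.decision_function(data)          # higher = more anomalous
        keep = anomaly_scores.argsort()[: int(len(data) * (1 - drop_frac))]
        data = data[keep]                                        # keep the least anomalous samples
    return model
```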
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
- TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks [73.01104041298031]
TadGAN is an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs).
To capture the temporal correlations of time series, we use LSTM Recurrent Neural Networks as base models for Generators and Critics.
To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one.
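The entry notes that several anomaly scoring techniques were compared; one common family of GAN-based scores combines reconstruction error with the critic's output, as in the hedged sketch below (the combination weight and normalization are assumptions).

```python
# Hedged sketch of one GAN-based anomaly scoring variant for time-series windows:
# a convex combination of standardized reconstruction error and standardized
# (negated) critic output. Weighting and normalization are illustrative assumptions.
import numpy as np


def gan_anomaly_score(x: np.ndarray, x_hat: np.ndarray, critic: np.ndarray,
                      alpha: float = 0.5) -> np.ndarray:
    """x, x_hat: (n_windows, seq_len) real and reconstructed windows; critic: (n_windows,)."""
    rec_err = np.abs(x - x_hat).mean(axis=1)             # per-window reconstruction error
    standardize = lambda v: (v - v.mean()) / (v.std() + 1e-8)
    # A low critic output (the critic doubts the window is "real") raises the anomaly score.
    return alpha * standardize(rec_err) + (1 - alpha) * standardize(-critic)
```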
arXiv Detail & Related papers (2020-09-16T15:52:04Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.