Anomaly component analysis
- URL: http://arxiv.org/abs/2312.16139v1
- Date: Tue, 26 Dec 2023 17:57:46 GMT
- Title: Anomaly component analysis
- Authors: Romain Valla, Pavlo Mozharovskyi, Florence d'Alché-Buc
- Abstract summary: We introduce a new statistical tool dedicated to the exploratory analysis of abnormal observations using data depth as a score.
Anomaly component analysis (ACA for short) is a method that searches for a low-dimensional data representation that best visualises and explains anomalies.
- Score: 3.046315755726937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: At the crossroads of machine learning and data analysis, anomaly detection aims
at identifying observations that exhibit abnormal behaviour. Be it measurement
errors, disease development, severe weather, defective items in production,
failed equipment, financial fraud or crisis events, their timely
identification and isolation constitute an important task in almost any area of
industry and science. While a substantial body of literature is devoted to the
detection of anomalies, little attention is paid to their explanation. This is
mostly due to the intrinsically unsupervised nature of the task and the
non-robustness of exploratory methods such as principal component analysis
(PCA).
We introduce a new statistical tool dedicated to the exploratory analysis of
abnormal observations using data depth as a score. Anomaly component analysis
(ACA for short) is a method that searches for a low-dimensional data
representation that best visualises and explains anomalies. This low-dimensional
representation not only makes it possible to distinguish groups of anomalies
better than state-of-the-art methods, but also provides an explanation for
anomalies that is linear in the variables and thus easily interpretable. In a
comparative simulation and real-data study, ACA also proves advantageous for
anomaly analysis with respect to methods from the literature.
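The abstract describes ACA only at a high level, so the following Python sketch illustrates the underlying ingredient rather than the authors' actual algorithm: observations are scored with a projection-based notion of data depth, and the unit direction along which a flagged point is most outlying serves as a crude, linear and hence interpretable "anomaly axis". The function names, the Monte Carlo approximation of the depth, and the median/MAD standardization are illustrative assumptions.

```python
import numpy as np

def projection_outlyingness(x, X, directions):
    """Outlyingness of point x w.r.t. sample X along a set of directions,
    using robust median/MAD standardization; projection depth is
    1 / (1 + max of this over all directions)."""
    proj_X = X @ directions.T                          # (n, k) sample projections
    med = np.median(proj_X, axis=0)
    mad = np.median(np.abs(proj_X - med), axis=0) + 1e-12
    return np.abs(x @ directions.T - med) / mad        # (k,) per-direction outlyingness

def anomaly_component(x, X, n_dirs=2000, seed=0):
    """Unit direction along which x is most outlying: a linear, directly
    interpretable axis for inspecting why x looks anomalous."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    out = projection_outlyingness(x, X, dirs)
    return dirs[np.argmax(out)], 1.0 / (1.0 + out.max())   # direction, depth value

# toy example: Gaussian bulk, one point shifted along the first coordinate
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
direction, depth = anomaly_component(np.array([6.0, 0, 0, 0, 0]), X)
print(np.round(direction, 2), round(depth, 3))         # direction loads mostly on variable 0
```

The paper's method searches for an entire low-dimensional representation that best visualises and explains groups of anomalies; this single-direction heuristic only hints at that idea.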
Related papers
- MeLIAD: Interpretable Few-Shot Anomaly Detection with Metric Learning and Entropy-based Scoring [2.394081903745099]
We propose MeLIAD, a novel methodology for interpretable anomaly detection.
MeLIAD is based on metric learning and achieves interpretability by design without relying on any prior distribution assumptions of true anomalies.
Experiments on five public benchmark datasets, including quantitative and qualitative evaluation of interpretability, demonstrate that MeLIAD achieves improved anomaly detection and localization performance.
arXiv Detail & Related papers (2024-09-20T16:01:43Z) - AnomalyDiffusion: Few-Shot Anomaly Image Generation with Diffusion Model [59.08735812631131]
Anomaly inspection plays an important role in industrial manufacture.
Existing anomaly inspection methods are limited in their performance due to insufficient anomaly data.
We propose AnomalyDiffusion, a novel diffusion-based few-shot anomaly generation model.
arXiv Detail & Related papers (2023-12-10T05:13:40Z) - Leveraging healthy population variability in deep learning unsupervised anomaly detection in brain FDG PET [0.0]
Unsupervised anomaly detection is a popular approach for the analysis of neuroimaging data.
It relies on building a subject-specific model of healthy appearance to which a subject's image can be compared to detect anomalies.
In the literature, it is common for anomaly detection to rely on analysing the residual image between the subject's image and its pseudo-healthy reconstruction.
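The residual-image pipeline summarized in this entry can be sketched as follows. `reconstruct` stands in for whatever pseudo-healthy reconstruction model (for example, an autoencoder trained on healthy scans) is used in practice, and the robust thresholding rule is an illustrative assumption rather than the paper's.

```python
import numpy as np

def anomaly_map(image, reconstruct, threshold=None):
    """Residual-based anomaly map: compare an image to its pseudo-healthy
    reconstruction and flag voxels with large residuals."""
    recon = reconstruct(image)
    residual = np.abs(image - recon)
    if threshold is None:
        # illustrative per-image threshold: median + 3 * (scaled MAD) of the residuals
        med = np.median(residual)
        mad = np.median(np.abs(residual - med))
        threshold = med + 3.0 * 1.4826 * mad
    return residual, residual > threshold

# dummy usage: a flat "reconstruction" makes the synthetic lesion stand out
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
img[20:25, 20:25] += 5.0                              # synthetic "lesion"
residual, mask = anomaly_map(img, lambda x: np.full_like(x, x.mean()))
print(int(mask.sum()), "voxels flagged")              # lesion plus a few noise voxels
```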
arXiv Detail & Related papers (2023-11-20T10:28:10Z) - Rare Yet Popular: Evidence and Implications from Labeled Datasets for Network Anomaly Detection [9.717823994163277]
We present a systematic analysis of available public and private ground truth for anomaly detection in the context of network environments.
Our analysis reveals that, while anomalies are, by definition, temporally rare events, their spatial characterization clearly shows that some types of anomalies are significantly more popular than others.
arXiv Detail & Related papers (2022-11-18T10:14:03Z) - Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
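As a hedged illustration of the two-step recipe in this entry, the sketch below scores instances by how strongly they violate per-variable causal mechanisms. The causal graph is assumed given here, whereas the paper learns it from data, and the linear mechanisms and the scoring rule are simplifying assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def causal_residual_scores(X, parents):
    """Score each instance by how badly it violates the local causal
    mechanisms. parents[j] lists the column indices assumed to cause
    column j (the paper learns this structure; here it is given)."""
    n, d = X.shape
    scores = np.zeros(n)
    for j, pa in enumerate(parents):
        if not pa:                                   # root variable: deviation from its mean
            resid = X[:, j] - X[:, j].mean()
        else:                                        # linear mechanism fitted on the parents
            model = LinearRegression().fit(X[:, pa], X[:, j])
            resid = X[:, j] - model.predict(X[:, pa])
        scores += (resid / (resid.std() + 1e-12)) ** 2   # standardized squared residual
    return scores

# toy chain x0 -> x1 -> x2 with one instance breaking the x1 -> x2 mechanism
rng = np.random.default_rng(0)
x0 = rng.normal(size=300)
x1 = 2.0 * x0 + 0.1 * rng.normal(size=300)
x2 = -x1 + 0.1 * rng.normal(size=300)
X = np.column_stack([x0, x1, x2])
X[0, 2] += 5.0                                       # violate the local causal mechanism
print(np.argmax(causal_residual_scores(X, parents=[[], [0], [1]])))   # -> 0
```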
arXiv Detail & Related papers (2022-06-30T06:00:13Z) - Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
However, the anomalies seen during training often do not cover every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
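This entry describes learning discriminative normality from a few labeled anomalies and a prior probability. A common way to instantiate this idea is a deviation loss that pulls scores of unlabeled points toward a Gaussian reference and pushes scores of labeled anomalies several reference standard deviations above it. The sketch below follows that idea; the margin, reference size and toy scorer are illustrative defaults rather than details taken from this abstract.

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """Deviation-style loss: unlabeled/normal scores are pulled toward a
    Gaussian prior reference, labeled-anomaly scores are pushed at least
    `margin` reference standard deviations above it."""
    ref = torch.randn(n_ref, device=scores.device)        # prior reference scores
    dev = (scores - ref.mean()) / (ref.std() + 1e-12)     # standardized deviation
    inlier_term = (1 - labels) * dev.abs()                # keep unlabeled near the reference
    anomaly_term = labels * torch.clamp(margin - dev, min=0.0)
    return (inlier_term + anomaly_term).mean()

# usage with a toy scoring network
scorer = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU(),
                             torch.nn.Linear(20, 1))
x = torch.randn(32, 10)
y = (torch.rand(32) < 0.1).float()                        # a few labeled anomalies
loss = deviation_loss(scorer(x).squeeze(-1), y)
loss.backward()
```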
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z) - Anomaly detection using principles of human perception [0.0]
An unsupervised anomaly detection algorithm is developed that is simple, real-time and parameter-free.
The idea is to assume anomalies are observations that are unexpected to occur with respect to certain groupings made by the majority of the data.
arXiv Detail & Related papers (2021-03-23T05:46:27Z) - Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
The Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores; a schematic pair-construction sketch follows this entry.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
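The pairwise-relation idea in the PReNet entry can be sketched as a pair-construction step: anomaly-anomaly, anomaly-unlabeled and unlabeled-unlabeled pairs receive ordinal relation targets, and a regressor trained on the concatenated pairs scores a query point at test time by pairing it with sampled training points. The target values and sampling scheme below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def sample_relation_pairs(X_anom, X_unlab, n_pairs, seed=0):
    """Build training pairs with ordinal relation targets: anomaly-anomaly
    pairs get the strongest target, anomaly-unlabeled an intermediate one,
    unlabeled-unlabeled the weakest (values here are illustrative)."""
    rng = np.random.default_rng(seed)
    pairs, targets = [], []
    for _ in range(n_pairs):
        kind = rng.integers(3)
        if kind == 0:                                  # anomaly-anomaly pair
            a, b = X_anom[rng.integers(len(X_anom), size=2)]
            t = 1.0
        elif kind == 1:                                # anomaly-unlabeled pair
            a = X_anom[rng.integers(len(X_anom))]
            b = X_unlab[rng.integers(len(X_unlab))]
            t = 0.5
        else:                                          # unlabeled-unlabeled pair
            a, b = X_unlab[rng.integers(len(X_unlab), size=2)]
            t = 0.0
        pairs.append(np.concatenate([a, b]))
        targets.append(t)
    return np.array(pairs), np.array(targets)

# toy usage: 5 labeled anomalies, 200 unlabeled points in 4 dimensions
rng = np.random.default_rng(1)
P, T = sample_relation_pairs(rng.normal(3.0, 1.0, size=(5, 4)),
                             rng.normal(size=(200, 4)), n_pairs=1000)
print(P.shape, T.shape)                                # (1000, 8) (1000,)
```

A regression model fitted on (P, T) can then score a new point by averaging its predicted relation values when paired with sampled anomalies and unlabeled points.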