Anomaly detection using principles of human perception
- URL: http://arxiv.org/abs/2103.12323v1
- Date: Tue, 23 Mar 2021 05:46:27 GMT
- Title: Anomaly detection using principles of human perception
- Authors: Nassir Mohammad
- Abstract summary: An unsupervised anomaly detection algorithm is developed that is simple, real-time, and parameter-free.
The idea is to treat anomalies as observations that are unexpected with respect to certain groupings formed by the majority of the data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the fields of statistics and unsupervised machine learning, a fundamental
and well-studied problem is anomaly detection. Although anomalies are difficult
to define, many algorithms have been proposed. Underlying these approaches is the
nebulous understanding that anomalies are rare, unusual, or inconsistent with
the majority of the data. The present work gives a philosophical approach to
defining anomalies clearly and to developing an algorithm for their efficient
detection with minimal user intervention. Inspired by the Gestalt School of
Psychology and the Helmholtz principle of human perception, the idea is to
treat anomalies as observations that are unexpected with respect to
certain groupings formed by the majority of the data. Thus, under appropriate
random-variable modelling, anomalies are found directly in a set of data by
assuming that the constituent elements of the observations are distributed
uniformly and independently; an observation is anomalous when the expected
number of occurrences of its elements in a given view is $<1$. Starting
from fundamental principles of human perception, an unsupervised anomaly
detection algorithm is developed that is simple, real-time, and parameter-free.
Experiments suggest it is the prime choice for univariate data, and it shows
promising performance in detecting global anomalies in multivariate data.
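The "expected count $<1$" decision rule in the abstract can be illustrated with a short sketch. This is an illustration of the idea only, not the paper's algorithm: the Gaussian background model, the function name expectation_flags, and the example data are assumptions made for this sketch, whereas the paper itself works from a uniform and independent assumption on the constituent elements of the observations.

import numpy as np
from scipy.stats import norm

def expectation_flags(x):
    # Illustrative sketch only (hypothetical helper, not the paper's method).
    # Rule: flag x_i when the expected number of the n observations that are
    # at least as extreme under the background model falls below 1.
    # Assumed background model for illustration: a Gaussian fitted to the data.
    x = np.asarray(x, dtype=float)
    n = x.size
    mu, sigma = x.mean(), x.std(ddof=1)
    tail = 2.0 * norm.sf(np.abs(x - mu) / sigma)  # two-sided tail probability
    expected = n * tail                           # expected count of equally extreme values
    return expected < 1.0                         # True -> unexpected -> anomaly

# Hypothetical usage: one gross outlier appended to 200 standard-normal samples.
rng = np.random.default_rng(0)
data = np.append(rng.normal(0.0, 1.0, 200), 9.0)
print(np.where(expectation_flags(data))[0])       # the appended value is flagged

Swapping in a different background model only changes the tail probability; the point of the sketch is that the detection threshold follows from the expectation dropping below 1 rather than from a user-chosen parameter.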
Related papers
- Anomaly Detection by Context Contrasting [57.695202846009714]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent advances in self-supervised learning have shown great promise in this regard.
We propose Con$_2$, which learns through context augmentations.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - AGAD: Adversarial Generative Anomaly Detection [12.68966318231776]
Anomaly detection suffers from a lack of anomalies, due to the diversity of abnormalities and the difficulty of obtaining large-scale anomaly data.
We propose Adversarial Generative Anomaly Detection (AGAD), a self-contrast-based anomaly detection paradigm.
Our method generates pseudo-anomaly data for both supervised and semi-supervised anomaly detection scenarios.
arXiv Detail & Related papers (2023-04-09T10:40:02Z) - Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
arXiv Detail & Related papers (2022-06-30T06:00:13Z) - Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
However, the anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z) - Variation and generality in encoding of syntactic anomaly information in sentence embeddings [7.132368785057315]
We explore fine-grained differences in anomaly encoding by designing probing tasks that vary the hierarchical level at which anomalies occur in a sentence.
We test not only models' ability to detect a given anomaly, but also the generality of the detected anomaly signal.
Results suggest that all models encode some information supporting anomaly detection, but detection performance varies between anomalies.
arXiv Detail & Related papers (2021-11-12T10:23:43Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Understanding the Effect of Bias in Deep Anomaly Detection [15.83398707988473]
Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data.
Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples.
In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection.
arXiv Detail & Related papers (2021-05-16T03:55:02Z) - Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.