A Vision Inspired Neural Network for Unsupervised Anomaly Detection in Unordered Data
- URL: http://arxiv.org/abs/2205.06716v1
- Date: Fri, 13 May 2022 15:50:57 GMT
- Title: A Vision Inspired Neural Network for Unsupervised Anomaly Detection in Unordered Data
- Authors: Nassir Mohammad
- Abstract summary: A fundamental problem in the field of unsupervised machine learning is the detection of anomalies corresponding to rare and unusual observations of interest.
The present work seeks to establish important and practical connections between the approach used by the perception algorithm and prior decades of research in neurophysiology and computational neuroscience.
The algorithm is conceptualised as a neuron model which forms the kernel of an unsupervised neural network that learns to signal unexpected observations as anomalies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental problem in the field of unsupervised machine learning is the
detection of anomalies corresponding to rare and unusual observations of
interest, whether for their rejection, accommodation or further investigation.
Anomalies are intuitively understood to be something unusual or inconsistent,
whose occurrence sparks immediate attention. More formally, anomalies are those
observations (under appropriate random variable modelling) whose expectation of
occurrence with respect to a grouping of prior interest is less than one; such
a definition and understanding has been used to
develop the parameter-free perception anomaly detection algorithm. The present
work seeks to establish important and practical connections between the
approach used by the perception algorithm and prior decades of research in
neurophysiology and computational neuroscience; particularly that of
information processing in the retina and visual cortex. The algorithm is
conceptualised as a neuron model which forms the kernel of an unsupervised
neural network that learns to signal unexpected observations as anomalies. Both
the network and neuron display properties observed in biological processes
including: immediate intelligence; parallel processing; redundancy; global
degradation; contrast invariance; parameter-free computation; dynamic
thresholds; and non-linear processing. A robust and accurate model for anomaly
detection in univariate and multivariate data is built using this network as a
concrete application.
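A minimal sketch of the expectation-below-one criterion described in the abstract is given below. It is an illustrative reading, not the published perception algorithm: it assumes the grouping of prior interest can be summarised by a robust centre and scale (median and MAD, chosen here purely for illustration), bounds the tail probability with Chebyshev's inequality, and flags an observation when the expected number of equally extreme observations among n samples falls below one. The multivariate wrapper treats each feature as one such univariate "neuron" and is likewise only an assumed aggregation.

```python
import numpy as np

def expected_count_anomalies(x):
    """Flag observations whose expected number of occurrences within the
    sample, bounded via Chebyshev's inequality, is less than one.

    Chebyshev: P(|X - centre| >= k * scale) <= 1 / k**2, so among n
    observations the expected count of deviations at least that large is
    at most n / k**2.  A point is flagged when this bound drops below
    one, i.e. when k**2 exceeds n (a dynamic, parameter-free threshold).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    centre = np.median(x)                              # robust "majority" location
    scale = 1.4826 * np.median(np.abs(x - centre))     # MAD-based robust scale
    if scale == 0.0:
        return x != centre                             # degenerate grouping: any deviation is unexpected
    k = np.abs(x - centre) / scale                     # deviation in robust scale units
    return k ** 2 > n                                  # expected count n / k**2 < 1

def multivariate_anomalies(X):
    """Treat each feature as one univariate 'neuron' and signal an anomaly
    whenever any neuron fires (one possible aggregation, assumed here)."""
    X = np.asarray(X, dtype=float)
    flags = [expected_count_anomalies(X[:, j]) for j in range(X.shape[1])]
    return np.column_stack(flags).any(axis=1)

# Example: a single gross outlier among ordinary readings
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 25.0, 10.1, 9.9, 10.0])
print(expected_count_anomalies(data))   # only the 25.0 is flagged
```

Under this framing the flagging threshold grows with the square root of the sample size in units of robust scale, which is one way the "dynamic thresholds" and "parameter-free computation" properties mentioned in the abstract can arise.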
Related papers
- Explainable Online Unsupervised Anomaly Detection for Cyber-Physical Systems via Causal Discovery from Time Series [1.223779595809275]
State-of-the-art approaches based on deep learning via neural networks achieve outstanding performance at anomaly recognition.
We show that our method has higher training efficiency and outperforms state-of-the-art neural architectures in accuracy.
arXiv Detail & Related papers (2024-04-15T15:42:12Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Graph Neural Network-Based Anomaly Detection in Multivariate Time Series [17.414474298706416]
We develop a new way to detect anomalies in high-dimensional time series data.
Our approach combines structure learning with graph neural networks.
We show that our method detects anomalies more accurately than baseline approaches (see the sketch after this list).
arXiv Detail & Related papers (2021-06-13T09:07:30Z)
- A Survey on Anomaly Detection for Technical Systems using LSTM Networks [0.0]
Anomalies represent deviations from the intended system operation and can lead to decreased efficiency as well as partial or complete system failure.
In this article, a survey on state-of-the-art anomaly detection using deep neural and especially long short-term memory networks is conducted.
The investigated approaches are evaluated based on the application scenario, data and anomaly types as well as further metrics.
arXiv Detail & Related papers (2021-05-28T13:24:40Z)
- Consistency of mechanistic causal discovery in continuous-time using Neural ODEs [85.7910042199734]
We consider causal discovery in continuous-time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
arXiv Detail & Related papers (2021-05-06T08:48:02Z)
- Anomaly detection using principles of human perception [0.0]
An unsupervised anomaly detection algorithm is developed that is simple, real-time and parameter-free.
The idea is to assume anomalies are observations that are unexpected to occur with respect to certain groupings made by the majority of the data.
arXiv Detail & Related papers (2021-03-23T05:46:27Z)
- Towards Interaction Detection Using Topological Analysis on Neural Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Bayesian Neural Networks [0.0]
We show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors.
We will also describe how both of these methods have substantial pitfalls when put into practice.
arXiv Detail & Related papers (2020-06-02T09:43:00Z)
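As referenced in the graph-neural-network entry above, the sketch below illustrates the general recipe of that line of work: learn a dependency structure over the sensors, predict each sensor from its neighbours, and score timestamps by normalised prediction error. It is a simplified stand-in under stated assumptions, not that paper's method: correlation-based neighbour selection replaces learned graph structure, and per-sensor least squares replaces the graph neural network.

```python
import numpy as np

def learn_graph(train, k=2):
    """Pick, for each sensor, the k most correlated other sensors
    (a crude stand-in for learned graph structure)."""
    corr = np.abs(np.corrcoef(train.T))
    np.fill_diagonal(corr, -np.inf)
    return [np.argsort(corr[i])[-k:] for i in range(train.shape[1])]

def fit_predictors(train, neighbours):
    """Least-squares weights predicting each sensor from its neighbours."""
    return [np.linalg.lstsq(train[:, nb], train[:, i], rcond=None)[0]
            for i, nb in enumerate(neighbours)]

def anomaly_scores(train, test, k=2):
    """Score each test timestamp by its largest normalised prediction error."""
    neighbours = learn_graph(train, k)
    weights = fit_predictors(train, neighbours)

    def errors(X):
        return np.column_stack([np.abs(X[:, nb] @ w - X[:, i])
                                for i, (nb, w) in enumerate(zip(neighbours, weights))])

    train_err = errors(train)
    mu, sigma = train_err.mean(axis=0), train_err.std(axis=0) + 1e-9
    return ((errors(test) - mu) / sigma).max(axis=1)   # high score = anomalous

# Example: three correlated sensors with one injected fault
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
train = np.hstack([base, 2 * base, -base]) + 0.05 * rng.normal(size=(200, 3))
test = train[:20].copy()
test[10, 1] += 5.0                                  # break the learned relationship at t=10
print(np.argmax(anomaly_scores(train, test)))       # should print 10, the faulty timestamp
```

Aggregating by the maximum z-scored error per timestamp is just one common choice; the cited paper's exact graph learning and scoring differ.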
This list is automatically generated from the titles and abstracts of the papers on this site.