Quadratic Neuron-empowered Heterogeneous Autoencoder for Unsupervised Anomaly Detection
- URL: http://arxiv.org/abs/2204.01707v2
- Date: Thu, 25 Apr 2024 08:24:36 GMT
- Title: Quadratic Neuron-empowered Heterogeneous Autoencoder for Unsupervised Anomaly Detection
- Authors: Jing-Xiao Liao, Bo-Jian Hou, Hang-Cheng Dong, Hao Zhang, Xiaoge Zhang, Jinwei Sun, Shiping Zhang, Feng-Lei Fan,
- Abstract summary: A novel type of neuron is proposed that replaces the inner product in the conventional neuron with a simplified quadratic function.
To the best of our knowledge, the resulting model is the first heterogeneous autoencoder built from different types of neurons.
Experiments show that heterogeneous autoencoders perform competitively with other state-of-the-art models.
- Score: 8.271989261355785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by the complexity and diversity of biological neurons, a quadratic neuron is proposed that replaces the inner product in the conventional neuron with a simplified quadratic function. Employing such a novel type of neuron offers a new perspective on developing deep learning. When analyzing quadratic neurons, we find that there exists a function that a heterogeneous network can approximate well with a polynomial number of neurons, whereas a purely conventional or purely quadratic network needs an exponential number of neurons to achieve the same level of error. Encouraged by this theoretical result on heterogeneous networks, we directly integrate conventional and quadratic neurons in an autoencoder to make a new type of heterogeneous autoencoder. To the best of our knowledge, it is the first heterogeneous autoencoder made of different types of neurons. Next, we apply the proposed heterogeneous autoencoder to unsupervised anomaly detection for tabular data and bearing fault signals. Anomaly detection faces difficulties such as data unknownness, anomaly feature heterogeneity, and feature unnoticeability, which makes it well suited to the proposed heterogeneous autoencoder: its high feature representation ability can characterize a variety of anomalous data (heterogeneity), discriminate anomalies from normal samples (unnoticeability), and accurately learn the distribution of normal samples (unknownness). Experiments show that heterogeneous autoencoders perform competitively with other state-of-the-art models.
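For concreteness, below is a minimal PyTorch sketch of one common quadratic-neuron formulation from this line of work, y = (W_r x + b_r) ⊙ (W_g x + b_g) + W_b (x ⊙ x) + c, wired into a small autoencoder that mixes conventional and quadratic layers. The layer sizes, activation choices, and the `HeterogeneousAE` and `anomaly_score` names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class QuadraticLayer(nn.Module):
    """One common quadratic-neuron formulation: the inner product of a
    conventional neuron is replaced by a simplified quadratic function
    y = (W_r x + b_r) * (W_g x + b_g) + W_b (x * x) + c."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear_r = nn.Linear(in_features, out_features)
        self.linear_g = nn.Linear(in_features, out_features)
        self.linear_b = nn.Linear(in_features, out_features)  # its bias plays the role of c

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear_r(x) * self.linear_g(x) + self.linear_b(x * x)

class HeterogeneousAE(nn.Module):
    """Illustrative heterogeneous autoencoder mixing conventional
    (nn.Linear) and quadratic layers; sizes are assumptions."""
    def __init__(self, in_dim: int = 32, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 16), nn.ReLU(),           # conventional neurons
            QuadraticLayer(16, latent_dim), nn.ReLU(),  # quadratic neurons
        )
        self.decoder = nn.Sequential(
            QuadraticLayer(latent_dim, 16), nn.ReLU(),
            nn.Linear(16, in_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def anomaly_score(model: HeterogeneousAE, x: torch.Tensor) -> torch.Tensor:
    # Reconstruction error as anomaly score: the autoencoder is trained
    # on normal samples only, so anomalies reconstruct poorly.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```

In practice the autoencoder would be trained with a reconstruction loss on normal samples only, and test points whose score exceeds a chosen threshold would be flagged as anomalies.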
Related papers
- Identifying Interpretable Visual Features in Artificial and Biological Neural Systems [3.604033202771937]
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.
Many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features.
We propose an automated method for quantifying visual interpretability and an approach for finding meaningful directions in network activation space.
arXiv Detail & Related papers (2023-10-17T17:41:28Z)
- Neuroevolutionary algorithms driven by neuron coverage metrics for semi-supervised classification [60.60571130467197]
In some machine learning applications, labeled instances for supervised classification are scarce while unlabeled instances are abundant.
We introduce neuroevolutionary approaches that exploit unlabeled instances by using neuron coverage metrics computed on the neural network architecture encoded by each candidate solution.
arXiv Detail & Related papers (2023-03-05T23:38:44Z)
- A Vision Inspired Neural Network for Unsupervised Anomaly Detection in Unordered Data [0.0]
A fundamental problem in the field of unsupervised machine learning is the detection of anomalies corresponding to rare and unusual observations of interest.
The present work seeks to establish important and practical connections between the approach used by the perception algorithm and prior decades of research in neurophysiology and computational neuroscience.
The algorithm is conceptualised as a neuron model which forms the kernel of an unsupervised neural network that learns to signal unexpected observations as anomalies.
arXiv Detail & Related papers (2022-05-13T15:50:57Z)
- Fault-Tolerant Neural Networks from Biological Error Correction Codes [45.82537918529782]
In the grid cells of the mammalian cortex, analog error correction codes have been observed to protect states against neural spiking noise.
Here, we use these biological error correction codes to develop a universal fault-tolerant neural network that achieves reliable computation if the faultiness of each neuron lies below a sharp threshold.
The discovery of a phase transition from faulty to fault-tolerant neural computation suggests a mechanism for reliable computation in the cortex.
arXiv Detail & Related papers (2022-02-25T18:55:46Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Two-argument activation functions learn soft XOR operations like cortical neurons [6.88204255655161]
We learn canonical activation functions with two input arguments, analogous to basal and apical dendrites.
Remarkably, the resultant nonlinearities often produce soft XOR functions.
Networks with these nonlinearities learn faster and perform better than conventional ReLU nonlinearities with matched parameter counts.
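As a rough illustration of the idea, a soft XOR of two gated arguments can be written as below; this hand-written function is a stand-in, not the learned activation from the paper.

```python
import torch

def soft_xor(basal: torch.Tensor, apical: torch.Tensor) -> torch.Tensor:
    # Output is high when exactly one of the two arguments is strongly
    # active, mirroring the XOR-like two-argument nonlinearities above.
    # Illustrative stand-in, not the paper's learned activation.
    p, q = torch.sigmoid(basal), torch.sigmoid(apical)
    return p + q - 2.0 * p * q

# e.g. soft_xor(torch.tensor(5.0), torch.tensor(-5.0)) ≈ 0.99,
# while soft_xor(torch.tensor(5.0), torch.tensor(5.0)) ≈ 0.01.
```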
arXiv Detail & Related papers (2021-10-13T17:06:20Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can render two classes of data separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In line with that theory, artificial neurons in our generative model predict what neighboring neurons will do and adjust their parameters based on how well the predictions match reality.
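A minimal sketch of the error-driven update this describes (the predictive-coding flavor only; the layer sizes, learning rate, and single-layer setup are assumptions, not the paper's model):

```python
import torch

# A higher layer predicts the lower layer's activity; weights are nudged
# to reduce the prediction error, echoing the description above.
torch.manual_seed(0)
W = torch.randn(8, 4, requires_grad=True)  # predicts 8 units from 4
lower = torch.randn(8)                     # observed lower-layer activity
higher = torch.randn(4)                    # higher-layer state
for _ in range(100):
    pred = W @ higher                      # top-down prediction
    err = ((lower - pred) ** 2).mean()     # prediction error
    err.backward()
    with torch.no_grad():
        W -= 0.1 * W.grad                  # local error-driven update
        W.grad.zero_()
```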
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Manifold GPLVMs for discovering non-Euclidean latent structure in neural data [5.949779668853555]
A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables.
Here, we propose a new probabilistic latent variable model to simultaneously identify the latent state and the way each neuron contributes to its representation.
arXiv Detail & Related papers (2020-06-12T19:08:54Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
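To make the XOR claim above concrete: a single linear neuron followed by a non-monotonic activation can represent XOR. ADA's exact form is not reproduced here; a Gaussian bump serves as an illustrative non-monotonic stand-in.

```python
import torch

# Non-monotonic stand-in activation (not ADA's actual formula).
def bump(s: torch.Tensor) -> torch.Tensor:
    return torch.exp(-s ** 2)

x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
w, b = torch.tensor([1., 1.]), torch.tensor(-1.)
y = bump(x @ w + b)      # pre-activations: -1, 0, 0, 1
print((y > 0.5).int())   # tensor([0, 1, 1, 0]) -- XOR of the two inputs
```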
arXiv Detail & Related papers (2020-02-02T21:09:39Z)