Stochastic resonance neurons in artificial neural networks
- URL: http://arxiv.org/abs/2205.10122v1
- Date: Fri, 6 May 2022 18:42:36 GMT
- Title: Stochastic resonance neurons in artificial neural networks
- Authors: Egor Manuylovich, Diego Argüello Ron, Morteza Kamalian-Kopae, Sergei
Turitsyn
- Abstract summary: We propose a new type of neural network that uses stochastic resonances as an inherent part of the architecture.
We show that such a neural network is more robust against the impact of noise.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many modern applications of artificial neural networks involve a large
number of layers, making traditional digital implementations increasingly complex.
Optical neural networks offer parallel processing at high bandwidth, but have
the challenge of noise accumulation. We propose here a new type of neural
network that uses stochastic resonances as an inherent part of the architecture,
and we demonstrate that the number of neurons required for a given performance
accuracy can be reduced significantly. We also show that such a neural network
is more robust against the impact of noise.
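The mechanism can be illustrated with a minimal toy model (all names and parameter values below are illustrative, not taken from the paper): a hard-threshold neuron receives a sub-threshold sine wave that never fires the neuron on its own; a moderate amount of added noise carries the signal across the threshold, while too much noise buries it.

```python
import numpy as np

# Toy stochastic-resonance demo: a sub-threshold signal passes through a
# hard-threshold neuron only when helped by an intermediate amount of noise.
rng = np.random.default_rng(0)

t = np.linspace(0.0, 10.0, 2000)
signal = 0.8 * np.sin(2 * np.pi * t)   # peak 0.8, below the threshold of 1.0

def threshold_neuron(x, theta=1.0):
    # Fires (1.0) only when the input exceeds the threshold.
    return (x > theta).astype(float)

def output_correlation(noise_std):
    # Pearson correlation between the weak input and the spiking output.
    out = threshold_neuron(signal + rng.normal(0.0, noise_std, signal.shape))
    return np.corrcoef(signal, out)[0, 1]

corr_low = output_correlation(0.15)   # too little noise: rare, sparse firing
corr_mid = output_correlation(0.4)    # moderate noise: signal gets through
corr_high = output_correlation(3.0)   # too much noise: firing is nearly random
```

With this setup, an intermediate noise level typically yields the highest input-output correlation, which is the signature of stochastic resonance.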
Related papers
- Impact of white noise in artificial neural networks trained for classification: performance and noise mitigation strategies [0.0]
We consider how additive and multiplicative Gaussian white noise at the neuronal level affects the accuracy of the network.
We adapt several noise reduction techniques to the essential setting of classification tasks.
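As a sketch of what neuron-level noise injection looks like (the layer below is a generic illustration, not the paper's model; `noisy_layer` and its parameters are hypothetical), additive noise perturbs the pre-activation directly while multiplicative noise rescales it:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_layer(x, W, b, sigma_add=0.0, sigma_mul=0.0):
    # Generic fully connected layer with neuron-level Gaussian white noise:
    # multiplicative noise scales the pre-activation by (1 + xi),
    # additive noise shifts it by an independent Gaussian term.
    z = x @ W + b
    z = z * (1.0 + rng.normal(0.0, sigma_mul, z.shape))
    z = z + rng.normal(0.0, sigma_add, z.shape)
    return np.tanh(z)

x = rng.normal(size=(4, 8))                 # batch of 4 inputs
W = rng.normal(size=(8, 3)) / np.sqrt(8)    # 8 -> 3 weights
b = np.zeros(3)

clean = noisy_layer(x, W, b)                # noiseless reference
noisy = noisy_layer(x, W, b, sigma_add=0.1, sigma_mul=0.05)
```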
arXiv Detail & Related papers (2024-11-07T01:21:12Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture compatible and scalable with deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Expressivity of Spiking Neural Networks [15.181458163440634]
We study the capabilities of spiking neural networks where information is encoded in the firing time of neurons.
In contrast to ReLU networks, we prove that spiking neural networks can realize both continuous and discontinuous functions.
arXiv Detail & Related papers (2023-08-16T08:45:53Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Noise mitigation strategies in physical feedforward neural networks [0.0]
Physical neural networks are promising candidates for next generation artificial intelligence hardware.
We introduce connectivity topologies, ghost neurons as well as pooling as noise mitigation strategies.
We demonstrate the effectiveness of the combined methods based on a fully trained neural network classifying the MNIST handwritten digits.
arXiv Detail & Related papers (2022-04-20T13:51:46Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a $\textit{spectral bias}$ towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Understanding and mitigating noise in trained deep neural networks [0.0]
We study the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
We find that noise accumulation is generally bounded, and adding further network layers does not worsen the signal-to-noise ratio beyond a limit.
We identify criteria allowing engineers to design noise-resilient novel neural network hardware.
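The bounded-accumulation claim can be probed with a small simulation (a generic tanh network with additive noise in every layer; the function name and parameter values are illustrative, not from the paper): propagate the same input through a noiseless and a noisy copy of the network and track the signal-to-noise ratio with depth.

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_per_depth(depth=12, width=64, sigma=0.05, batch=256):
    # Run a noiseless and a noisy twin of the same random tanh network
    # and record the signal-to-noise ratio after each layer.
    x_clean = rng.normal(size=(batch, width))
    x_noisy = x_clean.copy()
    snrs = []
    for _ in range(depth):
        W = rng.normal(size=(width, width)) * np.sqrt(2.0 / width)
        x_clean = np.tanh(x_clean @ W)
        x_noisy = np.tanh(x_noisy @ W + rng.normal(0.0, sigma, x_noisy.shape))
        noise_power = np.mean((x_noisy - x_clean) ** 2)
        snrs.append(np.mean(x_clean ** 2) / noise_power)
    return snrs

# SNR tends to drop over the first layers and then stay bounded,
# because the saturating tanh nonlinearity limits noise growth.
snrs = snr_per_depth()
```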
arXiv Detail & Related papers (2021-03-12T17:16:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.