Impact of white Gaussian internal noise on analog echo-state neural networks
- URL: http://arxiv.org/abs/2405.07670v1
- Date: Mon, 13 May 2024 11:59:20 GMT
- Title: Impact of white Gaussian internal noise on analog echo-state neural networks
- Authors: Nadezhda Semenova
- Abstract summary: This paper studies the influence of noise on the functioning of recurrent networks using the example of trained echo state networks (ESNs).
We show that the propagation of noise in the reservoir is mainly controlled by the statistical properties of the output connection matrix.
We also show that there are conditions under which even noise with an intensity of $10^{-20}$ is already enough to completely lose the useful signal.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, a growing number of works have been devoted to the analog (hardware) implementation of artificial neural networks, in which neurons and the connections between them are based not on computer calculations but on physical principles. Such networks offer improved energy efficiency and, in some cases, scalability, but may be susceptible to internal noise. This paper studies the influence of noise on the functioning of recurrent networks using the example of trained echo state networks (ESNs). The most common reservoir connection matrices were chosen as the ESN topologies: random uniform and band matrices with different connectivity. White Gaussian noise was chosen as the disturbance; depending on the way it is introduced, it is additive or multiplicative, as well as correlated or uncorrelated. We show that the propagation of noise in the reservoir is mainly controlled by the statistical properties of the output connection matrix, namely its mean and mean square. Depending on these values, either correlated or uncorrelated noise accumulates more strongly in the network. We also show that there are conditions under which even noise with an intensity of $10^{-20}$ is already enough to completely lose the useful signal. Finally, we show which types of noise are most critical for networks with different activation functions (hyperbolic tangent, sigmoid, and linear) and whether the network is self-closed.
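The following sketch illustrates the setup described in the abstract: a small ESN whose neuron states are perturbed by additive or multiplicative, correlated or uncorrelated white Gaussian noise, read out through the output matrix $W^{out}$. It is an illustration only; the reservoir size, the matrices, the noise intensities, and the `esn_step` helper are assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and matrices (assumptions, not taken from the paper).
N = 100                                          # reservoir size
W = rng.uniform(-1.0, 1.0, (N, N)) / np.sqrt(N)  # random uniform reservoir matrix
W_in = rng.uniform(-1.0, 1.0, (N, 1))            # input weights
W_out = rng.uniform(-1.0, 1.0, (1, N))           # readout (would normally be trained)

def esn_step(x, u, sigma_add=0.0, sigma_mul=0.0, correlated=False):
    """One reservoir update with internal white Gaussian noise.

    sigma_add / sigma_mul are the standard deviations of the additive and
    multiplicative noise. correlated=True injects one shared noise value into
    all neurons; correlated=False draws an independent value per neuron.
    """
    a = np.tanh(W @ x + W_in @ u)      # hyperbolic-tangent activation
    size = None if correlated else N   # scalar draw -> fully correlated noise
    xi_add = sigma_add * rng.standard_normal(size)
    xi_mul = sigma_mul * rng.standard_normal(size)
    return a * (1.0 + xi_mul) + xi_add

# Drive the reservoir with a constant toy input and read out the noisy state.
x = np.zeros(N)
u = np.array([0.5])
for _ in range(200):
    x = esn_step(x, u, sigma_add=1e-3, correlated=False)
y = W_out @ x                          # the noise reaches the output through W_out
print(y.item())
```

A standard variance-propagation argument (consistent with the abstract, though not necessarily the paper's exact expressions) shows why the statistics of $W^{out}$ matter: uncorrelated additive noise of variance $\sigma^2$ contributes roughly $\sigma^2 \sum_i (w^{out}_i)^2 = N \sigma^2 \langle (w^{out})^2 \rangle$ to the output variance, while fully correlated noise contributes $\sigma^2 \big(\sum_i w^{out}_i\big)^2 = N^2 \sigma^2 \langle w^{out} \rangle^2$, so the mean and the mean square of the output connection matrix determine which type of noise accumulates.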
Related papers
- Impact of internal noise on convolutional neural networks [0.0]
We study the impact of noise on a simplified trained convolutional network.
The propagation of uncorrelated noise depends on the statistical properties of the connection matrix.
An analysis of the noise level in the network's output signal shows a strong correlation with the results of numerical simulations.
arXiv Detail & Related papers (2025-05-10T11:49:37Z) - Internal noise in hardware deep and recurrent neural networks helps with learning [0.0]
Internal noise during training affects the final performance of recurrent and deep neural networks.
In most cases, both deep and echo state networks benefit from internal noise during training, as it enhances their resilience to noise.
arXiv Detail & Related papers (2025-04-18T16:26:46Z) - Impact of white noise in artificial neural networks trained for classification: performance and noise mitigation strategies [0.0]
We consider how additive and multiplicative Gaussian white noise on the neuronal level can affect the accuracy of the network.
We adapt several noise reduction techniques to the essential setting of classification tasks.
arXiv Detail & Related papers (2024-11-07T01:21:12Z) - Using Convolutional Neural Networks for Denoising and Deblending of Marine Seismic Data [1.6411821807321063]
We are using deep convolutional neural networks (CNNs) to remove seismic interference noise and to deblend seismic data.
Deblending in the common channel domain with the use of a CNN yields relatively good results and is an improvement compared to the shot domain.
arXiv Detail & Related papers (2024-09-13T07:35:30Z) - Non Commutative Convolutional Signal Models in Neural Networks: Stability to Small Deformations [111.27636893711055]
We study the filtering and stability properties of non commutative convolutional filters.
Our results have direct implications for group neural networks, multigraph neural networks and quaternion neural networks.
arXiv Detail & Related papers (2023-10-05T20:27:22Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Noise impact on recurrent neural network with linear activation function [0.0]
We study the peculiarities of internal noise propagation in a recurrent ANN using the example of an echo state network (ESN).
Here we consider the case when the artificial neurons have a linear activation function with different slope coefficients.
We find that the variance and signal-to-noise ratio of the ESN output signal behave similarly to those of a single neuron.
arXiv Detail & Related papers (2023-03-23T13:43:05Z) - Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy [52.40331776572531]
We show that learning depth-$3$ ReLU networks under the Gaussian input distribution is hard even in the smoothed-analysis framework.
Our results are under a well-studied assumption on the existence of local pseudorandom generators.
arXiv Detail & Related papers (2023-02-15T02:00:26Z) - On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Understanding and mitigating noise in trained deep neural networks [0.0]
We study the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
We find that noise accumulation is generally bounded, and adding additional network layers does not worsen the signal-to-noise ratio beyond a limit.
We identify criteria allowing engineers to design noise-resilient novel neural network hardware.
arXiv Detail & Related papers (2021-03-12T17:16:26Z) - Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections [73.95786440318369]
We focus on the so-called 'implicit effect' of GNIs, which is the effect of the injected noise on the dynamics of stochastic gradient descent (SGD).
We show that this effect induces an asymmetric heavy-tailed noise on gradient updates.
We then formally prove that GNIs induce an 'implicit bias', which varies depending on the heaviness of the tails and the level of asymmetry.
arXiv Detail & Related papers (2021-02-13T21:28:09Z) - Input Similarity from the Neural Network Perspective [7.799648230758492]
A neural network trained on a dataset with noisy labels reaches almost perfect accuracy.
We show how to use a similarity measure to estimate sample density.
We also propose to enforce that examples known to be similar should also be seen as similar by the network.
arXiv Detail & Related papers (2021-02-10T04:57:30Z)