Input correlations impede suppression of chaos and learning in balanced
rate networks
- URL: http://arxiv.org/abs/2201.09916v1
- Date: Mon, 24 Jan 2022 19:20:49 GMT
- Title: Input correlations impede suppression of chaos and learning in balanced
rate networks
- Authors: Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L.
F. Abbott
- Abstract summary: Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics strongly depends on correlations in the input.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural circuits exhibit complex activity patterns, both spontaneously and
evoked by external stimuli. Information encoding and learning in neural
circuits depend on how well time-varying stimuli can control spontaneous
network activity. We show that in firing-rate networks in the balanced state,
external control of recurrent dynamics, i.e., the suppression of
internally-generated chaotic variability, strongly depends on correlations in
the input. A unique feature of balanced networks is that, because common
external input is dynamically canceled by recurrent feedback, it is far easier
to suppress chaos with independent inputs into each neuron than through common
input. To study this phenomenon we develop a non-stationary dynamic mean-field
theory that determines how the activity statistics and largest Lyapunov
exponent depend on frequency and amplitude of the input, recurrent coupling
strength, and network size, for both common and independent input. We also show
that uncorrelated inputs facilitate learning in balanced networks.
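The central quantity in the abstract, the largest Lyapunov exponent, can be estimated numerically by co-evolving a tangent vector with the network. The sketch below is not the authors' code: it uses a standard random tanh rate network dx/dt = -x + J tanh(x) + I(t) rather than the E-I balanced architecture the paper analyzes, and all parameter values and function names are illustrative.

```python
import numpy as np

def largest_lyapunov(g=2.5, N=300, T=200.0, dt=0.05,
                     input_mode="none", amp=0.0, freq=1.0, seed=0):
    """Estimate the largest Lyapunov exponent of a random rate network
    dx/dt = -x + J tanh(x) + I(t) by co-evolving a tangent vector."""
    rng = np.random.default_rng(seed)
    J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # chaotic regime for g > 1
    x = rng.normal(0.0, 1.0, N)
    v = rng.normal(0.0, 1.0, N)
    v /= np.linalg.norm(v)
    # common input: identical phase for all neurons; independent: random phases
    phases = rng.uniform(0.0, 2 * np.pi, N) if input_mode == "independent" else 0.0
    steps = int(T / dt)
    burn = steps // 5  # discard the transient before accumulating growth rates
    log_growth = 0.0
    for k in range(steps):
        if input_mode == "none":
            I = 0.0
        else:
            I = amp * np.cos(2 * np.pi * freq * k * dt + phases)
        r = np.tanh(x)
        # tangent dynamics use the Jacobian -1 + J diag(1 - tanh(x)^2)
        v = v + dt * (-v + J @ ((1.0 - r**2) * v))
        x = x + dt * (-x + J @ r + I)
        n = np.linalg.norm(v)
        if k >= burn:
            log_growth += np.log(n)
        v /= n
    return log_growth / ((steps - burn) * dt)
```

With these illustrative settings, the undriven network has a positive exponent (chaos), and sufficiently strong time-varying input drives the exponent down; the same estimator can be applied to the balanced E-I networks and common-versus-independent input comparison studied in the paper.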
Related papers
- On the dynamics of convolutional recurrent neural networks near their critical point [0.0]
We study the dynamical properties of a single-layer convolutional recurrent network with a smooth sigmoidal activation function.
We present analytical solutions for the steady states when the network is forced with a single oscillation.
We derive the relationships shaping the value of the temporal decay and spatial propagation length as a function of this background value.
arXiv Detail & Related papers (2024-05-22T17:29:12Z)
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks demonstrate inherent vulnerability to small, non-random perturbations, emerging as adversarial attacks.
A mathematical congruence manifests between this mechanism and the quantum physics' uncertainty principle, casting light on a hitherto unanticipated interdisciplinarity.
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Uncovering the Origins of Instability in Dynamical Systems: How Attention Mechanism Can Help? [0.0]
We show that attention should be directed toward the collective behaviour of imbalanced structures and polarity-driven structural instabilities within the network.
Our study provides a proof of concept to understand why perturbing some nodes of a network may cause dramatic changes in the network dynamics.
arXiv Detail & Related papers (2022-12-19T17:16:41Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Residual networks classify inputs based on their neural transient dynamics [0.0]
We show analytically that there is a cooperation and competition dynamics between residuals corresponding to each input dimension.
In cases where residuals do not converge to an attractor state, their internal dynamics are separable for each input class, and the network can reliably approximate the output.
arXiv Detail & Related papers (2021-01-08T13:54:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.