Input correlations impede suppression of chaos and learning in balanced
rate networks
- URL: http://arxiv.org/abs/2201.09916v1
- Date: Mon, 24 Jan 2022 19:20:49 GMT
- Title: Input correlations impede suppression of chaos and learning in balanced
rate networks
- Authors: Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L. F. Abbott
- Abstract summary: Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics strongly depends on correlations in the input.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural circuits exhibit complex activity patterns, both spontaneously and
evoked by external stimuli. Information encoding and learning in neural
circuits depend on how well time-varying stimuli can control spontaneous
network activity. We show that in firing-rate networks in the balanced state,
external control of recurrent dynamics, i.e., the suppression of
internally-generated chaotic variability, strongly depends on correlations in
the input. A unique feature of balanced networks is that, because common
external input is dynamically canceled by recurrent feedback, it is far easier
to suppress chaos with independent inputs into each neuron than through common
input. To study this phenomenon we develop a non-stationary dynamic mean-field
theory that determines how the activity statistics and largest Lyapunov
exponent depend on frequency and amplitude of the input, recurrent coupling
strength, and network size, for both common and independent input. We also show
that uncorrelated inputs facilitate learning in balanced networks.
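
As a rough illustration of the central claim, the sketch below drives a simple random rate network with a sinusoid that is either common to all neurons or given an independent random phase per neuron, and estimates the largest Lyapunov exponent with the standard two-trajectory (Benettin) renormalization method. This is a hedged toy example, not the paper's non-stationary dynamic mean-field theory: the network is a plain Gaussian-coupled rate model rather than a full balanced excitatory-inhibitory circuit, and the size, gain, amplitude, and frequency values are illustrative assumptions. Under the paper's claim, the independent-input case should show a lower (more strongly suppressed) exponent at matched amplitude.

```python
# Minimal sketch: largest Lyapunov exponent of a driven random rate network,
# comparing common vs. independent sinusoidal input (illustrative parameters).
import numpy as np

def largest_lyapunov(common_input, N=400, g=2.0, amp=2.0, freq=1.0,
                     dt=0.02, T=100.0, seed=0):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))      # random recurrent coupling
    # Common input: one shared sinusoid; independent input: a random phase per neuron.
    phases = np.zeros(N) if common_input else rng.uniform(0.0, 2.0 * np.pi, N)
    h = rng.normal(0.0, 1.0, N)                           # reference trajectory
    h_p = h + 1e-8 * rng.normal(0.0, 1.0, N)              # nearby perturbed trajectory
    d0 = np.linalg.norm(h_p - h)
    log_growth, steps = 0.0, int(T / dt)
    for k in range(steps):
        I = amp * np.sin(2.0 * np.pi * freq * k * dt + phases)   # external drive
        h = h + dt * (-h + J @ np.tanh(h) + I)                   # Euler step
        h_p = h_p + dt * (-h_p + J @ np.tanh(h_p) + I)
        d = np.linalg.norm(h_p - h)
        log_growth += np.log(d / d0)
        h_p = h + (d0 / d) * (h_p - h)                    # renormalize the separation
    return log_growth / (steps * dt)                      # average exponential growth rate

print("largest Lyapunov exponent, common input:     ", largest_lyapunov(True))
print("largest Lyapunov exponent, independent input:", largest_lyapunov(False))
```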
Related papers
- Identifying the impact of local connectivity patterns on dynamics in excitatory-inhibitory networks [4.913318028439159]
We show that a particular pattern of connectivity, the chain motif, has a much stronger impact on dominant eigenmodes than other pairwise motifs.
An overrepresentation of chain motifs induces a strong positive eigenvalue in inhibition-dominated networks.
These findings have direct implications for the interpretation of experiments in which responses to optogenetic perturbations are measured and used to infer the dynamical regime of cortical circuits.
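
To build intuition for the eigenmode result above, the sketch below uses a crude proxy for chain-motif overrepresentation (not the construction in that paper): each neuron gets a shared factor entering both its incoming and outgoing weights, which positively correlates weight pairs J_ij and J_jk along chains k → j → i (and, as a side effect, other second-order motifs as well). The structured matrix acquires an eigenvalue outlier that the unstructured matrix of the same Gaussian part lacks; all parameters are illustrative assumptions.

```python
# Hedged sketch: motif-like structure creating a dominant eigenvalue outlier.
import numpy as np

rng = np.random.default_rng(1)
N, g, c = 400, 0.8, 1.5
Z = rng.normal(0.0, g / np.sqrt(N), size=(N, N))      # unstructured connectivity

# Per-neuron factor h_j appears in both row j (incoming) and column j (outgoing),
# so weight pairs (J_ij, J_jk) become positively correlated through the shared h_j.
h = rng.normal(0.0, 1.0, N)
J_chain = Z + (c / N) * np.add.outer(h, h)            # J_ij = Z_ij + (c/N)(h_i + h_j)

for name, J in [("unstructured", Z), ("chain-motif proxy", J_chain)]:
    ev = np.linalg.eigvals(J)
    print(f"{name}: leading |eigenvalue| = {np.abs(ev).max():.2f}")
```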
arXiv Detail & Related papers (2024-11-11T08:57:44Z) - Heterogeneous quantization regularizes spiking neural network activity [0.0]
We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations.
We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics.
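
The two-stage idea above can be sketched as follows; this is an illustrative toy, not that paper's implementation, and only range adaptation (not density adaptation) is shown. A data-blind stage normalizes analog values with a fixed assumed range and quantizes them into discrete spike-phase bins within an encoding cycle; a data-aware calibration stage then adapts the range to accumulated input statistics before re-quantizing. All names and parameters are assumptions for illustration.

```python
# Hedged sketch: spike-phase quantization with a data-aware range calibration step.
import numpy as np

N_PHASES = 16                      # discrete phase bins per encoding cycle (assumed)

def to_spike_phase(x, lo, hi, n_phases=N_PHASES):
    """Normalize x into [0, 1] for an assumed range, then quantize to a phase bin."""
    z = np.clip((x - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return np.minimum((z * n_phases).astype(int), n_phases - 1)

class PhaseEncoder:
    def __init__(self, lo=-1.0, hi=1.0):
        self.lo, self.hi = lo, hi                      # data-blind assumed range
        self.seen_lo, self.seen_hi = np.inf, -np.inf   # accumulated input statistics

    def encode(self, batch):
        self.seen_lo = min(self.seen_lo, float(batch.min()))
        self.seen_hi = max(self.seen_hi, float(batch.max()))
        return to_spike_phase(batch, self.lo, self.hi)

    def calibrate(self):
        # Data-aware step: adapt the quantization range to observed statistics.
        self.lo, self.hi = self.seen_lo, self.seen_hi

rng = np.random.default_rng(0)
enc = PhaseEncoder()
x = 3.0 * rng.standard_normal(1000)          # analog data wider than the assumed range
blind = enc.encode(x)
enc.calibrate()
aware = enc.encode(x)

clipped = lambda q: float(np.mean((q == 0) | (q == N_PHASES - 1)))
print("fraction in extreme phase bins, data-blind range:", clipped(blind))
print("fraction in extreme phase bins, calibrated range:", clipped(aware))
```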
arXiv Detail & Related papers (2024-09-27T02:25:44Z) - Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of
Conjugate Variables in System Attacks [54.565579874913816]
Neural networks demonstrate inherent vulnerability to small, non-random perturbations that manifest as adversarial attacks.
A mathematical congruence emerges between this mechanism and the uncertainty principle of quantum physics, revealing a previously unanticipated interdisciplinary connection.
arXiv Detail & Related papers (2024-02-16T02:11:27Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
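
The parameter-count argument above can be made concrete with a small sketch of the connectivity parameterization alone (not the CfC model itself): a recurrent weight matrix factored as a low-rank product, or masked to a sparse topology, uses far fewer trainable parameters than a dense full-rank matrix of the same size. Shapes, rank, and density below are illustrative assumptions.

```python
# Hedged sketch: low-rank and sparse parameterizations of a recurrent weight matrix.
import numpy as np

rng = np.random.default_rng(0)
n_units, rank, density = 128, 8, 0.1

# Full-rank, fully connected recurrent weights: n_units**2 parameters.
W_full = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)

# Low-rank parameterization: W = U @ V.T, with n_units * rank parameters per factor.
U = rng.standard_normal((n_units, rank)) / np.sqrt(rank)
V = rng.standard_normal((n_units, rank)) / np.sqrt(n_units)
W_lowrank = U @ V.T

# Sparse variant: keep a fixed random mask of connections (parameters = nonzeros).
mask = rng.random((n_units, n_units)) < density
W_sparse = np.where(mask, W_full, 0.0)

print("full-rank params:", W_full.size)
print("low-rank params: ", U.size + V.size)
print("sparse params:   ", int(mask.sum()))
```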
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Uncovering the Origins of Instability in Dynamical Systems: How
Attention Mechanism Can Help? [0.0]
We show that attention should be directed toward the collective behaviour of imbalanced structures and polarity-driven structural instabilities within the network.
Our study provides a proof of concept to understand why perturbing some nodes of a network may cause dramatic changes in the network dynamics.
arXiv Detail & Related papers (2022-12-19T17:16:41Z) - Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural
Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
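
As background for the entry above, the sketch below illustrates what cross-frequency (phase-amplitude) coupling looks like in a signal; it is not that paper's memory-capacity model. A gamma-band oscillation whose amplitude is modulated by the phase of a slower theta rhythm is compared to an unmodulated one using a mean-vector-length coupling measure; frequencies and modulation depth are illustrative assumptions, and the raw absolute value is used as a crude amplitude proxy instead of a band-passed Hilbert envelope.

```python
# Hedged sketch: quantifying theta-gamma phase-amplitude coupling in a toy signal.
import numpy as np

fs, dur = 1000.0, 10.0                       # sampling rate (Hz) and duration (s)
t = np.arange(0.0, dur, 1.0 / fs)
f_theta, f_gamma = 6.0, 60.0                 # illustrative theta and gamma frequencies

theta_phase = 2.0 * np.pi * f_theta * t
envelope = 1.0 + 0.8 * np.cos(theta_phase)   # theta-phase-dependent gamma amplitude
coupled   = envelope * np.cos(2.0 * np.pi * f_gamma * t)
uncoupled = np.cos(2.0 * np.pi * f_gamma * t)

def mean_vector_length(phase, signal):
    """Mean-vector-length estimate of phase-amplitude coupling (crude amplitude proxy)."""
    amp = np.abs(signal)
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

print("coupling measure, theta-gamma coupled:", mean_vector_length(theta_phase, coupled))
print("coupling measure, no coupling:        ", mean_vector_length(theta_phase, uncoupled))
```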
arXiv Detail & Related papers (2022-04-05T17:13:36Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
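
A minimal sketch of the joint-perturbation idea above, not that paper's formal definition: for a tiny logistic-regression model, perturbing the input alone is compared with perturbing the input and the weights together using FGSM-style sign steps, and the resulting loss increase is reported. The model, data, and step sizes are illustrative assumptions.

```python
# Hedged sketch: joint input + weight perturbation vs. input-only perturbation.
import numpy as np

rng = np.random.default_rng(0)
d = 20
w = rng.standard_normal(d) / np.sqrt(d)      # toy model weights
x = rng.standard_normal(d)                   # toy input
y = 1.0                                      # true label in {0, 1}

def loss_and_grads(w, x, y):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))       # sigmoid prediction
    loss = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    g = p - y                                # d(loss)/d(w @ x)
    return loss, g * w, g * x                # gradients w.r.t. x and w, respectively

eps_x, eps_w = 0.05, 0.05
loss0, gx, gw = loss_and_grads(w, x, y)

x_adv = x + eps_x * np.sign(gx)                          # input-only attack
loss_x, _, _ = loss_and_grads(w, x_adv, y)

w_adv = w + eps_w * np.sign(gw)                          # joint input + weight attack
loss_joint, _, _ = loss_and_grads(w_adv, x_adv, y)

print(f"clean loss:            {loss0:.3f}")
print(f"input-only perturbed:  {loss_x:.3f}")
print(f"jointly perturbed:     {loss_joint:.3f}")
```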
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Residual networks classify inputs based on their neural transient
dynamics [0.0]
We show analytically that there are cooperation and competition dynamics between the residuals corresponding to each input dimension.
In cases where residuals do not converge to an attractor state, their internal dynamics are separable for each input class, and the network can reliably approximate the output.
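
A toy sketch of reading out class information from transient residual dynamics, in the spirit of the entry above but not its analysis: inputs from two classes are pushed through repeated residual updates with shared weights, and the separation between class-mean hidden states is tracked across depth alongside the within-class spread. Dimensions, depth, and noise level are illustrative assumptions.

```python
# Hedged sketch: tracking class separability along residual transient dynamics.
import numpy as np

rng = np.random.default_rng(0)
dim, depth, n_per_class = 32, 30, 50

W = rng.standard_normal((dim, dim)) / np.sqrt(dim)           # shared residual weights
mu_a, mu_b = rng.standard_normal(dim), rng.standard_normal(dim)
ha = mu_a + 0.3 * rng.standard_normal((n_per_class, dim))    # class A inputs
hb = mu_b + 0.3 * rng.standard_normal((n_per_class, dim))    # class B inputs

for layer in range(depth):
    ha = ha + np.tanh(ha @ W.T)        # residual update: h <- h + f(h)
    hb = hb + np.tanh(hb @ W.T)
    if layer % 10 == 9:
        sep = np.linalg.norm(ha.mean(0) - hb.mean(0))
        within = 0.5 * (ha.std(0).mean() + hb.std(0).mean())
        print(f"layer {layer + 1:2d}: between-class separation {sep:.2f}, "
              f"within-class spread {within:.2f}")
```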
arXiv Detail & Related papers (2021-01-08T13:54:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.