Excitatory/Inhibitory Balance Emerges as a Key Factor for RBN
Performance, Overriding Attractor Dynamics
- URL: http://arxiv.org/abs/2308.10831v1
- Date: Wed, 2 Aug 2023 17:41:58 GMT
- Title: Excitatory/Inhibitory Balance Emerges as a Key Factor for RBN
Performance, Overriding Attractor Dynamics
- Authors: Emmanuel Calvet, Jean Rouat, Bertrand Reulet
- Abstract summary: Reservoir computing provides a time and cost-efficient alternative to traditional learning methods.
We show that specific distribution parameters can lead to diverse dynamics near critical points.
We then evaluate performance in two challenging tasks, memorization and prediction, and find that a positive excitatory balance produces a critical point with higher memory performance.
- Score: 35.70635792124142
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reservoir computing provides a time and cost-efficient alternative to
traditional learning methods. Critical regimes, known as the "edge of chaos,"
have been found to optimize computational performance in binary neural
networks. However, little attention has been devoted to studying
reservoir-to-reservoir variability when investigating the link between
connectivity, dynamics, and performance. As physical reservoir computers become
more prevalent, developing a systematic approach to network design is crucial.
In this article, we examine Random Boolean Networks (RBNs) and demonstrate that
specific distribution parameters can lead to diverse dynamics near critical
points. We identify distinct dynamical attractors and quantify their
statistics, revealing that most reservoirs possess a dominant attractor. We
then evaluate performance in two challenging tasks, memorization and
prediction, and find that a positive excitatory balance produces a critical
point with higher memory performance. In comparison, a negative inhibitory
balance delivers another critical point with better prediction performance.
Interestingly, we show that the intrinsic attractor dynamics have little
influence on performance in either case.
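To make the setup concrete, here is a minimal sketch (not the authors' code) of an RBN reservoir in NumPy: each node thresholds a signed sum of a few random neighbours, a `balance` parameter shifts the excitatory/inhibitory ratio of the +/-1 weights, and a least-squares readout is trained on the reservoir states for a toy delayed-recall (memorization) task. The function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rbn(n_nodes=100, k_in=4, balance=0.2):
    """Random Boolean Network: each node reads k_in random neighbours through
    +/-1 weights. `balance` in [-1, 1] shifts the excitatory/inhibitory ratio
    (0 = equal numbers of excitatory and inhibitory links on average)."""
    p_exc = 0.5 * (1.0 + balance)
    neighbours = np.array([rng.choice(n_nodes, size=k_in, replace=False)
                           for _ in range(n_nodes)])
    signs = rng.choice([1.0, -1.0], size=(n_nodes, k_in), p=[p_exc, 1.0 - p_exc])
    return neighbours, signs

def run_reservoir(neighbours, signs, u):
    """Drive the Boolean reservoir with a binary input sequence and collect states."""
    n_nodes = neighbours.shape[0]
    x = rng.integers(0, 2, n_nodes).astype(float)          # random initial Boolean state
    states = []
    for u_t in u:
        pre = (signs * x[neighbours]).sum(axis=1) + u_t    # weighted neighbour sum + input drive
        x = (pre > 0).astype(float)                        # Boolean threshold update
        states.append(x.copy())
    return np.array(states)

# Toy memorization task: recover the input from `delay` steps ago with a linear readout.
T, delay = 2000, 3
u = rng.integers(0, 2, T).astype(float)
neighbours, signs = make_rbn(balance=0.2)                  # positive (excitatory) balance
X, y = run_reservoir(neighbours, signs, u)[delay:], u[:-delay]
w, *_ = np.linalg.lstsq(X, y, rcond=None)                  # least-squares readout
print("delayed-recall accuracy:", np.mean((X @ w > 0.5) == y))
```

A negative `balance` would bias the reservoir toward inhibition, which the abstract associates with better prediction performance; sweeping `balance` and the in-degree `k_in` near the critical regime is the kind of experiment the paper studies.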
Related papers
- Understanding the Functional Roles of Modelling Components in Spiking Neural Networks [9.448298335007465]
Spiking neural networks (SNNs) are promising in achieving high computational efficiency with biological fidelity.
We investigate the functional roles of key modelling components, leakage, reset, and recurrence, in leaky integrate-and-fire (LIF) based SNNs.
Specifically, we find that the leakage plays a crucial role in balancing memory retention and robustness, the reset mechanism is essential for uninterrupted temporal processing and computational efficiency, and the recurrence enriches the capability to model complex dynamics at a cost of robustness degradation.
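For readers unfamiliar with those components, a generic (not this paper's) leaky integrate-and-fire update can be sketched as follows, assuming NumPy arrays: `tau` implements the leak, crossing `v_th` emits a spike followed by a reset to `v_reset`, and the `w_rec` term feeds previous spikes back in as recurrence. All names are illustrative.

```python
import numpy as np

def lif_step(v, i_in, spikes_prev, w_rec, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One leaky integrate-and-fire step with leakage, reset, and recurrence."""
    leak = -(v - v_reset) / tau                         # leakage decays v toward rest
    v = v + dt * (leak + i_in + w_rec @ spikes_prev)    # recurrence: previous spikes re-enter
    spikes = (v >= v_th).astype(float)                  # fire where the threshold is crossed
    v = np.where(spikes > 0, v_reset, v)                # reset the neurons that fired
    return v, spikes
```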
arXiv Detail & Related papers (2024-03-25T12:13:20Z)
- Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batchnorm parameters from some point partway into training matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z)
- ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization [52.5587113539404]
We introduce a causality-aware entropy term that effectively identifies and prioritizes actions with high potential impacts for efficient exploration.
Our proposed algorithm, ACE: Off-policy Actor-critic with Causality-aware Entropy regularization, demonstrates a substantial performance advantage across 29 diverse continuous control tasks.
arXiv Detail & Related papers (2024-02-22T13:22:06Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Understanding Self-attention Mechanism via Dynamical System Perspective [58.024376086269015]
Self-attention mechanism (SAM) is widely used in various fields of artificial intelligence.
We show that the intrinsic stiffness phenomenon (SP) found in high-precision solutions of ordinary differential equations (ODEs) also widely exists in high-performance neural networks (NNs).
We show that the SAM is also a stiffness-aware step size adaptor that can enhance the model's representational ability to measure intrinsic SP.
arXiv Detail & Related papers (2023-08-19T08:17:41Z)
- Robustness of Energy Landscape Control to Dephasing [0.0]
We analyze the robustness of the fidelity error, as measured by the logarithmic sensitivity function, to dephasing processes.
We show that, despite the different log-sensitivity calculations employed in this study, both demonstrate that the log-sensitivity of the fidelity error to dephasing results in a conventional trade-off between performance and robustness.
arXiv Detail & Related papers (2023-03-10T01:51:47Z)
- Optimal reservoir computers for forecasting systems of nonlinear dynamics [0.0]
We show that reservoirs of low connectivity perform better than or as well as those of high connectivity in forecasting noiseless Lorenz and coupled Wilson-Cowan systems.
We also show that, unexpectedly, computationally effective reservoirs of unconnected nodes (RUN) outperform reservoirs of linked network topologies in predicting these systems.
arXiv Detail & Related papers (2022-02-09T09:36:31Z)
- On the role of feedback in visual processing: a predictive coding perspective [0.6193838300896449]
We consider deep convolutional networks (CNNs) as models of feed-forward visual processing and implement Predictive Coding (PC) dynamics.
We find that the network increasingly relies on top-down predictions as the noise level increases.
In addition, the accuracy of the network implementing PC dynamics significantly increases over time-steps, compared to its equivalent forward network.
arXiv Detail & Related papers (2021-06-08T10:07:23Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that multiplicative noise, which commonly arises from variance in the stochastic updates, produces heavy-tailed behaviour in the parameters.
A detailed analysis describes the key factors involved, including step size and data, and shows that state-of-the-art neural network models exhibit similar behaviour.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Self-Supervised Dynamic Networks for Covariate Shift Robustness [9.542023122304098]
Self-Supervised Dynamic Networks (SSDN) is an input-dependent mechanism that allows a self-supervised network to predict the weights of the main network.
We present the conceptual and empirical advantages of the proposed method on the problem of image classification.
arXiv Detail & Related papers (2020-06-06T19:37:20Z)