Interneurons accelerate learning dynamics in recurrent neural networks for statistical adaptation
- URL: http://arxiv.org/abs/2209.10634v2
- Date: Thu, 24 Aug 2023 13:46:05 GMT
- Title: Interneurons accelerate learning dynamics in recurrent neural networks for statistical adaptation
- Authors: David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii
- Abstract summary: We study the benefits of mediating recurrent communication via interneurons compared with direct recurrent connections.
Our results suggest interneurons are useful for rapid adaptation to changing input statistics.
- Score: 39.245842636392865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Early sensory systems in the brain rapidly adapt to fluctuating input
statistics, which requires recurrent communication between neurons.
Mechanistically, such recurrent communication is often indirect and mediated by
local interneurons. In this work, we explore the computational benefits of
mediating recurrent communication via interneurons compared with direct
recurrent connections. To this end, we consider two mathematically tractable
recurrent linear neural networks that statistically whiten their inputs -- one
with direct recurrent connections and the other with interneurons that mediate
recurrent communication. By analyzing the corresponding continuous synaptic
dynamics and numerically simulating the networks, we show that the network with
interneurons is more robust to initialization than the network with direct
recurrent connections in the sense that the convergence time for the synaptic
dynamics in the network with interneurons (resp. direct recurrent connections)
scales logarithmically (resp. linearly) with the spectrum of their
initialization. Our results suggest that interneurons are computationally
useful for rapid adaptation to changing input statistics. Interestingly, the
network with interneurons is an overparameterized solution of the whitening
objective for the network with direct recurrent connections, so our results can
be viewed as a recurrent linear neural network analogue of the implicit
acceleration phenomenon observed in overparameterized feedforward linear neural
networks.
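As a concrete illustration, the following NumPy sketch simulates both circuits on the same input stream: a network whose equilibrium response is y = (I + M)^{-1} x with direct symmetric lateral weights M, and an overparameterized network in which the lateral interaction is carried by interneurons with weights W, so that the effective lateral weights are W^T W. The Hebbian-style plasticity rules, the initialization, and the input statistics used here are illustrative assumptions rather than the paper's exact equations; both networks are simply driven toward the whitening fixed point E[y y^T] = I.

```python
# Minimal sketch (assumed plasticity rules, not the paper's exact equations):
# online statistical whitening with direct lateral weights M versus
# interneuron-mediated lateral weights W^T W.
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 5, 20000, 0.01          # dimension, update steps, learning rate

# Input covariance C with eigenvalues > 1 so that the PSD interneuron
# parameterization W^T W can also reach the whitening fixed point exactly.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
evals = np.linspace(1.5, 4.0, d)
C = Q @ np.diag(evals) @ Q.T
A = Q @ np.diag(np.sqrt(evals)) @ Q.T            # x = A z has covariance C

M = 2.0 * np.eye(d)                 # direct lateral weights (deliberately large init)
W = np.sqrt(2.0) * np.eye(d)        # interneuron weights, W^T W = M at initialization

for _ in range(T):
    x = A @ rng.normal(size=d)                   # sample an input with covariance C

    # Direct recurrent network: equilibrium response, then lateral plasticity.
    y = np.linalg.solve(np.eye(d) + M, x)
    M += eta * (np.outer(y, y) - np.eye(d))      # expected update vanishes at E[y y^T] = I

    # Interneuron-mediated network: principal cells yi, interneurons n = W yi.
    yi = np.linalg.solve(np.eye(d) + W.T @ W, x)
    n = W @ yi
    W += eta * (np.outer(n, yi) - W)             # Hebbian update with weight decay

def whitening_error(B):
    """Frobenius distance of the output covariance B C B^T from the identity."""
    return np.linalg.norm(B @ C @ B.T - np.eye(d))

print("direct      :", whitening_error(np.linalg.inv(np.eye(d) + M)))
print("interneuron :", whitening_error(np.linalg.inv(np.eye(d) + W.T @ W)))
```

Varying the scale of the initial M (and the matching W) is one way to probe the robustness-to-initialization claim numerically, since the paper's analysis concerns how convergence time depends on the spectrum of that initialization.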
Related papers
- Prospective Messaging: Learning in Networks with Communication Delays [12.63723517446906]
Inter-neuron communication delays are ubiquitous in physically realized neural networks.
We show that delays prevent state-of-the-art continuous-time neural networks from learning even simple tasks.
We then propose to compensate for communication delays by predicting future signals based on currently available ones.
arXiv Detail & Related papers (2024-07-07T20:54:14Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Neural Operator Learning for Long-Time Integration in Dynamical Systems with Recurrent Neural Networks [1.6874375111244329]
Deep neural networks offer reduced computational costs during inference and can be trained directly from observational data.
Existing methods, however, cannot extrapolate accurately and are prone to error accumulation in long-time integration.
We address this issue by combining neural operators with recurrent neural networks, learning the operator mapping while providing a recurrent structure that captures temporal dependencies.
arXiv Detail & Related papers (2023-03-03T22:19:23Z)
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics and exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning in Feedforward Neural Networks Accelerated by Transfer Entropy [0.0]
Transfer entropy (TE) was initially introduced as an information transfer measure used to quantify the statistical coherence between events (time series).
Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks.
We introduce a backpropagation type training algorithm that uses TE feedback connections to improve its performance.
arXiv Detail & Related papers (2021-04-29T19:07:07Z)
- Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning [0.0]
In this work, we introduce and test a novel implementation of recurrent neural networks in deep learning.
We provide an algorithm that implements backpropagation for an implicit implementation of recurrent networks.
A single-layer implicit recurrent network is able to solve the XOR problem, while a feed-forward network with a monotonically increasing activation function fails at this task.
arXiv Detail & Related papers (2020-10-20T18:55:32Z)
- Online neural connectivity estimation with ensemble stimulation [5.156484100374058]
We propose a method based on noisy group testing that drastically increases the efficiency of connectivity estimation in sparse networks.
We show that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size (a toy sketch of this scaling follows the list).
We also demonstrate the feasibility of inferring connectivity for networks of up to tens of thousands of neurons online.
arXiv Detail & Related papers (2020-07-27T23:47:03Z)
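The logarithmic scaling mentioned in the last entry can be illustrated with a toy example. The sketch below uses plain noiseless non-adaptive group testing with COMP decoding, standing in for that paper's noisy, ensemble-stimulation procedure; the population size, number of connections, and pooling constants are arbitrary choices for illustration.

```python
# Toy non-adaptive group testing (noiseless, COMP decoding): recover which of
# n candidate presynaptic neurons connect to a recorded cell from pooled tests.
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 5                            # population size, true connections

truth = np.zeros(n, dtype=bool)
truth[rng.choice(n, size=k, replace=False)] = True

m = int(6 * k * np.log(n))                # number of pooled tests, O(k log n)
pools = rng.random((m, n)) < 1.0 / k      # each neuron joins each pool w.p. 1/k
outcomes = (pools & truth).any(axis=1)    # a pool is positive if it contains a true connection

# COMP decoding: a neuron that appears in any negative pool cannot be connected.
in_negative_pool = (pools & ~outcomes[:, None]).any(axis=0)
estimate = ~in_negative_pool

print("tests used:", m, "of", n, "candidates;",
      "exact recovery:", np.array_equal(estimate, truth))
```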