Structure of activity in multiregion recurrent neural networks
- URL: http://arxiv.org/abs/2402.12188v3
- Date: Wed, 08 Jan 2025 17:50:03 GMT
- Title: Structure of activity in multiregion recurrent neural networks
- Authors: David G. Clark, Manuel Beiran
- Abstract summary: We study recurrent neural networks with multiple regions, each containing neurons with random and structured connections.
Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing.
We show that regions act as both generators and transmitters of activity -- roles that are often in tension.
- Abstract: Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to underlie computational flexibility, yet the structure of multiregion neural activity and its origins in synaptic connectivity remain poorly understood. We investigate recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. These networks exhibit high-dimensional fluctuations within regions and low-dimensional signal transmission between them. Using dynamical mean-field theory, with cross-region currents as order parameters, we show that regions act as both generators and transmitters of activity -- roles that are often in tension. Taming within-region activity can be crucial for effective signal routing. Unlike previous models that suppressed neural activity to control signal flow, our model achieves routing by exciting different high-dimensional activity patterns through connectivity structure and nonlinear dynamics. Our analysis offers insights into multiregion neural data and trained neural networks.
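The architecture described in the abstract can be made concrete with a small simulation. Below is a minimal sketch, assuming tanh rate units, dense random within-region connectivity, and rank-one "communication subspace" couplings between regions; the sizes, gain g, and integration step are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and gain; not values from the paper.
n_regions, N, rank = 3, 400, 1
g = 1.5          # strength of random within-region connectivity
dt, n_steps = 0.1, 1000

# Within-region connectivity: dense random Gaussian (high-dimensional chaos).
J = [g * rng.standard_normal((N, N)) / np.sqrt(N) for _ in range(n_regions)]

# Between-region connectivity: low-rank, i.e. a "communication subspace"
# W[a][b] = u v^T / N from region b to region a.
u = rng.standard_normal((n_regions, n_regions, N, rank))
v = rng.standard_normal((n_regions, n_regions, N, rank))

x = rng.standard_normal((n_regions, N))   # state of each region
for _ in range(n_steps):
    r = np.tanh(x)
    dx = -x
    for a in range(n_regions):
        dx[a] = dx[a] + J[a] @ r[a]       # locally generated activity
        for b in range(n_regions):
            if b != a:
                # Only the projection of region b's activity onto v
                # (the "cross-region current") is routed into region a.
                current = v[a, b].T @ r[b] / N     # shape (rank,)
                dx[a] = dx[a] + u[a, b] @ current
    x = x + dt * dx
```

The cross-region currents computed in the inner loop are low-dimensional by construction, mirroring the paper's picture of high-dimensional fluctuations within regions and low-dimensional transmission between them.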
Related papers
- Connectivity structure and dynamics of nonlinear recurrent neural networks [46.62658917638706]
We develop a theory to analyze how structure in connectivity shapes the high-dimensional, internally generated activity of neural networks.
Our theory provides tools to relate neural-network architecture and collective dynamics in artificial and biological systems.
arXiv Detail & Related papers (2024-09-03T15:08:37Z) - Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies to control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into the continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
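A minimal sketch of the conventional rate-decoding baseline that this summary describes: firing rates are passed through a floating-point fully-connected layer to produce a deterministic action. All sizes and the tanh squashing are illustrative assumptions; this is the criticized baseline, not the paper's interneuron-based scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 64 spiking output neurons, 4-dimensional action.
n_neurons, n_actions, n_steps = 64, 4, 100

# Binary spike trains from the actor SNN's last spiking layer.
spikes = (rng.random((n_steps, n_neurons)) < 0.1).astype(float)

# Firing rate = spike count over the simulation window.
rates = spikes.mean(axis=0)

# The criticized step: a floating-point fully-connected readout maps
# rates to a deterministic continuous action.
W = 0.1 * rng.standard_normal((n_actions, n_neurons))
action = np.tanh(W @ rates)   # bounded continuous action vector
```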
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - The Evolution of the Interplay Between Input Distributions and Linear
Regions in Networks [20.97553518108504]
We count the number of linear convex regions in deep neural networks based on ReLU.
In particular, we prove that for any one-dimensional input, there exists a minimum threshold for the number of neurons required to express it.
We also unveil the iterative refinement process of decision boundaries in ReLU networks during training.
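A minimal sketch of counting linear regions for a one-dimensional input: a ReLU network is piecewise linear, so the map is linear wherever the on/off pattern of its units is constant, and regions can be estimated by detecting pattern changes along a dense grid. The network shape and scan range below are hypothetical, and the grid-based count is a lower-bound estimate rather than the paper's exact analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1 -> 16 -> 16 -> 1 ReLU network with random weights.
W1, b1 = rng.standard_normal((16, 1)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 16)), rng.standard_normal(16)

def activation_pattern(x):
    """On/off pattern of every ReLU for scalar input x; the input-output
    map is linear wherever this pattern is constant."""
    h1 = W1 @ np.array([x]) + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

# Count regions on [-5, 5] by detecting pattern changes on a dense grid;
# regions narrower than the grid spacing are missed.
xs = np.linspace(-5.0, 5.0, 20_000)
patterns = [activation_pattern(x) for x in xs]
n_regions = 1 + sum(p != q for p, q in zip(patterns, patterns[1:]))
print("estimated linear regions:", n_regions)
```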
arXiv Detail & Related papers (2023-10-28T15:04:53Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust
Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
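A minimal sketch of the two parameterizations named in the title, low-rank and sparse recurrent weight matrices, and the parameter savings they buy over a full-rank, fully-connected matrix. The sizes, rank, and sparsity level are illustrative, and this is not the CfC model itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hidden size, rank, and sparsity; not from the paper.
n, rank, sparsity = 128, 4, 0.9

# Low-rank factor pair: W_rec = U @ V^T has rank <= 4 and only
# 2 * n * rank trainable parameters instead of n * n.
U = rng.standard_normal((n, rank)) / np.sqrt(n)
V = rng.standard_normal((n, rank)) / np.sqrt(n)
W_lowrank = U @ V.T

# Sparse alternative: a fixed random mask zeroes 90% of entries.
mask = rng.random((n, n)) > sparsity
W_sparse = (rng.standard_normal((n, n)) / np.sqrt(n)) * mask

print("low-rank params:", 2 * n * rank, "| dense params:", n * n)
```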
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Decomposing spiking neural networks with Graphical Neural Activity
Threads [0.734084539365505]
We introduce techniques for analyzing spiking neural networks that decompose neural activity into multiple, disjoint, parallel threads of activity.
We find that this graph of spiking activity naturally decomposes into disjoint connected components that overlap in space and time.
We provide an efficient algorithm for finding analogous threads that reoccur in large spiking datasets, revealing that seemingly distinct spike trains are composed of similar underlying threads of activity.
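A minimal sketch of the decomposition idea: treat individual spikes as graph nodes, link spikes that plausibly influence one another, and read off disjoint threads as connected components. The toy linking rule below (a fixed time window) is an assumption standing in for the paper's construction.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(4)

# Toy spike data: (neuron_id, time) pairs, sorted by time.
n_spikes = 300
neurons = rng.integers(0, 50, n_spikes)
times = np.sort(rng.uniform(0.0, 10.0, n_spikes))

# Hypothetical causality rule: link spike j -> spike i if spike j
# occurs within a short window before spike i.
window = 0.02
rows, cols = [], []
for i in range(n_spikes):
    for j in range(i):
        if times[i] - times[j] < window:
            rows.append(j)
            cols.append(i)

adj = coo_matrix((np.ones(len(rows)), (rows, cols)),
                 shape=(n_spikes, n_spikes))
n_threads, labels = connected_components(adj, directed=False)
print("disjoint activity threads:", n_threads)
```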
arXiv Detail & Related papers (2023-06-29T05:10:11Z) - Dimension of activity in random neural networks [6.752538702870792]
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
We calculate cross covariances self-consistently via a two-site cavity dynamical mean-field theory (DMFT).
Our formulae apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings.
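The cross covariances obtained from such a DMFT determine an effective dimension of activity. Below is a minimal sketch of the standard participation-ratio dimension computed from a covariance spectrum, using synthetic activity; it illustrates the quantity itself, not the cavity calculation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy activity matrix: N units by T time points.
N, T = 200, 5000
X = rng.standard_normal((N, T))

# Equal-time covariance across units.
C = np.cov(X)

# Participation ratio: (sum of eigenvalues)^2 / sum of squared
# eigenvalues. It equals N for isotropic activity and approaches 1
# when a single mode dominates.
evals = np.linalg.eigvalsh(C)
pr = evals.sum() ** 2 / (evals ** 2).sum()
print(f"participation ratio: {pr:.1f} of {N} units")
```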
arXiv Detail & Related papers (2022-07-25T17:38:21Z) - Learning Interpretable Models for Coupled Networks Under Domain
Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
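A loose illustration of hard structural constraints in coupled-network estimation: structural (anatomical) edges define which functional interactions may be nonzero. Note that the paper's formulation places the constraints on the noise term; this simpler masked least-squares sketch, with invented sizes, only conveys the general idea of coupling the two networks.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: functional signals from 20 brain regions, 500 samples.
n_regions, n_samples = 20, 500
Y = rng.standard_normal((n_samples, n_regions))

# Hypothetical structural adjacency (e.g., from diffusion imaging):
# 1 where an anatomical edge exists, 0 otherwise.
S = (rng.random((n_regions, n_regions)) < 0.2).astype(float)
np.fill_diagonal(S, 0)

# Hard constraint: estimate each region's incoming interactions by
# least squares restricted to structurally connected predictors, so
# absent anatomical edges force exact zeros.
B = np.zeros((n_regions, n_regions))
for i in range(n_regions):
    parents = np.flatnonzero(S[i])
    if parents.size:
        coef, *_ = np.linalg.lstsq(Y[:, parents], Y[:, i], rcond=None)
        B[i, parents] = coef
```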
arXiv Detail & Related papers (2021-04-19T06:23:31Z) - The distribution of inhibitory neurons in the C. elegans connectome
facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
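A minimal sketch of approximating neurotransmitter identity in a connectome-style model: each neuron is assigned an excitatory or inhibitory character, so all of its outgoing weights share one sign. The toy connectome, inhibitory fraction, and binary dynamics below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy connectome: 100 neurons, sparse directed synapses.
n = 100
A = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(A, 0)

# Approximate neurotransmitter identity: each presynaptic neuron is
# either excitatory or inhibitory, so every outgoing weight of neuron i
# carries sign[i] (the 0.3 inhibitory fraction is illustrative).
sign = np.where(rng.random(n) < 0.3, -1.0, 1.0)
W = A * sign[:, None]   # row i holds the outputs of neuron i

# Binary-state dynamics: a neuron is active if its weighted input
# is positive (tiny offset breaks exact ties).
s = rng.choice([-1.0, 1.0], size=n)
for _ in range(50):
    s = np.sign(W.T @ s + 1e-9)
```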
arXiv Detail & Related papers (2020-10-28T23:11:37Z) - Towards Interaction Detection Using Topological Analysis on Neural
Networks [55.74562391439507]
In neural networks, interacting features must follow strongly weighted connections to common hidden units.
We propose a new measure for quantifying interaction strength, based on the well-established theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
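The shared-hidden-unit principle stated above suggests a simple proxy: two features can only interact strongly if some hidden unit carries large weights to both. The sketch below implements that min-over-weights proxy with a made-up weight matrix; the paper's PID measure is built on persistent homology instead.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical first-layer weights: 32 hidden units, 10 input features.
W1 = rng.standard_normal((32, 10))

def pairwise_interaction_strength(W, i, j):
    """Proxy for feature interaction: features i and j interact only if
    some hidden unit is strongly connected to both, so take the weaker
    of the two weights per unit and the max over units."""
    return np.max(np.minimum(np.abs(W[:, i]), np.abs(W[:, j])))

strengths = {(i, j): pairwise_interaction_strength(W1, i, j)
             for i in range(10) for j in range(i + 1, 10)}
top = max(strengths, key=strengths.get)
print("strongest candidate interaction:", top)
```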
arXiv Detail & Related papers (2020-10-25T02:15:24Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
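A minimal sketch of the joint objective implied by this training strategy: penalize output error plus the mismatch of internal activities, the latter evaluated only on a sparsely recorded subset of units. Shapes, the synthetic targets, and the weighting alpha are hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy shapes: 100-unit RNN, 10 recorded ("observed") units, T steps.
N, n_obs, T = 100, 10, 200
observed = rng.choice(N, n_obs, replace=False)

# Targets: output trace and internal activities of the recorded subset,
# e.g. from a physiologically inspired reference model.
target_out = rng.standard_normal(T)
target_hid = rng.standard_normal((T, n_obs))

def loss(model_out, model_hid, alpha=1.0):
    """Joint objective: fit input-output behavior AND internal dynamics,
    but only on the sparsely recorded units."""
    out_err = np.mean((model_out - target_out) ** 2)
    hid_err = np.mean((model_hid[:, observed] - target_hid) ** 2)
    return out_err + alpha * hid_err
```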
arXiv Detail & Related papers (2020-05-05T14:16:54Z)