Structure of activity in multiregion recurrent neural networks
- URL: http://arxiv.org/abs/2402.12188v2
- Date: Tue, 20 Feb 2024 17:32:32 GMT
- Title: Structure of activity in multiregion recurrent neural networks
- Authors: David G. Clark, Manuel Beiran
- Abstract summary: We study the dynamics of neural networks with multiple interconnected regions.
Within each region, neurons have a combination of random and structured recurrent connections.
We show that taming the complexity of activity within a region is necessary for it to route signals to and from other regions.
- Score: 2.1756081703276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural circuits are composed of multiple regions, each with rich dynamics and
engaging in communication with other regions. The combination of local,
within-region dynamics and global, network-level dynamics is thought to provide
computational flexibility. However, the nature of such multiregion dynamics and
the underlying synaptic connectivity patterns remain poorly understood. Here,
we study the dynamics of recurrent neural networks with multiple interconnected
regions. Within each region, neurons have a combination of random and
structured recurrent connections. Motivated by experimental evidence of
communication subspaces between cortical areas, these networks have low-rank
connectivity between regions, enabling selective routing of activity. These
networks exhibit two interacting forms of dynamics: high-dimensional
fluctuations within regions and low-dimensional signal transmission between
regions. To characterize this interaction, we develop a dynamical mean-field
theory to analyze such networks in the limit where each region contains
infinitely many neurons, with cross-region currents as key order parameters.
Regions can act as both generators and transmitters of activity, roles that we
show are in conflict. Specifically, taming the complexity of activity within a
region is necessary for it to route signals to and from other regions. Unlike
previous models of routing in neural circuits, which suppressed the activities
of neuronal groups to control signal flow, routing in our model is achieved by
exciting different high-dimensional activity patterns through a combination of
connectivity structure and nonlinear recurrent dynamics. This theory provides
insight into the interpretation of both multiregion neural data and trained
neural networks.
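A minimal simulation sketch of the model class described in the abstract, assuming for concreteness two regions, tanh rate units, i.i.d. Gaussian within-region couplings, and rank-one between-region couplings; all parameter names and values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200          # neurons per region
g = 1.5          # within-region random coupling strength
dt, T = 0.05, 400

# Within-region connectivity: i.i.d. Gaussian, variance g^2 / N
J1 = g * rng.standard_normal((N, N)) / np.sqrt(N)
J2 = g * rng.standard_normal((N, N)) / np.sqrt(N)

# Between-region connectivity: rank one (u v^T / N), a stand-in for a
# low-dimensional "communication subspace" between cortical areas
u12, v12 = rng.standard_normal(N), rng.standard_normal(N)
u21, v21 = rng.standard_normal(N), rng.standard_normal(N)

x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
currents = []
for _ in range(int(T / dt)):
    r1, r2 = np.tanh(x1), np.tanh(x2)
    # cross-region currents are scalar overlaps: the low-dimensional signals
    s21 = v21 @ r1 / N   # what region 1 sends to region 2
    s12 = v12 @ r2 / N   # what region 2 sends to region 1
    x1 += dt * (-x1 + J1 @ r1 + u12 * s12)
    x2 += dt * (-x2 + J2 @ r2 + u21 * s21)
    currents.append((s12, s21))

currents = np.array(currents)
print(currents.shape)  # (8000, 2)
```

In this sketch the scalar overlaps `s12` and `s21` play the role of the cross-region order parameters; how well they survive against the high-dimensional fluctuations generated within each region depends on `g`, loosely echoing the generator-versus-transmitter tension the abstract describes.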
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe these empirical results demonstrate the importance of our assumptions at the most basic, neuronal level of representation.
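The AKOrN construction itself is not spelled out in this summary; as background, the classical Kuramoto phase dynamics that oscillatory units build on can be sketched as follows (coupling strength and frequency distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, dt = 50, 2.0, 0.01
omega = rng.standard_normal(n)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)  # initial phases

for _ in range(5000):
    # each oscillator is pulled toward the phases of the others
    theta += dt * (omega + (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1))

# Kuramoto order parameter r in [0, 1] measures synchrony
r = abs(np.exp(1j * theta).mean())
print(r)
```

Above a critical coupling `K`, the order parameter `r` departs from near zero as a subpopulation of oscillators phase-locks, which is the dynamical regime a phase-based neuron model can exploit.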
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Decomposing spiking neural networks with Graphical Neural Activity
Threads [0.734084539365505]
We introduce techniques for analyzing spiking neural networks that decompose neural activity into multiple, disjoint, parallel threads of activity.
We find that this graph of spiking activity naturally decomposes into disjoint connected components that overlap in space and time.
We provide an efficient algorithm for finding analogous threads that reoccur in large spiking datasets, revealing that seemingly distinct spike trains are composed of similar underlying threads of activity.
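The thread-extraction algorithm itself is not given in this summary; one hypothetical way to realize its core idea, linking spikes on synaptically connected neurons that occur within a short window and taking connected components of the resulting graph, is (spike data and synapse set here are made up for illustration):

```python
# spikes: (neuron, time) events; edges link spikes on connected neurons
# that occur within a short causal window
spikes = [(0, 1.0), (1, 1.5), (2, 2.0), (3, 10.0), (4, 10.4)]
synapses = {(0, 1), (1, 2), (3, 4)}   # directed connections (hypothetical)
window = 1.0                          # max delay for a causal link

# union-find over spike events
parent = list(range(len(spikes)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

for a, (na, ta) in enumerate(spikes):
    for b, (nb, tb) in enumerate(spikes):
        if (na, nb) in synapses and 0 < tb - ta <= window:
            parent[find(a)] = find(b)

# each connected component is one "thread" of activity
threads = {}
for i in range(len(spikes)):
    threads.setdefault(find(i), []).append(spikes[i])
print(list(threads.values()))  # two threads: spikes 0-2 and spikes 3-4
```

Here the first three spikes chain into one thread and the last two into another, even though all events interleave in the raw spike train; the paper's graphs are presumably richer, but the decomposition-by-connected-components idea is the same.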
arXiv Detail & Related papers (2023-06-29T05:10:11Z) - Equivalence of Additive and Multiplicative Coupling in Spiking Neural
Networks [0.0]
Spiking neural network models characterize the emergent collective dynamics of circuits of biological neurons.
We show that spiking neural network models with additive coupling are equivalent to models with multiplicative coupling.
arXiv Detail & Related papers (2023-03-31T20:19:11Z) - Dimension of activity in random neural networks [6.752538702870792]
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
We calculate cross covariances self-consistently via a two-site cavity DMFT.
Our formulae apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings.
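A standard way to quantify the "dimension of activity" studied in this line of work is the participation ratio of the cross-covariance spectrum. A sketch on a simulated random rate network (parameters illustrative; the paper computes these covariances analytically via the cavity method rather than by simulation):

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, dt = 300, 2.0, 0.05
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random couplings
x = rng.standard_normal(N)

xs = []
for t in range(6000):
    x += dt * (-x + J @ np.tanh(x))
    if t >= 2000:          # discard the initial transient
        xs.append(x.copy())

X = np.array(xs)
C = np.cov(X.T)                          # N x N cross-covariance matrix
lam = np.linalg.eigvalsh(C)
PR = lam.sum() ** 2 / (lam ** 2).sum()   # participation ratio
print(PR)
```

The participation ratio equals `N` when all covariance eigenvalues are equal and approaches 1 when a single mode dominates, so `PR / N` gives a scale-free measure of how distributed the chaotic activity is.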
arXiv Detail & Related papers (2022-07-25T17:38:21Z) - Input correlations impede suppression of chaos and learning in balanced
rate networks [58.720142291102135]
Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics strongly depends on correlations in the input.
arXiv Detail & Related papers (2022-01-24T19:20:49Z) - Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z) - Rich dynamics caused by known biological brain network features
resulting in stateful networks [0.0]
The internal state of a neuron or network becomes a defining factor in how information is represented within the network.
In this study we assessed the impact of varying specific intrinsic parameters of the neurons that enriched network state dynamics.
We found such effects were more profound in sparsely connected networks than in densely connected networks.
arXiv Detail & Related papers (2021-06-03T08:32:43Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - The distribution of inhibitory neurons in the C. elegans connectome
facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z) - Training spiking neural networks using reinforcement learning [0.0]
We propose biologically-plausible alternatives to backpropagation to facilitate the training of spiking neural networks.
We focus on investigating the candidacy of reinforcement learning rules in solving the spatial and temporal credit assignment problems.
We compare and contrast the two approaches by applying them to traditional RL domains such as gridworld, cartpole and mountain car.
arXiv Detail & Related papers (2020-05-12T17:40:36Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.