Structure of activity in multiregion recurrent neural networks
- URL: http://arxiv.org/abs/2402.12188v3
- Date: Wed, 08 Jan 2025 17:50:03 GMT
- Title: Structure of activity in multiregion recurrent neural networks
- Authors: David G. Clark, Manuel Beiran
- Abstract summary: We study recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. We show that regions act as both generators and transmitters of activity -- roles that are often in tension.
- Score: 1.8416014644193066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to underlie computational flexibility, yet the structure of multiregion neural activity and its origins in synaptic connectivity remain poorly understood. We investigate recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. These networks exhibit high-dimensional fluctuations within regions and low-dimensional signal transmission between them. Using dynamical mean-field theory, with cross-region currents as order parameters, we show that regions act as both generators and transmitters of activity -- roles that are often in tension. Taming within-region activity can be crucial for effective signal routing. Unlike previous models that suppressed neural activity to control signal flow, our model achieves routing by exciting different high-dimensional activity patterns through connectivity structure and nonlinear dynamics. Our analysis offers insights into multiregion neural data and trained neural networks.
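The abstract's two central ingredients, random connectivity within regions and low-rank connectivity between them, are easy to illustrate in simulation. Below is a minimal sketch, not the authors' exact model: two tanh-rate regions, each with its own random recurrent matrix, coupled in one direction through a rank-1 communication subspace. All parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500          # neurons per region (assumed)
g = 1.5          # within-region random-connectivity gain (chaotic regime)
J_rank = 1.0     # strength of the rank-1 cross-region coupling (assumed)

# Within-region random connectivity, one matrix per region
J_aa = rng.normal(0.0, g / np.sqrt(N), (N, N))
J_bb = rng.normal(0.0, g / np.sqrt(N), (N, N))

# Rank-1 "communication subspace" from region A to region B:
# region B receives only the projection of A's rates onto v_ab.
u_ab = rng.normal(0.0, 1.0, N)          # input pattern in region B
v_ab = rng.normal(0.0, 1.0, N)          # readout pattern in region A
J_ab = J_rank * np.outer(u_ab, v_ab) / N

dt, T = 0.1, 2000
x_a = rng.normal(0.0, 1.0, N)
x_b = rng.normal(0.0, 1.0, N)
current_ab = np.zeros(T)                # scalar signal routed A -> B

for t in range(T):
    r_a, r_b = np.tanh(x_a), np.tanh(x_b)
    current_ab[t] = v_ab @ r_a / N      # low-dimensional transmission
    x_a += dt * (-x_a + J_aa @ r_a)
    x_b += dt * (-x_b + J_bb @ r_b + J_ab @ r_a)

print("std of A->B current:", current_ab.std())
```

In the paper's dynamical mean-field theory, scalar cross-region currents of exactly this kind (here `current_ab`) play the role of order parameters: each region generates high-dimensional fluctuations internally while transmitting only a low-dimensional projection downstream.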
Related papers
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Connectivity structure and dynamics of nonlinear recurrent neural networks [46.62658917638706]
We develop a theory to analyze how structure in connectivity shapes the high-dimensional, internally generated activity of neural networks.
Our theory provides tools to relate neural-network architecture and collective dynamics in artificial and biological systems.
arXiv Detail & Related papers (2024-09-03T15:08:37Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks in which the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a continuous action space (i.e., a deterministic policy) through a fully-connected layer (a minimal sketch of this decoding step follows this entry).
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
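The decoding step described above, from firing rates to a continuous action via a fully-connected layer, reduces to a linear readout of spike counts. A minimal sketch under assumed shapes and random weights; all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_actions, window = 128, 4, 50   # assumed sizes

# Spike trains over a decoding window (0/1 per neuron per timestep)
spikes = (rng.random((window, n_neurons)) < 0.05).astype(float)

# Firing rate = spike count / window length
rates = spikes.mean(axis=0)

# Fully-connected, floating-point readout to a continuous action vector:
# exactly the step the paper seeks to replace with non-spiking interneurons.
W = rng.normal(0.0, 1.0 / np.sqrt(n_neurons), (n_actions, n_neurons))
action = np.tanh(W @ rates)  # bounded deterministic policy output
print(action)
```

The paper's contribution, per the summary, is to eliminate this floating-point matrix operation entirely, drawing on the non-spiking interneurons found in insects.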
- The Evolution of the Interplay Between Input Distributions and Linear Regions in Networks [20.97553518108504]
We count the number of convex linear regions in deep ReLU networks (see the counting sketch after this entry).
In particular, we prove that for any one-dimensional input, there exists a minimum threshold for the number of neurons required to express it.
We also unveil the iterative refinement process of decision boundaries in ReLU networks during training.
arXiv Detail & Related papers (2023-10-28T15:04:53Z)
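For a one-dimensional input, linear regions can be counted empirically: the network computes a piecewise-linear function, and the affine piece changes exactly when some ReLU's on/off pattern changes. A minimal sketch, assuming a small random two-layer network and counting pattern changes along a dense input sweep (resolution-limited, so a lower bound):

```python
import numpy as np

rng = np.random.default_rng(2)
n_hidden = 16  # assumed width

# Random two-layer ReLU network f: R -> R
W1 = rng.normal(size=(n_hidden, 1))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_hidden, n_hidden))
b2 = rng.normal(size=n_hidden)

def activation_pattern(x):
    """Binary on/off pattern of all ReLU units at scalar input x."""
    h1 = np.maximum(W1[:, 0] * x + b1, 0.0)
    h2 = np.maximum(W2 @ h1 + b2, 0.0)
    return np.concatenate([h1 > 0, h2 > 0])

# Sweep a dense grid; each maximal run with a constant pattern is one
# linear region of the piecewise-linear input-output map.
xs = np.linspace(-10, 10, 20_000)
patterns = np.array([activation_pattern(x) for x in xs])
changes = np.any(patterns[1:] != patterns[:-1], axis=1).sum()
print("linear regions on [-10, 10]:", changes + 1)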
- Decomposing spiking neural networks with Graphical Neural Activity Threads [0.734084539365505]
We introduce techniques for analyzing spiking neural networks that decompose neural activity into multiple, disjoint, parallel threads of activity.
We find that this graph of spiking activity naturally decomposes into disjoint connected components that overlap in space and time.
We provide an efficient algorithm for finding analogous threads that reoccur in large spiking datasets, revealing that seemingly distinct spike trains are composed of similar underlying threads of activity.
arXiv Detail & Related papers (2023-06-29T05:10:11Z)
- Equivalence of Additive and Multiplicative Coupling in Spiking Neural Networks [0.0]
Spiking neural network models characterize the emergent collective dynamics of circuits of biological neurons.
We show that spiking neural network models with additive coupling are equivalent to models with multiplicative coupling.
arXiv Detail & Related papers (2023-03-31T20:19:11Z)
- Dimension of activity in random neural networks [6.752538702870792]
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units.
We calculate cross covariances self-consistently via a two-site cavity dynamical mean-field theory (DMFT); the classical single-site closure it extends is sketched after this entry.
Our formulae apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings.
arXiv Detail & Related papers (2022-07-25T17:38:21Z)
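For context, the single-site DMFT that the two-site cavity calculation extends is classical: for dynamics $\dot{x}_i = -x_i + \sum_j J_{ij}\,\phi(x_j)$ with i.i.d. Gaussian couplings of variance $g^2/N$, each unit becomes a Gaussian process whose autocovariance $\Delta(\tau)$ solves a self-consistent equation (Sompolinsky, Crisanti, and Sommers, 1988). A sketch of that closure follows; the paper's cross-covariance version is not reproduced here.

```latex
% Single-site DMFT closure for a random rate network (illustrative sketch).
% Delta(tau) is the autocovariance of x; the average defining C_phi is
% taken over a Gaussian process with covariance Delta.
\ddot{\Delta}(\tau) = \Delta(\tau) - g^2\, C_\phi(\tau),
\qquad
C_\phi(\tau) = \left\langle \phi\big(x(t)\big)\,\phi\big(x(t+\tau)\big) \right\rangle .
```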
- Input correlations impede suppression of chaos and learning in balanced rate networks [58.720142291102135]
Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity.
We show that in firing-rate networks in the balanced state, external control of recurrent dynamics strongly depends on correlations in the input.
arXiv Detail & Related papers (2022-01-24T19:20:49Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Rich dynamics caused by known biological brain network features resulting in stateful networks [0.0]
The internal state of a neuron or network becomes a defining factor in how information is represented within the network.
In this study we assessed the impact of varying specific intrinsic parameters of the neurons that enriched network state dynamics.
We found such effects were more profound in sparsely connected networks than in densely connected networks.
arXiv Detail & Related papers (2021-06-03T08:32:43Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- The distribution of inhibitory neurons in the C. elegans connectome facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z)
- Towards Interaction Detection Using Topological Analysis on Neural Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z)
- Training spiking neural networks using reinforcement learning [0.0]
We propose biologically-plausible alternatives to backpropagation to facilitate the training of spiking neural networks.
We focus on investigating the candidacy of reinforcement learning rules in solving the spatial and temporal credit assignment problems.
We compare and contrast the two approaches by applying them to traditional RL domains such as gridworld, cartpole and mountain car.
arXiv Detail & Related papers (2020-05-12T17:40:36Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)