Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks
- URL: http://arxiv.org/abs/2310.07711v1
- Date: Wed, 11 Oct 2023 17:58:25 GMT
- Title: Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks
- Authors: Ziming Liu, Mikail Khona, Ila R. Fiete, Max Tegmark
- Abstract summary: Recurrent neural networks (RNNs) trained on compositional tasks can exhibit functional modularity.
We apply a recent machine learning method, brain-inspired modular training, to a network being trained to solve a set of compositional cognitive tasks.
We find that functional and anatomical clustering emerge together, such that functionally similar neurons also become spatially localized and interconnected.
- Score: 18.375521792153112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent neural networks (RNNs) trained on compositional tasks can exhibit
functional modularity, in which neurons can be clustered by activity similarity
and participation in shared computational subtasks. Unlike brains, these RNNs
do not exhibit anatomical modularity, in which functional clustering is
correlated with strong recurrent coupling and spatial localization of
functional clusters. Contrasting with functional modularity, which can be
ephemerally dependent on the input, anatomically modular networks form a robust
substrate for solving the same subtasks in the future. To examine whether it is
possible to grow brain-like anatomical modularity, we apply a recent machine
learning method, brain-inspired modular training (BIMT), to a network being
trained to solve a set of compositional cognitive tasks. We find that
functional and anatomical clustering emerge together, such that functionally
similar neurons also become spatially localized and interconnected. Moreover,
compared to standard $L_1$ or no regularization settings, the model exhibits
superior performance by optimally balancing task performance and network
sparsity. In addition to achieving brain-like organization in RNNs, our
findings also suggest that BIMT holds promise for applications in neuromorphic
computing and enhancing the interpretability of neural network architectures.
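For intuition, here is a minimal, hypothetical sketch of a BIMT-style penalty in PyTorch: hidden units are pinned to a fixed 2D grid, and an L1 cost on the recurrent weights is scaled by connection length, so training favors sparse, short-range wiring. The grid layout, the `lambda_wire` coefficient, and the placeholder task loss are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of a BIMT-style wiring penalty for an RNN (PyTorch).
# Hypothetical setup: hidden units pinned to a fixed 8x8 grid; the grid,
# `lambda_wire`, and the placeholder task loss are illustrative, not the
# authors' exact configuration.
import torch
import torch.nn as nn

side, n_hidden = 8, 64

# Fixed geometric embedding: a 2D coordinate for every hidden unit.
gy, gx = torch.meshgrid(torch.arange(side), torch.arange(side), indexing="ij")
coords = torch.stack((gy, gx), dim=-1).reshape(n_hidden, 2).float()

# Pairwise Euclidean distance between unit positions: (n_hidden, n_hidden).
dist = torch.cdist(coords, coords)

rnn = nn.RNN(input_size=10, hidden_size=n_hidden, batch_first=True)

def wiring_cost(rnn, dist, lambda_wire=1e-3):
    """Distance-weighted L1 on recurrent weights: long-range connections
    pay more, so the strong weights that survive tend to be local."""
    w_hh = rnn.weight_hh_l0  # recurrent weights, (n_hidden, n_hidden)
    return lambda_wire * (w_hh.abs() * dist).sum()

# One training step with the augmented objective.
x = torch.randn(32, 20, 10)        # (batch, time, input features)
out, _ = rnn(x)
task_loss = out.pow(2).mean()      # stand-in for the real task objective
loss = task_loss + wiring_cost(rnn, dist)
loss.backward()
```

Setting the distance matrix to all ones recovers plain $L_1$ regularization, which is exactly the baseline the abstract compares against; the BIMT paper additionally describes periodically swapping neuron positions to further shorten total wiring.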
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics [0.0]
Spatially embedded recurrent neural networks provide a promising avenue to study how modelled constraints shape the combined structural and functional organisation of networks over learning.
We show that it is possible to study these restrictions through entropic measures of the neural weights and eigenspectrum, across both rate and spiking neural networks.
This work deepens our understanding of constrained learning in neural networks, across coding schemes and tasks, where solutions to simultaneous structural and functional objectives must be accomplished in tandem.
arXiv Detail & Related papers (2024-09-26T10:00:05Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Modular Growth of Hierarchical Networks: Efficient, General, and Robust Curriculum Learning [0.0]
We show that for a given classical, non-modular recurrent neural network (RNN), an equivalent modular network will perform better across multiple metrics.
We demonstrate that the inductive bias introduced by the modular topology is strong enough for the network to perform well even when the connectivity within modules is fixed.
Our findings suggest that gradual modular growth of RNNs could provide advantages for learning increasingly complex tasks on evolutionary timescales.
arXiv Detail & Related papers (2024-06-10T13:44:07Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - Transformer-Based Hierarchical Clustering for Brain Network Analysis [13.239896897835191]
We propose a novel interpretable transformer-based model for joint hierarchical cluster identification and brain network classification.
With the help of hierarchical clustering, the model achieves increased accuracy and reduced runtime complexity while providing plausible insight into the functional organization of brain regions.
arXiv Detail & Related papers (2023-05-06T22:14:13Z) - Seeing is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability [5.15188009671301]
Brain-Inspired Modular Training is a method for making neural networks more modular and interpretable.
BIMT embeds neurons in a geometric space and augments the loss function with a cost proportional to the length of each neuron connection; a toy check of the co-localization this encourages is sketched just after this list.
arXiv Detail & Related papers (2023-05-04T17:56:42Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z) - A Graph Neural Network Framework for Causal Inference in Brain Networks [0.3392372796177108]
A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static backbone.
We present a graph neural network (GNN) framework to describe functional interactions based on structural anatomical layout.
We show that GNNs are able to capture long-term dependencies in data and also scale up to the analysis of large-scale networks.
arXiv Detail & Related papers (2020-10-14T15:01:21Z) - Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
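To connect this back to the main result, here is a hypothetical companion check for the co-localization claim: cluster hidden units by activity similarity, then ask whether members of the same functional cluster sit close together in the embedding space. KMeans, the distance-ratio statistic, and the variable names are illustrative assumptions, not the paper's actual analysis pipeline.

```python
# Hedged sketch: do functional clusters coincide with spatial clusters?
# `acts` and `coords` are assumed inputs; random data is used only as a demo.
import numpy as np
from sklearn.cluster import KMeans

def colocalization_ratio(acts, coords, n_clusters=4):
    """acts: (timesteps, n_hidden) hidden activities recorded across tasks.
    coords: (n_hidden, 2) spatial positions of hidden units.
    Returns mean within-cluster distance / mean overall distance; values
    well below 1 indicate anatomically localized functional clusters."""
    # Functional clustering: group units with similar activity profiles.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(acts.T)
    # Anatomical check: pairwise distances between unit positions.
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    within = np.mean([d[np.ix_(labels == k, labels == k)].mean()
                      for k in range(n_clusters)])
    return within / d.mean()

# Demo on random data, where the ratio should hover near 1; a BIMT-trained
# network is expected to push it well below 1.
acts = np.random.randn(500, 64)
coords = np.random.rand(64, 2)
print(colocalization_ratio(acts, coords))
```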
This list is automatically generated from the titles and abstracts of the papers on this site.