Manifolds and Modules: How Function Develops in a Neural Foundation Model
- URL: http://arxiv.org/abs/2512.07869v1
- Date: Wed, 26 Nov 2025 20:36:47 GMT
- Title: Manifolds and Modules: How Function Develops in a Neural Foundation Model
- Authors: Johannes Bertram, Luciano Dyballa, T. Anderson Keller, Savik Kinger, Steven W. Zucker,
- Abstract summary: We present a foundation model of neural activity, characterizing each 'neuron' based on its temporal response properties to parametric stimuli. We find that the different processing stages of the model each exhibit qualitatively different representational structures. Overall, we present this work as a study of the inner workings of a prominent neural foundation model, gaining insights into the biological relevance of its internals.
- Score: 5.518605965321172
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation models have shown remarkable success in fitting biological visual systems; however, their black-box nature inherently limits their utility for understanding brain function. Here, we peek inside a SOTA foundation model of neural activity (Wang et al., 2025) as a physiologist might, characterizing each 'neuron' based on its temporal response properties to parametric stimuli. We analyze how different stimuli are represented in neural activity space by building decoding manifolds, and we analyze how different neurons are represented in stimulus-response space by building neural encoding manifolds. We find that the different processing stages of the model (i.e., the feedforward encoder, recurrent, and readout modules) each exhibit qualitatively different representational structures in these manifolds. The recurrent module shows a jump in capabilities over the encoder module by 'pushing apart' the representations of different temporal stimulus patterns, while the readout module achieves biological fidelity by using numerous specialized feature maps rather than biologically plausible mechanisms. Overall, we present this work as a study of the inner workings of a prominent neural foundation model, gaining insights into the biological relevance of its internals through the novel analysis of its neurons' joint temporal response patterns.
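As a rough illustration of the two manifold constructions described in the abstract, the sketch below embeds a hypothetical response tensor twice: once with stimuli as points (a decoding manifold) and once with neurons as points (an encoding manifold). PCA stands in for whatever embedding method the paper actually uses, and all data, sizes, and variable names are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical response tensor: responses[n, s, t] = activity of model neuron n
# to parametric stimulus s at time bin t. All names and sizes are placeholders.
rng = np.random.default_rng(0)
n_neurons, n_stimuli, n_time = 200, 40, 50
responses = rng.standard_normal((n_neurons, n_stimuli, n_time))

# Decoding manifold: one point per stimulus, described by the full population
# response (neurons x time flattened into a single feature vector).
stim_features = responses.transpose(1, 0, 2).reshape(n_stimuli, -1)
decoding_manifold = PCA(n_components=3).fit_transform(stim_features)

# Encoding manifold: one point per neuron, described by its joint temporal
# response pattern across all stimuli (stimuli x time flattened).
neuron_features = responses.reshape(n_neurons, -1)
encoding_manifold = PCA(n_components=3).fit_transform(neuron_features)

print(decoding_manifold.shape, encoding_manifold.shape)  # (40, 3) (200, 3)
```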
Related papers
- Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence [55.07411490538404]
We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors.
arXiv Detail & Related papers (2025-11-13T09:28:41Z)
- State Space Models Naturally Produce Traveling Waves, Time Cells, and Scale to Abstract Cognitive Functions [7.097247619177705]
We propose a framework based on State-Space Models (SSMs), an emerging class of deep learning architectures. We demonstrate that the model spontaneously develops neural representations that strikingly mimic biological 'time cells'. Our findings position SSMs as a compelling framework that connects single-neuron dynamics to cognitive phenomena.
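To make the state-space recurrence concrete, here is a minimal linear SSM simulated in NumPy. It illustrates the generic update x[k+1] = A x[k] + B u[k], not the paper's architecture; the damped-rotation state matrix is an assumption chosen so that readout units peak at staggered delays, loosely in the spirit of 'time cells'.

```python
import numpy as np

# Generic discrete-time linear SSM (not the paper's architecture):
#   x[k+1] = A @ x[k] + B * u[k],   y[k] = C @ x[k]
rng = np.random.default_rng(1)
d_state = 16

def rotation_block(theta, decay=0.98):
    """Damped 2x2 rotation: gives oscillatory, wave-like hidden dynamics."""
    return decay * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])

# Block-diagonal A spanning a range of frequencies (an illustrative choice).
A = np.zeros((d_state, d_state))
for i, theta in enumerate(np.linspace(0.1, 1.0, d_state // 2)):
    A[2 * i:2 * i + 2, 2 * i:2 * i + 2] = rotation_block(theta)
B = rng.standard_normal(d_state)
C = rng.standard_normal((d_state, d_state))

# Drive the model with a single input pulse and record readout traces.
x = np.zeros(d_state)
traces = []
for k in range(100):
    u = 1.0 if k == 0 else 0.0
    x = A @ x + B * u
    traces.append(C @ x)
traces = np.asarray(traces)        # shape (time, units)
print(traces.argmax(axis=0)[:8])   # readout units peak at staggered delays
```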
arXiv Detail & Related papers (2025-07-18T03:53:16Z)
- NOBLE -- Neural Operator with Biologically-informed Latent Embeddings to Capture Experimental Variability in Biological Neuron Models [63.592664795493725]
NOBLE is a neural operator framework that learns a mapping from a continuous frequency-modulated embedding of interpretable neuron features to the somatic voltage response induced by current injection. It predicts distributions of neural dynamics accounting for the intrinsic experimental variability. NOBLE is the first scaled-up deep learning framework that validates its generalization with real experimental data.
arXiv Detail & Related papers (2025-06-05T01:01:18Z)
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Modularity in Transformers: Investigating Neuron Separability & Specialization [0.0]
Transformer models are increasingly prevalent in various applications, yet our understanding of their internal workings remains limited.
This paper investigates the modularity and task specialization of neurons within transformer architectures, focusing on both vision (ViT) and language (Mistral 7B) models.
Using a combination of selective pruning and MoEfication clustering techniques, we analyze the overlap and specialization of neurons across different tasks and data subsets.
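A minimal sketch of the clustering side of such an analysis, assuming a precomputed neuron-by-task activation matrix; the matrix, cluster count, and 'specialization' readout are illustrative placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical matrix: mean activation of each MLP neuron on inputs from each
# task. In practice this would be measured from the ViT or Mistral models.
rng = np.random.default_rng(2)
n_neurons, n_tasks = 512, 6
task_activations = rng.random((n_neurons, n_tasks))

# Cluster neurons by their activation profiles across tasks.
labels = KMeans(n_clusters=n_tasks, n_init=10, random_state=0).fit_predict(
    task_activations)

# Call a cluster 'specialized' if its mean activation concentrates on one task.
for c in range(n_tasks):
    profile = task_activations[labels == c].mean(axis=0)
    print(f"cluster {c}: preferred task {profile.argmax()}")
```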
arXiv Detail & Related papers (2024-08-30T14:35:01Z)
- CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations [3.3713037259290255]
Current analysis methods often fail to harness the richness of such data. CREIMBO identifies the hidden composition of per-session neural ensembles through graph-driven dictionary learning. We demonstrate CREIMBO's ability to recover true components in synthetic data.
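As a hedged stand-in for the ensemble-identification step, the sketch below runs plain sparse dictionary learning on a single hypothetical session; CREIMBO's graph-driven, cross-session coupling is omitted, and all sizes are arbitrary.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical single-session activity matrix (time bins x neurons).
rng = np.random.default_rng(3)
activity = rng.random((300, 50))

# Each dictionary atom is a candidate neural ensemble (a weighting over
# neurons); the sparse codes give per-timepoint ensemble activations.
model = DictionaryLearning(n_components=8, alpha=1.0, random_state=0)
codes = model.fit_transform(activity)   # (300, 8) ensemble activations
ensembles = model.components_           # (8, 50) neuron weightings
print(codes.shape, ensembles.shape)
```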
arXiv Detail & Related papers (2024-05-27T17:48:32Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
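The toy sketch below shows one generic way node embeddings can evolve under an ODE whose coupling follows a directed adjacency matrix, integrated by forward Euler; it is an assumption-laden illustration of the general idea, not the STE-ODE model.

```python
import numpy as np

# Node embeddings h evolve under dh/dt = f(h), with coupling routed through a
# directed (effective-connectivity) adjacency matrix; forward-Euler integration.
rng = np.random.default_rng(4)
n_nodes, dim = 10, 4
A_dir = (rng.random((n_nodes, n_nodes)) < 0.2).astype(float)  # directed edges
W = 0.1 * rng.standard_normal((dim, dim))

h = rng.standard_normal((n_nodes, dim))
dt = 0.05
for _ in range(200):
    msg = A_dir @ h              # aggregate embeddings from in-neighbors
    dh = np.tanh(msg @ W) - h    # toy dynamics with decay toward rest
    h = h + dt * dh
print(h.shape)                   # (10, 4) node embeddings at the final time
```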
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that our model outperforms other state-of-the-art models in prediction performance.
arXiv Detail & Related papers (2022-05-21T14:08:53Z)
- How to build a cognitive map: insights from models of the hippocampal formation [0.45880283710344055]
The concept of a cognitive map has emerged as one of the leading metaphors for these capacities.
Unravelling the learning and neural representation of such a map has become a central focus of neuroscience.
arXiv Detail & Related papers (2022-02-03T16:49:37Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
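A small sketch of how such community statistics can be computed, assuming a hypothetical binary activation matrix and a simple co-activation graph; the thresholds and graph construction are illustrative choices, not the paper's.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Hypothetical binary activation matrix: samples x neurons.
rng = np.random.default_rng(5)
acts = (rng.random((1000, 60)) > 0.8).astype(float)

# Co-activation graph: connect neurons that fire together unusually often.
co = acts.T @ acts
np.fill_diagonal(co, 0)
G = nx.from_numpy_array((co > 45).astype(int))

# Community quality (modularity) and entropy of the community-size distribution.
comms = community.greedy_modularity_communities(G)
Q = community.modularity(G, comms)
p = np.array([len(c) for c in comms], dtype=float)
p /= p.sum()
entropy = -(p * np.log2(p)).sum()
print(f"modularity={Q:.3f}, community-size entropy={entropy:.3f} bits")
```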
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
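In the spirit of the MPATH entry above, here is a minimal homeostatic neuron sketch: a leaky membrane potential paired with a threshold that drifts toward a target firing rate. The constants and update rule are assumptions for illustration, not the published model.

```python
import numpy as np

# Leaky membrane potential v plus an activation threshold that adapts toward a
# target firing rate. Constants and update rule are illustrative assumptions.
rng = np.random.default_rng(6)
v, thresh = 0.0, 1.0
leak, lr, target_rate = 0.9, 0.01, 0.05

spikes = []
for t in range(10000):
    u = rng.random()                  # hypothetical input drive
    v = leak * v + u                  # integrate input with a membrane leak
    fired = v > thresh
    if fired:
        v = 0.0                       # reset the membrane after activation
    # Homeostasis: firing above the target rate raises the threshold,
    # firing below it lowers the threshold.
    thresh += lr * (float(fired) - target_rate)
    spikes.append(fired)

print(f"rate over last 2000 steps: {np.mean(spikes[-2000:]):.3f} "
      f"(target {target_rate})")
```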
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.