Neural Functional Alignment Space: Brain-Referenced Representation of Artificial Neural Networks
- URL: http://arxiv.org/abs/2603.00793v1
- Date: Sat, 28 Feb 2026 19:48:27 GMT
- Title: Neural Functional Alignment Space: Brain-Referenced Representation of Artificial Neural Networks
- Authors: Ruiyu Yan, Hanqi Jiang, Yi Pan, Xiaobo Li, Tianming Liu, Xi Jiang, Lin Zhao
- Abstract summary: We propose a brain-referenced framework for characterizing artificial neural networks on equal functional grounds. NFAS departs from conventional alignment approaches by modeling the intrinsic dynamical evolution of stimulus representations across network depth. Across 45 pretrained models spanning vision, audio, and language, NFAS reveals structured organization within this brain-referenced space.
- Score: 15.491307802291836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the Neural Functional Alignment Space (NFAS), a brain-referenced representational framework for characterizing artificial neural networks on equal functional grounds. NFAS departs from conventional alignment approaches that rely on layer-wise features or task-specific activations by modeling the intrinsic dynamical evolution of stimulus representations across network depth. Specifically, we model layer-wise embeddings as a depth-wise dynamical trajectory and apply Dynamic Mode Decomposition (DMD) to extract the stable mode. This representation is then projected into a biologically anchored coordinate system defined by distributed neural responses. We also introduce the Signal-to-Noise Consistency Index (SNCI) to quantify cross-model consistency at the modality level. Across 45 pretrained models spanning vision, audio, and language, NFAS reveals structured organization within this brain-referenced space, including modality-specific clustering and cross-modal convergence in integrative cortical systems. Our findings suggest that representation dynamics provide a principled basis for…
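The depth-wise DMD step is compact enough to sketch. The following is a minimal reading of the abstract, not the authors' implementation: it assumes one pooled embedding per layer, an exact-DMD formulation with a truncation rank `rank`, and a "stable mode" selected as the mode whose eigenvalue lies closest to the unit circle; all three choices are assumptions not stated in the abstract.

```python
import numpy as np

def dmd_stable_mode(layer_embeddings, rank=10):
    """Exact DMD over a depth-wise trajectory of layer embeddings.

    layer_embeddings: (L, d) array, one pooled embedding per layer,
    treated as successive snapshots of a discrete dynamical system
    x_{k+1} ~= A x_k evolving over network depth.
    """
    Y = np.asarray(layer_embeddings, dtype=float).T  # (d, L) snapshot matrix
    X, Xp = Y[:, :-1], Y[:, 1:]                      # paired snapshots

    # Reduced-rank SVD of X to build the low-dimensional operator
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, int((s > 1e-10).sum()))
    U, s, Vh = U[:, :r], s[:r], Vh[:r]

    # A_tilde = U* X' V S^{-1}: depth-transition operator in reduced space
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)

    # Lift eigenvectors back to the full embedding space (exact DMD modes)
    Phi = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W

    # "Stable mode": eigenvalue magnitude closest to 1 (assumed criterion)
    k = int(np.argmin(np.abs(np.abs(eigvals) - 1.0)))
    return Phi[:, k], eigvals[k]
```

Per the abstract, the extracted mode would then be projected into the biologically anchored coordinate system defined by distributed neural responses; that projection depends on the brain data and is not sketched here.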
Related papers
- Neuronal Group Communication for Efficient Neural representation [85.36421257648294]
This paper addresses the question of how to build large neural systems that learn efficient, modular, and interpretable representations. We propose Neuronal Group Communication (NGC), a theory-driven framework that reimagines a neural network as a dynamical system of interacting neuronal groups. NGC treats weights as transient interactions between embedding-like neuronal states, with neural computation unfolding through iterative communication among groups of neurons.
arXiv Detail & Related papers (2025-10-19T14:23:35Z)
- Data-Efficient Neural Training with Dynamic Connectomes [1.2260914111581283]
We introduce a novel approach to characterizing training dynamics in neural networks by representing evolving neural activations as functional connectomes. Our results show that these signatures effectively capture key transitions in the functional organization of the network; a minimal sketch of the connectome construction follows this entry.
arXiv Detail & Related papers (2025-08-09T04:32:23Z)
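One standard way to realize "evolving neural activations as functional connectomes" is a per-checkpoint correlation matrix over units. This is a hedged sketch of that idea, not necessarily the paper's connectivity measure, and `connectome_drift` is a hypothetical helper name:

```python
import numpy as np

def functional_connectome(acts):
    """Functional connectome at one training checkpoint.

    acts: (n_samples, n_units) activations of a layer over a probe
    batch. Returns the (n_units, n_units) Pearson correlation matrix
    with self-connections zeroed out.
    """
    C = np.corrcoef(acts.T)      # rows of the input are variables (units)
    np.fill_diagonal(C, 0.0)
    return C

def connectome_drift(acts_t, acts_t_next):
    """One scalar signature of reorganization between checkpoints:
    Frobenius distance between successive connectomes."""
    return np.linalg.norm(
        functional_connectome(acts_t) - functional_connectome(acts_t_next)
    )
```

Tracking such a drift curve over training checkpoints is one plausible way the "key transitions" above could be detected.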
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation (written out after this entry). Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
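For reference, the underdamped Langevin equation named above is standardly written as the SDE below; the mass m, damping γ, learned potential U, and force f correspond to the "physical priors" listed in the summary, though LangevinFlow's exact parameterization may differ.

```latex
\begin{aligned}
\mathrm{d}\mathbf{x}_t &= \mathbf{v}_t\,\mathrm{d}t,\\
m\,\mathrm{d}\mathbf{v}_t &= \bigl[-\gamma\,\mathbf{v}_t - \nabla U(\mathbf{x}_t) + \mathbf{f}(t)\bigr]\,\mathrm{d}t + \sqrt{2\gamma k_B T}\,\mathrm{d}\mathbf{W}_t
\end{aligned}
```

Here x_t is the latent position, v_t its velocity, and W_t a Wiener process; setting f = 0 recovers the autonomous case mentioned in the summary.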
- Biologically Inspired Spiking Diffusion Model with Adaptive Lateral Selection Mechanism [5.135901078097114]
We develop a novel diffusion model based on spiking neural networks (SNNs). We leverage a spiking inner loop alongside a lateral connection mechanism to iteratively refine the substructure selection network. Our model consistently surpasses state-of-the-art SNN-based generative models across multiple benchmark datasets.
arXiv Detail & Related papers (2025-03-31T06:31:50Z)
- IP$^{2}$-RSNN: Bi-level Intrinsic Plasticity Enables Learning-to-learn in Recurrent Spiking Neural Networks [20.88195975299024]
We develop a recurrent spiking neural network with bi-level intrinsic plasticity (IP$^{2}$-RSNN). Our results indicate that the proposed bi-level intrinsic plasticity plays a critical role in enabling learning-to-learn (L2L) in RSNNs.
arXiv Detail & Related papers (2025-01-24T14:45:03Z)
- Neural Symbolic Regression of Complex Network Dynamics [28.356824329954495]
We propose Physically Inspired Neural Dynamics Symbolic Regression (PI-NDSR) to automatically learn the symbolic expression of dynamics.
We evaluate our method on synthetic datasets generated by various dynamics and real datasets on disease spreading.
arXiv Detail & Related papers (2024-10-15T02:02:30Z)
- Graph-Based Representation Learning of Neuronal Dynamics and Behavior [2.3859858429583665]
We introduce the Temporal Attention-enhanced Variational Graph Recurrent Neural Network (TAVRNN), a novel framework that models time-varying neuronal connectivity. TAVRNN learns latent dynamics at the single-unit level while maintaining interpretable population-level representations. We validate TAVRNN on three diverse datasets: (1) electrophysiological data from a freely behaving rat, (2) primate somatosensory cortex recordings during a reaching task, and (3) biological neurons in the DishBrain platform interacting with a virtual game environment.
arXiv Detail & Related papers (2024-10-01T13:19:51Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- The Cooperative Network Architecture: Learning Structured Networks as Representation of Sensory Patterns [3.9848584845601014]
We introduce the Cooperative Network Architecture (CNA), a model that represents sensory signals using structured, recurrently connected networks of neurons, termed "nets". We demonstrate that net fragments can be learned without supervision and flexibly recombined to encode novel patterns, enabling figure completion and resilience to noise.
arXiv Detail & Related papers (2024-07-08T06:22:10Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming to capture the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons; a minimal sketch of such a joint objective follows this entry.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
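The training idea in this last entry reduces to a joint objective: fit the task output everywhere, and fit the internal dynamics only on the recorded subset of units. Below is a minimal PyTorch sketch under assumed names (`RateRNN`, `joint_loss`, the trade-off weight `alpha`); the authors' model and loss details will differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RateRNN(nn.Module):
    """Vanilla RNN with a linear readout (a stand-in for the
    physiologically inspired model referenced above)."""
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, u):                 # u: (batch, time, n_in)
        h, _ = self.rnn(u)                # h: (batch, time, n_hidden)
        return self.readout(h), h

def joint_loss(model, u, y_target, r_target, recorded_idx, alpha=1.0):
    """Output loss on all timesteps plus a dynamics loss restricted
    to the sparsely recorded units indexed by recorded_idx; alpha is
    an assumed trade-off weight."""
    y_hat, h = model(u)
    task_term = F.mse_loss(y_hat, y_target)
    dyn_term = F.mse_loss(h[..., recorded_idx], r_target)
    return task_term + alpha * dyn_term
```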