Disentangling Shared and Private Neural Dynamics with SPIRE: A Latent Modeling Framework for Deep Brain Stimulation
- URL: http://arxiv.org/abs/2510.25023v1
- Date: Tue, 28 Oct 2025 22:45:52 GMT
- Title: Disentangling Shared and Private Neural Dynamics with SPIRE: A Latent Modeling Framework for Deep Brain Stimulation
- Authors: Rahil Soroushmojdehi, Sina Javadzadeh, Mehrnaz Asadi, Terence D. Sanger
- Abstract summary: SPIRE is a deep multi-encoder autoencoder that factorizes recordings into shared and private latent subspaces. It robustly recovers cross-regional structure and reveals how external perturbations reorganize it. It is applied to intracranial deep brain stimulation (DBS) recordings.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Disentangling shared network-level dynamics from region-specific activity is a central challenge in modeling multi-region neural data. We introduce SPIRE (Shared-Private Inter-Regional Encoder), a deep multi-encoder autoencoder that factorizes recordings into shared and private latent subspaces with novel alignment and disentanglement losses. Trained solely on baseline data, SPIRE robustly recovers cross-regional structure and reveals how external perturbations reorganize it. On synthetic benchmarks with ground-truth latents, SPIRE outperforms classical probabilistic models under nonlinear distortions and temporal misalignments. Applied to intracranial deep brain stimulation (DBS) recordings, SPIRE shows that shared latents reliably encode stimulation-specific signatures that generalize across sites and frequencies. These results establish SPIRE as a practical, reproducible tool for analyzing multi-region neural dynamics under stimulation.
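The abstract names alignment and disentanglement losses over shared and private latent subspaces but does not give their exact form. A minimal NumPy sketch of the idea, using hypothetical linear encoders, an alignment loss (agreement between the two regions' shared latents) and a disentanglement loss (penalized cross-covariance between shared and private latents), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two regions, T time points, D channels each (hypothetical shapes).
T, D, K = 200, 8, 3          # K = latent dimension per subspace
x_a = rng.standard_normal((T, D))
x_b = rng.standard_normal((T, D))

# Hypothetical linear encoders: each region gets a shared and a private map.
W_shared_a = rng.standard_normal((D, K)) * 0.1
W_shared_b = rng.standard_normal((D, K)) * 0.1
W_priv_a = rng.standard_normal((D, K)) * 0.1
W_priv_b = rng.standard_normal((D, K)) * 0.1

s_a, s_b = x_a @ W_shared_a, x_b @ W_shared_b   # shared latents
p_a, p_b = x_a @ W_priv_a, x_b @ W_priv_b       # private latents

# Alignment loss: shared latents from both regions should agree.
align_loss = np.mean((s_a - s_b) ** 2)

def cross_cov(u, v):
    """Cross-covariance between two latent time series."""
    u_c, v_c = u - u.mean(axis=0), v - v.mean(axis=0)
    return (u_c.T @ v_c) / (len(u) - 1)

# Disentanglement loss: drive shared/private cross-covariance toward zero
# so the two subspaces carry distinct information.
disent_loss = (np.sum(cross_cov(s_a, p_a) ** 2)
               + np.sum(cross_cov(s_b, p_b) ** 2))

total_loss = align_loss + disent_loss
```

In the actual model these encoders are deep networks trained end-to-end with a reconstruction term; the sketch only illustrates how the two auxiliary losses constrain the latent subspaces.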
Related papers
- BaRISTA: Brain Scale Informed Spatiotemporal Representation of Human Intracranial Neural Activity [1.2744523252873352]
We propose a new spatiotemporal transformer model of neural activity and a corresponding self-supervised latent reconstruction task. We show that adjusting the spatial scale for both token encoding and masked reconstruction significantly impacts downstream decoding. Our method allows for region-level token encoding while also maintaining accurate channel-level neural reconstruction.
arXiv Detail & Related papers (2025-12-13T02:19:33Z) - A Disentangled Low-Rank RNN Framework for Uncovering Neural Connectivity and Dynamics [18.858997104504784]
Disentangled Recurrent Neural Network (DisRNN) is a generative lrRNN framework that assumes group-wise independence among latent dynamics. DisRNN consistently improves the disentanglement and interpretability of learned neural latent trajectories in low-dimensional space.
arXiv Detail & Related papers (2025-11-17T20:49:58Z) - Functional embeddings enable Aggregation of multi-area SEEG recordings over subjects and sessions [0.11083289076967894]
We propose a representation-learning framework that learns a subject-agnostic functional identity for each electrode from multi-region local field potentials. We evaluate this framework on a 20-subject dataset spanning basal ganglia-thalamic regions collected during flexible rest/movement recording sessions.
arXiv Detail & Related papers (2025-10-31T01:23:05Z) - Coupled Transformer Autoencoder for Disentangling Multi-Region Neural Latent Dynamics [8.294287754474894]
Simultaneous recordings from thousands of neurons across multiple brain areas reveal rich mixtures of activity that are shared between regions and dynamics that are unique to each region. We introduce the Coupled Transformer Autoencoder (CTAE), a sequence model that addresses both (i) non-stationary, non-linear dynamics and (ii) separation of shared versus region-specific structure in a single framework. CTAE employs transformer encoders and decoders to capture long-range neural dynamics and explicitly partitions each region's latent space into shared and private subspaces.
arXiv Detail & Related papers (2025-10-22T22:47:15Z) - Training Deep Normalization-Free Spiking Neural Networks with Lateral Inhibition [52.59263087086756]
Training deep spiking neural networks (SNNs) has critically depended on explicit normalization schemes, such as batch normalization. We propose a normalization-free learning framework that incorporates lateral inhibition inspired by cortical circuits. We show that our framework enables stable training of deep SNNs with biological realism and achieves competitive performance without resorting to explicit normalization.
arXiv Detail & Related papers (2025-09-27T11:11:30Z) - Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
arXiv Detail & Related papers (2025-07-22T18:20:56Z) - Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors, such as inertia, damping, a learned potential function, and external forces, to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
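The underdamped Langevin equation referenced above couples position and velocity: dx = v dt, dv = (-∇U(x) - γv) dt + √(2γT) dW. A minimal Euler-Maruyama sketch with a hypothetical quadratic potential U(x) = ½x² (not the learned potential of the paper) illustrates the dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama integration of the underdamped Langevin equation
#   dx = v dt
#   dv = (-grad U(x) - gamma * v) dt + sqrt(2 * gamma * temp) dW
# with a hypothetical quadratic potential U(x) = 0.5 * x**2.
gamma, temp, dt, steps = 1.0, 0.5, 0.01, 5000

def grad_U(x):
    return x  # gradient of U(x) = 0.5 * x**2

x, v = 2.0, 0.0          # start displaced from the potential minimum
xs = np.empty(steps)
for t in range(steps):
    x += v * dt
    v += (-grad_U(x) - gamma * v) * dt \
         + np.sqrt(2.0 * gamma * temp * dt) * rng.standard_normal()
    xs[t] = x

# Inertia (the velocity variable) and damping (gamma) shape the trajectory,
# which relaxes toward the minimum at x = 0 and then fluctuates thermally.
```

In LangevinFlow the potential is a learned function and the dynamics run in latent space; the sketch only shows the integration scheme such a prior implies.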
arXiv Detail & Related papers (2025-07-15T17:57:48Z) - Towards Unified Neural Decoding with Brain Functional Network Modeling [34.13766828046489]
We present Multi-individual Brain Region-Aggregated Network (MIBRAIN), a neural decoding framework. MIBRAIN constructs a whole functional brain network model by integrating intracranial neurophysiological recordings across multiple individuals. Our framework paves the way for robust neural decoding across individuals and offers insights for practical clinical applications.
arXiv Detail & Related papers (2025-05-30T12:10:37Z) - Learning Delays Through Gradients and Structure: Emergence of Spatiotemporal Patterns in Spiking Neural Networks [0.06752396542927405]
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays through two approaches.
In the latter approach, the network selects and prunes connections, optimizing the delays in sparse connectivity settings.
Our results demonstrate the potential of combining delay learning with dynamic pruning to develop efficient SNN models for temporal data processing.
arXiv Detail & Related papers (2024-07-07T11:55:48Z) - Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE)
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z) - From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled vector field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Expressive architectures enhance interpretability of dynamics-based neural population models [2.294014185517203]
We evaluate the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets.
We found that SAEs with widely-used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality.
arXiv Detail & Related papers (2022-12-07T16:44:26Z) - Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.