Dynamical modeling of nonlinear latent factors in multiscale neural activity with real-time inference
- URL: http://arxiv.org/abs/2512.12462v1
- Date: Sat, 13 Dec 2025 21:20:21 GMT
- Title: Dynamical modeling of nonlinear latent factors in multiscale neural activity with real-time inference
- Authors: Eray Erturk, Maryam M. Shanechi
- Abstract summary: We develop a learning framework that can enable real-time decoding of target variables from multiple simultaneously recorded neural time-series modalities. We show that our model can aggregate information across modalities with different timescales and distributions and missing samples to improve real-time target decoding.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-time decoding of target variables from multiple simultaneously recorded neural time-series modalities, such as discrete spiking activity and continuous field potentials, is important across various neuroscience applications. However, a major challenge for doing so is that different neural modalities can have different timescales (i.e., sampling rates) and different probabilistic distributions, or can even be missing at some time-steps. Existing nonlinear models of multimodal neural activity do not address different timescales or missing samples across modalities. Further, some of these models do not allow for real-time decoding. Here, we develop a learning framework that can enable real-time recursive decoding while nonlinearly aggregating information across multiple modalities with different timescales and distributions and with missing samples. This framework consists of 1) a multiscale encoder that nonlinearly aggregates information after learning within-modality dynamics to handle different timescales and missing samples in real time, 2) a multiscale dynamical backbone that extracts multimodal temporal dynamics and enables real-time recursive decoding, and 3) modality-specific decoders to account for different probabilistic distributions across modalities. In both simulations and three distinct multiscale brain datasets, we show that our model can aggregate information across modalities with different timescales and distributions and missing samples to improve real-time target decoding. Further, our method outperforms various linear and nonlinear multimodal benchmarks in doing so.
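The abstract describes three components: a multiscale encoder that aggregates modalities with different sampling rates and missing samples, a dynamical backbone that updates a latent state recursively for real-time use, and modality-specific decoders. The toy sketch below illustrates that data flow only; all function names, dimensions, and update rules are illustrative assumptions, not the authors' architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT = 4  # hypothetical latent dimension

def encode(spikes, lfp):
    """Multiscale encoder (sketch): nonlinearly aggregate whichever
    modalities are present at this step; either may be missing (None)."""
    feats = []
    if spikes is not None:
        feats.append(np.tanh(spikes.mean()))  # discrete spiking, fast timescale
    if lfp is not None:
        feats.append(np.tanh(lfp.mean()))     # continuous field potential, slow timescale
    return float(np.mean(feats)) if feats else 0.0

def backbone_step(z, u):
    """Dynamical backbone (sketch): one recursive latent-state update,
    so decoding can run step by step in real time."""
    A = 0.9 * np.eye(LATENT)          # toy linear dynamics matrix
    return np.tanh(A @ z + 0.1 * np.full(LATENT, u))

def decode_target(z):
    """Modality-specific decoder head (sketch) for a continuous target."""
    w = np.linspace(0.5, 1.0, LATENT)  # toy readout weights
    return float(w @ z)

# Run 20 steps: LFP arrives only every 4th step (slower timescale),
# and spikes are occasionally missing, yet decoding proceeds every step.
z = np.zeros(LATENT)
preds = []
for t in range(20):
    spikes = rng.poisson(2.0, size=10) if t % 5 != 0 else None
    lfp = rng.normal(size=8) if t % 4 == 0 else None
    z = backbone_step(z, encode(spikes, lfp))
    preds.append(decode_target(z))

print(len(preds))  # one decoded value per time step
```

The point of the sketch is structural: the encoder tolerates missing modalities, the backbone keeps a single recursive state, and the decoder reads the target from that state at every step regardless of which modalities arrived.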
Related papers
- Unsupervised learning of multiscale switching dynamical system models from multimodal neural data [2.714583452862024]
Neural population activity often exhibits regime-dependent non-stationarity in the form of switching dynamics. We develop a novel unsupervised learning algorithm that learns the parameters of switching multiscale dynamical system models using only multiscale neural observations.
arXiv Detail & Related papers (2025-12-14T23:49:12Z)
- Multi-modal Gaussian Process Variational Autoencoders for Neural and Behavioral Data [0.9622208190558754]
We propose an unsupervised latent variable model which extracts temporally evolving shared and independent latents for distinct, simultaneously recorded experimental modalities.
We validate our model on simulated multi-modal data consisting of Poisson spike counts and MNIST images that scale and rotate smoothly over time.
We show that the multi-modal GP-VAE not only accurately identifies the shared and independent latent structure across modalities, but also provides good reconstructions of both images and neural rates on held-out trials.
arXiv Detail & Related papers (2023-10-04T19:04:55Z)
- Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics [6.848555909346641]
We provide an efficient framework to combine various sources of information for optimal reconstruction.
Our framework is fully generative, producing, after training, trajectories with the same geometrical and temporal structure as those of the ground truth system.
arXiv Detail & Related papers (2022-12-15T15:21:28Z)
- Multi-Task Dynamical Systems [5.881614676989161]
Time series datasets are often composed of a variety of sequences from the same domain, but from different entities.
This paper describes the multi-task dynamical system (MTDS), a general methodology for extending multi-task learning (MTL) to time series models.
We apply the MTDS to motion-capture data of people walking in various styles using a multi-task recurrent neural network (RNN), and to patient drug-response data using a multi-task pharmacodynamic model.
arXiv Detail & Related papers (2022-10-08T13:37:55Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how combining recent results on equivariant representation learning over structured spaces with classical results from causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks [62.48782506095565]
We show that due to the greedy nature of learning in deep neural networks, models tend to rely on just one modality while under-fitting the other modalities.
We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning.
arXiv Detail & Related papers (2022-02-10T20:11:21Z)
- Identifying nonlinear dynamical systems from multi-modal time series data [3.721528851694675]
Empirically observed time series in physics, biology, or medicine are commonly generated by some underlying dynamical system (DS). There is increasing interest in harnessing machine learning methods to reconstruct this latent DS in a fully data-driven, unsupervised way.
Here we propose a general framework for multi-modal data integration for the purpose of nonlinear DS identification and cross-modal prediction.
arXiv Detail & Related papers (2021-11-04T14:59:28Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- Interpretable Deep Representation Learning from Temporal Multi-view Data [4.2179426073904995]
We propose a generative model based on variational autoencoder and a recurrent neural network to infer the latent dynamics for multi-view temporal data.
We invoke our proposed model for analyzing three datasets on which we demonstrate the effectiveness and the interpretability of the model.
arXiv Detail & Related papers (2020-05-11T15:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.