Accurate and Efficient Simulation of Very High-Dimensional Neural Mass Models with Distributed-Delay Connectome Tensors
- URL: http://arxiv.org/abs/2009.07479v6
- Date: Fri, 10 Jun 2022 03:05:22 GMT
- Title: Accurate and Efficient Simulation of Very High-Dimensional Neural Mass Models with Distributed-Delay Connectome Tensors
- Authors: A. González-Mitjans, D. Paz-Linares, A. Areces-Gonzalez, M. Li, Y. Wang, M.L. Bringas-Vega, and P.A. Valdés-Sosa
- Abstract summary: This paper introduces methods that efficiently integrate high-dimensional Neural Mass Models (NMMs) specified by two essential components.
The first is the set of nonlinear Random Differential Equations governing the dynamics of each neural mass.
The second is the highly sparse three-dimensional Connectome Tensor (CT) that encodes the strength of the connections and the delays of information transfer along the axons of each connection.
- Score: 0.23453441553817037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces methods and a novel toolbox that efficiently
integrate any high-dimensional Neural Mass Model (NMM) specified by two
essential components. The first is the set of nonlinear Random Differential
Equations (RDEs) governing the dynamics of each neural mass. The second is the
highly sparse three-dimensional Connectome Tensor (CT) that encodes the
strength of the connections and the delays of information transfer along the
axons of each connection. Semi-analytical integration of the RDEs is carried
out with the Local Linearization scheme for each neural mass model, the only
scheme that guarantees dynamical fidelity to the original continuous-time
nonlinear dynamics. It also seamlessly allows modeling CTs with distributed
delays at any level of complexity or realism, as shown by the Moore-Penrose
diagram of the algorithm. We achieve high computational efficiency by using a
tensor representation of the model that leverages semi-analytic expressions to
integrate the RDEs underlying the NMM. We discretize the state equation with
Local Linearization via an algebraic formulation. This approach increases
numerical integration speed and efficiency, a crucial aspect of large-scale
NMM simulations. To illustrate the usefulness of the toolbox, we simulate both
a single Zetterberg-Jansen-Rit (ZJR) cortical column and an interconnected
population of such columns. These examples illustrate the consequences of
modifying the CT in these models, especially of introducing distributed
delays. We provide an open-source Matlab live script for the toolbox.
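
The abstract's two ingredients invite concrete illustration. First, the sparse three-dimensional CT: the minimal sketch below (in Python/NumPy rather than the toolbox's Matlab, with hypothetical sizes and variable names) shows one way a tensor indexed as C[target, source, delay-bin] can encode both connection strengths and distributed delays, and how the delayed network input is assembled from a history buffer. This illustrates the data structure described in the abstract, not the toolbox's actual API.

```python
import numpy as np

# Hypothetical sizes: N neural masses, D delay bins of width dt seconds.
N, D, dt = 4, 20, 1e-3

# Connectome Tensor: C[i, j, d] = strength of the j -> i connection whose
# signal arrives with delay d * dt. A distributed delay occupies a whole
# kernel along the third axis; a discrete delay is a single nonzero bin.
C = np.zeros((N, N, D))

# Connection 1 -> 0 with a bell-shaped delay kernel centered at bin 8.
kernel = np.exp(-0.5 * ((np.arange(D) - 8.0) / 2.0) ** 2)
C[0, 1, :] = 0.5 * kernel / kernel.sum()

# Connection 3 -> 2 with a single discrete delay of 5 bins.
C[2, 3, 5] = 0.8

def delayed_input(C, history):
    """Network drive u_i(t) = sum_{j,d} C[i, j, d] * x_j(t - d * dt).

    history has shape (D, N): history[d] is the state vector x(t - d * dt).
    The einsum contracts over both source mass j and delay bin d.
    """
    return np.einsum('ijd,dj->i', C, history)

rng = np.random.default_rng(0)
history = rng.standard_normal((D, N))   # placeholder past states
print(delayed_input(C, history))
```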
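Second, the Local Linearization (LL) scheme. For dx/dt = f(x), one LL step of size h is x_{k+1} = x_k + J_k^{-1}(e^{h J_k} - I) f(x_k), with J_k the Jacobian of f at x_k; for linear systems the step is exact. A minimal sketch (Python/SciPy; the paper's algebraic tensor formulation is not reproduced) computes the step through an augmented matrix exponential, which avoids explicitly inverting a possibly singular Jacobian:

```python
import numpy as np
from scipy.linalg import expm

def ll_step(f, jac, x, h):
    """One Local Linearization step for dx/dt = f(x).

    Uses the block identity
        expm(h * [[J, f(x)], [0, 0]]) = [[e^{hJ}, J^{-1} (e^{hJ} - I) f(x)],
                                         [0,      1                      ]]
    so the LL increment is read off the top-right block without ever
    forming J^{-1} explicitly (J may be singular).
    """
    n = x.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = jac(x)
    M[:n, n] = f(x)
    return x + expm(h * M)[:n, n]

# Sanity check on a linear system, where LL is exact: dx/dt = A x.
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
x0 = np.array([1.0, 0.0])
print(ll_step(lambda x: A @ x, lambda x: A, x0, 0.1))
print(expm(0.1 * A) @ x0)   # identical for linear dynamics
```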
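Finally, the two pieces combine to step a single cortical column of the Zetterberg-Jansen-Rit family. The sketch below uses the standard deterministic Jansen-Rit equations with textbook parameter values (Jansen & Rit, 1995), a finite-difference Jacobian, and the LL stepper from the previous sketch; it is a toy stand-in for the toolbox's RDE machinery, not a reproduction of it.

```python
import numpy as np
from scipy.linalg import expm

# Textbook Jansen-Rit parameters.
A_, B_ = 3.25, 22.0         # excitatory / inhibitory synaptic gains (mV)
a_, b_ = 100.0, 50.0        # inverse synaptic time constants (1/s)
Cc = 135.0                  # global connectivity constant
C1, C2, C3, C4 = Cc, 0.8 * Cc, 0.25 * Cc, 0.25 * Cc
e0, v0, r = 2.5, 6.0, 0.56  # sigmoid: half max rate, threshold (mV), slope
p = 220.0                   # constant external input (pulses/s)

def S(v):
    """Sigmoidal potential-to-firing-rate conversion."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def f(y):
    """Jansen-Rit vector field; y = (y0, y1, y2, dy0, dy1, dy2)."""
    y0, y1, y2, y3, y4, y5 = y
    return np.array([
        y3, y4, y5,
        A_ * a_ * S(y1 - y2)            - 2 * a_ * y3 - a_**2 * y0,
        A_ * a_ * (p + C2 * S(C1 * y0)) - 2 * a_ * y4 - a_**2 * y1,
        B_ * b_ * C4 * S(C3 * y0)       - 2 * b_ * y5 - b_**2 * y2,
    ])

def jac(y, eps=1e-6):
    """Forward-difference Jacobian (an analytic one works just as well)."""
    fy, J = f(y), np.empty((y.size, y.size))
    for k in range(y.size):
        e = np.zeros(y.size)
        e[k] = eps
        J[:, k] = (f(y + e) - fy) / eps
    return J

def ll_step(x, h):
    """Local Linearization step via the augmented exponential (see above)."""
    n = x.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = jac(x)
    M[:n, n] = f(x)
    return x + expm(h * M)[:n, n]

h, T = 1e-3, 2.0
y = np.zeros(6)
eeg = []                     # pyramidal potential y1 - y2, an EEG proxy
for _ in range(int(T / h)):
    y = ll_step(y, h)
    eeg.append(y[1] - y[2])
```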
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - A Multi-Grained Symmetric Differential Equation Model for Learning
Protein-Ligand Binding Dynamics [74.93549765488103]
In drug discovery, molecular dynamics simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding.
We show the efficiency and effectiveness of NeuralMD, with a 2000× speedup over standard numerical MD simulation, outperforming all other ML approaches by up to 80% under the stability metric.
arXiv Detail & Related papers (2024-01-26T09:35:17Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Git Re-Basin: Merging Models modulo Permutation Symmetries [3.5450828190071655]
We show how simple algorithms can be used to align the units of independently trained networks in practice.
We give the first (to our knowledge) demonstration of zero-barrier mode connectivity between independently trained models.
We also discuss shortcomings in the linear mode connectivity hypothesis.
arXiv Detail & Related papers (2022-09-11T10:44:27Z) - Symplectically Integrated Symbolic Regression of Hamiltonian Dynamical
Systems [11.39873640706974]
Symplectically Integrated Symbolic Regression (SISR) is a novel technique for learning physical governing equations from data.
SISR employs a deep symbolic regression approach, using a multi-layer LSTM-RNN with mutation to probabilistically sample Hamiltonian symbolic expressions.
arXiv Detail & Related papers (2022-09-04T03:17:40Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the
Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Lower Bounds for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - Partitioning sparse deep neural networks for scalable training and
inference [8.282177703075453]
State-of-the-art deep neural networks (DNNs) have significant computational and data management requirements.
Sparsification and pruning methods are shown to be effective in removing a large fraction of connections in DNNs.
The resulting sparse networks present unique challenges to further improve the computational efficiency of training and inference in deep learning.
arXiv Detail & Related papers (2021-04-23T20:05:52Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized Structural Equation Models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale, with a deep neural network as the predictive model.
Our algorithm requires far fewer communication rounds in theory.
Experiments on several datasets confirm the theory and demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Estimation of sparse Gaussian graphical models with hidden clustering
structure [8.258451067861932]
We propose a model to estimate sparse Gaussian graphical models with hidden clustering structure.
We develop a symmetric Gauss-Seidel-based alternating direction method of multipliers.
Numerical experiments on both synthetic and real data demonstrate the good performance of our model.
arXiv Detail & Related papers (2020-04-17T08:43:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.