Branched Latent Neural Maps
- URL: http://arxiv.org/abs/2308.02599v2
- Date: Fri, 29 Sep 2023 18:06:24 GMT
- Title: Branched Latent Neural Maps
- Authors: Matteo Salvador, Alison Lesley Marsden
- Abstract summary: Branched Latent Neural Maps (BLNMs) learn finite dimensional input-output maps encoding complex physical processes.
BLNMs show excellent generalization properties with small training datasets and short training times on a single processor.
In the online phase, the BLNM allows for 5000x faster real-time simulations of cardiac electrophysiology on a single core standard computer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Branched Latent Neural Maps (BLNMs) to learn finite dimensional
input-output maps encoding complex physical processes. A BLNM is defined by a
simple and compact feedforward partially-connected neural network that
structurally disentangles inputs with different intrinsic roles, such as the
time variable from model parameters of a differential equation, while
transferring them into a generic field of interest. BLNMs leverage latent
outputs to enhance the learned dynamics and break the curse of dimensionality
by showing excellent generalization properties with small training datasets and
short training times on a single processor. Indeed, their generalization error
remains comparable regardless of the adopted discretization during the testing
phase. Moreover, the partial connections significantly reduce the number of
tunable parameters. We show the capabilities of BLNMs in a challenging test
case involving electrophysiology simulations in a biventricular cardiac model
of a pediatric patient with hypoplastic left heart syndrome. The model includes
a 1D Purkinje network for fast conduction and a 3D heart-torso geometry.
Specifically, we trained BLNMs on 150 in silico generated 12-lead
electrocardiograms (ECGs) while spanning 7 model parameters at both the cell
scale and the organ level. Although the 12-lead ECGs manifest very fast
dynamics with sharp gradients, after automatic hyperparameter tuning the
optimal BLNM, trained in less than 3 hours on a single CPU, retains just 7
hidden layers and 19 neurons per layer. The resulting mean square error is on
the order of $10^{-4}$ on a test dataset comprising 50 electrophysiology
simulations. In the online phase, the BLNM allows for 5000x faster real-time
simulations of cardiac electrophysiology on a single core standard computer and
can be used to solve inverse problems via global optimization in a few seconds
of computational time.
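To make the branching idea concrete, here is a minimal PyTorch sketch of a partially-connected feedforward map in the spirit of a BLNM. The branch sizes, activations, and latent-output handling are illustrative assumptions, not the authors' exact design; only the 7-hidden-layer/19-neuron budget and the (time, 7 parameters) -> 12-lead ECG signature are taken from the abstract.

```python
# Illustrative BLNM-style map: separate input branches keep the time
# variable structurally disentangled from the model parameters, and the
# output is widened with extra latent channels meant to enrich the dynamics.
import torch
import torch.nn as nn

class BranchedLatentMap(nn.Module):
    def __init__(self, n_params=7, n_outputs=12, n_latent=3, width=19, depth=7):
        super().__init__()
        half = width // 2
        # First hidden layer is partially connected: one branch per input role.
        self.time_branch = nn.Sequential(nn.Linear(1, half), nn.Tanh())
        self.param_branch = nn.Sequential(nn.Linear(n_params, width - half), nn.Tanh())
        # Remaining depth-1 hidden layers form a shared fully-connected trunk.
        trunk = []
        for _ in range(depth - 1):
            trunk += [nn.Linear(width, width), nn.Tanh()]
        self.trunk = nn.Sequential(*trunk)
        # Physical outputs (12 ECG leads) plus the latent outputs.
        self.head = nn.Linear(width, n_outputs + n_latent)
        self.n_outputs = n_outputs

    def forward(self, t, params):
        z = torch.cat([self.time_branch(t), self.param_branch(params)], dim=-1)
        y = self.head(self.trunk(z))
        return y[..., : self.n_outputs]  # latent channels are discarded here

model = BranchedLatentMap()
ecg = model(torch.rand(64, 1), torch.rand(64, 7))  # (64, 12): leads at 64 times
```

One plausible reading of how latent outputs "enhance the learned dynamics" is sketched above: only the physical channels are fit to data, while the latent channels give the network extra internal degrees of freedom.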
Related papers
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the input-output relationship of a detailed cortical neuron with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Real-time whole-heart electromechanical simulations using Latent Neural Ordinary Differential Equations [2.208529796170897]
We use Latent Neural Ordinary Differential Equations to learn the temporal pressure-volume dynamics of a heart failure patient.
Our surrogate model based on LNODEs is trained on 400 3D-0D whole-heart closed-loop electromechanical simulations.
This paper introduces the most advanced surrogate model of cardiac function available in the literature.
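As an illustration for this entry, a minimal latent neural ODE sketch follows (dependency-free forward-Euler integration; the encoder, decoder, and two-dimensional pressure-volume observable are assumptions, not the paper's 3D-0D setup).

```python
# Minimal latent neural ODE: encode an initial state into a latent space,
# integrate a learned vector field there, and decode back to observables.
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    def __init__(self, obs_dim, latent_dim=8, hidden=32):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)   # initial state -> latent
        self.dynamics = nn.Sequential(                  # learned dz/dt
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim))
        self.decoder = nn.Linear(latent_dim, obs_dim)   # latent -> observables

    def forward(self, y0, t):
        z = self.encoder(y0)
        outputs, steps = [], t[1:] - t[:-1]
        for dt in steps:                                # explicit Euler in latent space
            z = z + dt * self.dynamics(z)
            outputs.append(self.decoder(z))
        return torch.stack(outputs, dim=1)              # (batch, len(t)-1, obs_dim)

model = LatentODE(obs_dim=2)                            # e.g. a pressure-volume pair
traj = model(torch.rand(4, 2), torch.linspace(0.0, 1.0, 50))
```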
arXiv Detail & Related papers (2023-06-08T16:13:29Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Efficient ECG-based Atrial Fibrillation Detection via Parameterised Hypercomplex Neural Networks [11.964843902569925]
Atrial fibrillation (AF) is the most common cardiac arrhythmia and is associated with a high risk of serious conditions such as stroke.
Wearable devices with embedded automatic and timely AF assessment from electrocardiograms (ECGs) have shown promise in preventing life-threatening situations.
Although deep neural networks have demonstrated superior model performance, their use on wearable devices is limited by the trade-off between model performance and complexity.
arXiv Detail & Related papers (2022-10-27T14:24:48Z)
- DLDNN: Deterministic Lateral Displacement Design Automation by Neural Networks [1.8365768330479992]
This paper investigates a fast, versatile design automation platform to address deterministic lateral displacement (DLD) problems.
Convolutional and artificial neural networks were employed to learn the velocity fields and critical diameters of a range of DLD configurations.
The developed tool was tested for 12 critical conditions and performed reliably with errors of less than 4%.
arXiv Detail & Related papers (2022-08-30T14:38:17Z)
- Atrial Fibrillation Detection Using Weight-Pruned, Log-Quantised Convolutional Neural Networks [25.160063477248904]
A convolutional neural network model is developed for detecting atrial fibrillation from electrocardiogram signals.
The model demonstrates high performance despite being trained on limited, variable-length input data.
The final model achieved a 91.1% model compression ratio while maintaining a high accuracy of 91.7%, with less than 1% accuracy loss.
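As a toy illustration of the compression recipe in this entry, the sketch below prunes a layer by weight magnitude and then snaps the surviving weights to signed powers of two; the layer and the pruning amount are placeholders, not the paper's pipeline.

```python
# Toy sketch: prune 90% of weights by magnitude, then log-quantise the rest.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Conv1d(in_channels=1, out_channels=8, kernel_size=5)
prune.l1_unstructured(layer, name="weight", amount=0.9)  # mask the smallest 90%
prune.remove(layer, "weight")                            # bake the mask in

def log_quantise(w: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Nearest signed power of two: multiplications become bit shifts.
    mag = torch.clamp(w.abs(), min=eps)
    q = torch.sign(w) * torch.exp2(torch.round(torch.log2(mag)))
    return torch.where(w == 0, w, q)  # pruned weights stay exactly zero

with torch.no_grad():
    layer.weight.copy_(log_quantise(layer.weight))
```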
arXiv Detail & Related papers (2022-06-14T11:47:04Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Generalizing electrocardiogram delineation: training convolutional neural networks with synthetic data augmentation [63.51064808536065]
Existing databases for ECG delineation are small, insufficient both in size and in the range of pathological conditions they represent.
This article has two main contributions. First, a pseudo-synthetic data generation algorithm was developed, based on probabilistically composing ECG traces from "pools" of fundamental segments cropped from the original databases, together with a set of rules for arranging them into coherent synthetic traces.
Second, two novel segmentation-based loss functions were developed, which attempt to enforce the prediction of an exact number of independent structures and to produce tighter segmentation boundaries by focusing on a reduced number of samples.
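A toy version of the first contribution might look like the sketch below; the segment pools are random stand-ins rather than crops from real databases, and the arrangement rule is a single fixed ordering.

```python
# Toy pseudo-synthetic composition: draw fundamental segments from pools
# and concatenate them into beats; pools here are random stand-ins.
import numpy as np

rng = np.random.default_rng(seed=0)
segment_lengths = {"baseline": 50, "P": 30, "QRS": 25, "T": 40}
pools = {name: [rng.standard_normal(n) for _ in range(20)]
         for name, n in segment_lengths.items()}
order = ["baseline", "P", "QRS", "T"]  # one simple arrangement rule

def synth_trace(n_beats: int = 3) -> np.ndarray:
    """Compose a trace by sampling one segment per slot in each beat."""
    parts = [pools[name][rng.integers(len(pools[name]))]
             for _ in range(n_beats) for name in order]
    return np.concatenate(parts)

trace = synth_trace()  # length = 3 * (50 + 30 + 25 + 40) = 435 samples
```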
arXiv Detail & Related papers (2021-11-25T10:11:41Z)
- Deep learning-based reduced order models in cardiac electrophysiology [0.0]
We propose a new, nonlinear approach which exploits deep learning (DL) algorithms to obtain accurate and efficient reduced order models (ROMs).
Our DL approach combines deep feedforward neural networks (NNs) and convolutional autoencoders (AEs).
We show that the proposed DL-ROM framework can efficiently provide solutions to parametrized electrophysiology problems, thus enabling multi-scenario analysis in pathological cases.
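A hedged sketch of the feedforward-plus-autoencoder combination for a 1D toy field follows; all sizes and layer choices are assumptions rather than the paper's architecture.

```python
# Illustrative DL-ROM-style sketch: a convolutional autoencoder compresses
# field snapshots, while a feedforward net maps (time, parameters) to the
# latent code; all sizes are assumptions for a 1D toy field.
import torch
import torch.nn as nn

class DLROM(nn.Module):
    def __init__(self, n_params: int, field_len: int = 64, latent: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(                  # snapshot -> latent code
            nn.Conv1d(1, 4, kernel_size=5, stride=2, padding=2), nn.ELU(),
            nn.Flatten(), nn.Linear(4 * (field_len // 2), latent))
        self.decoder = nn.Sequential(                  # latent code -> snapshot
            nn.Linear(latent, field_len), nn.Unflatten(1, (1, field_len)))
        self.dyn = nn.Sequential(                      # (t, params) -> latent code
            nn.Linear(1 + n_params, 32), nn.Tanh(), nn.Linear(32, latent))

    def forward(self, t, params):
        z = self.dyn(torch.cat([t, params], dim=-1))
        return self.decoder(z)

# Training (not shown) would penalise both the reconstruction error and the
# mismatch between dyn(t, params) and encoder(snapshot).
model = DLROM(n_params=2)
field = model(torch.rand(8, 1), torch.rand(8, 2))  # (8, 1, 64) snapshot
```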
arXiv Detail & Related papers (2020-06-02T23:05:03Z)