Few-shot Generation of Personalized Neural Surrogates for Cardiac
Simulation via Bayesian Meta-Learning
- URL: http://arxiv.org/abs/2210.02967v1
- Date: Thu, 6 Oct 2022 14:59:27 GMT
- Title: Few-shot Generation of Personalized Neural Surrogates for Cardiac
Simulation via Bayesian Meta-Learning
- Authors: Xiajun Jiang, Zhiyuan Li, Ryan Missel, Md Shakil Zaman, Brian Zenger,
Wilson W. Good, Rob S. MacLeod, John L. Sapp, Linwei Wang
- Abstract summary: We present a new concept to achieve personalized neural surrogates in a single coherent framework of meta-learning.
At test time, metaPNS delivers a personalized neural surrogate by fast feed-forward embedding of a small and flexible number of data available from an individual.
metaPNS was able to improve personalization and predictive accuracy in comparison to conventionally-optimized cardiac simulation models.
- Score: 6.978382728087236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clinical adoption of personalized virtual heart simulations faces challenges
in model personalization and expensive computation. While an ideal solution is
an efficient neural surrogate that at the same time is personalized to an
individual subject, the state-of-the-art is either concerned with personalizing
an expensive simulation model, or learning an efficient yet generic surrogate.
This paper presents a completely new concept to achieve personalized neural
surrogates in a single coherent framework of meta-learning (metaPNS). Instead
of learning a single neural surrogate, we pursue the process of learning a
personalized neural surrogate using a small amount of context data from a
subject, in a novel formulation of few-shot generative modeling underpinned by:
1) a set-conditioned neural surrogate for cardiac simulation that, conditioned
on subject-specific context data, learns to generate query simulations not
included in the context set, and 2) a meta-model of amortized variational
inference that learns to condition the neural surrogate via simple feed-forward
embedding of context data. At test time, metaPNS delivers a personalized neural
surrogate by fast feed-forward embedding of a small and flexible number of data
available from an individual, achieving -- for the first time --
personalization and surrogate construction for expensive simulations in one
end-to-end learning framework. Synthetic and real-data experiments demonstrated
that metaPNS was able to improve personalization and predictive accuracy in
comparison to conventionally-optimized cardiac simulation models, at a fraction
of computation.
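The key mechanism in the abstract is a surrogate conditioned on a context *set* of flexible size, embedded in a single feed-forward pass. A minimal pure-Python sketch of that conditioning step, using mean pooling for permutation invariance (the per-pair encoder, weights, and dimensions here are illustrative stand-ins, not the paper's architecture):

```python
import math
import random

def embed_pair(x, y, w):
    # Toy per-pair encoder: a fixed random projection followed by tanh.
    return [math.tanh(w[i][0] * x + w[i][1] * y) for i in range(len(w))]

def encode_context(context, w):
    """Mean-pool per-pair embeddings into one context vector.

    Mean pooling makes the embedding permutation-invariant and lets the
    context set be any size, mirroring how a set-conditioned surrogate
    can ingest a small, flexible number of subject-specific measurements.
    """
    embs = [embed_pair(x, y, w) for x, y in context]
    d = len(w)
    return [sum(e[i] for e in embs) / len(embs) for i in range(d)]

random.seed(0)
w = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(4)]
ctx = [(0.1, 0.5), (0.3, 0.2), (0.8, 0.9)]
z1 = encode_context(ctx, w)
z2 = encode_context(list(reversed(ctx)), w)
# Reordering the context set leaves the embedding unchanged.
assert all(abs(a - b) < 1e-12 for a, b in zip(z1, z2))
```

In the paper this embedding would parameterize the amortized variational posterior that conditions the surrogate; here it only demonstrates the set-invariance property that makes "a small and flexible number of data" workable.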
Related papers
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- Exploring hyperelastic material model discovery for human brain cortex: multivariate analysis vs. artificial neural network approaches [10.003764827561238]
This study aims to identify the most favorable material model for human brain tissue.
We apply artificial neural network and multiple regression methods to a generalization of widely accepted classic models.
arXiv Detail & Related papers (2023-10-16T18:49:59Z)
- Epistemic Modeling Uncertainty of Rapid Neural Network Ensembles for Adaptive Learning [0.0]
A new type of neural network is presented using the rapid neural network paradigm.
It is found that the proposed emulator embedded neural network trains near-instantaneously, typically without loss of prediction accuracy.
arXiv Detail & Related papers (2023-09-12T22:34:34Z)
- A Generative Modeling Framework for Inferring Families of Biomechanical Constitutive Laws in Data-Sparse Regimes [0.15658704610960567]
We propose a novel approach to efficiently infer families of relationships in data-sparse regimes.
Inspired by the concept of functional priors, we develop a generative adversarial network (GAN) that incorporates a neural operator as the generator and a fully-connected network as the discriminator.
arXiv Detail & Related papers (2023-05-04T22:07:27Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
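The mixing step described above can be pictured as an attention-weighted mixture of per-position dependency distributions. A toy sketch (the vocabulary size, attention weights, and distributions are illustrative, not taken from the paper):

```python
def mix_next_token(dep_dists, attn_weights):
    """Next-token distribution as an attention-weighted mixture of the
    dependency-modeling distributions attached to previous positions."""
    assert abs(sum(attn_weights) - 1.0) < 1e-9  # weights form a convex combination
    vocab = len(dep_dists[0])
    return [sum(a * d[w] for a, d in zip(attn_weights, dep_dists))
            for w in range(vocab)]

# Two previous positions, vocabulary of 3 tokens.
dep_dists = [[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]]
attn = [0.6, 0.4]
p = mix_next_token(dep_dists, attn)
# A convex mixture of valid distributions is itself a valid distribution.
assert abs(sum(p) - 1.0) < 1e-9
```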
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
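An LIG-style acquisition score can be read as the divergence a candidate point induces in the model's latent distribution. A toy sketch under a diagonal-Gaussian assumption (the function names, posterior-update shorthand, and numbers below are illustrative, not the paper's implementation):

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL(N(mu_q, diag var_q) || N(mu_p, diag var_p)) for diagonal Gaussians."""
    return sum(
        0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
        for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p)
    )

def latent_information_gain(post_with, post_without):
    """Score a candidate by how far the latent belief moves when the
    candidate is (hypothetically) added to the context. Each argument
    is a (mean, variance) pair of the latent distribution."""
    (mu_q, var_q), (mu_p, var_p) = post_with, post_without
    return gaussian_kl(mu_q, var_q, mu_p, var_p)

# A candidate that sharpens and shifts the latent belief scores higher
# than one that barely changes it.
base = ([0.0, 0.0], [1.0, 1.0])
informative = ([0.8, -0.4], [0.3, 0.5])
redundant = ([0.05, 0.0], [0.95, 1.0])
assert latent_information_gain(informative, base) > latent_information_gain(redundant, base)
```

In a real NP-based model the two latent distributions would come from the amortized encoder run with and without the candidate; the point of the sketch is only the "KL in latent space" scoring rule.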
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoder.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a neural deep network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
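The Prototypical Network step above reduces to two operations: average the support embeddings per class into prototypes, then assign a query to the nearest prototype. A minimal sketch (the embeddings and subtype labels are made up for illustration):

```python
def prototypes(support):
    """Average the support embeddings per class to get one prototype each."""
    protos = {}
    for label, vecs in support.items():
        d = len(vecs[0])
        protos[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(d)]
    return protos

def classify(query, protos):
    """Assign the query to the nearest prototype by squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda lbl: sqdist(query, protos[lbl]))

# Two hypothetical disease subtypes, each with two support embeddings.
support = {
    "subtype_A": [[0.9, 0.1], [1.1, -0.1]],
    "subtype_B": [[-1.0, 0.8], [-0.8, 1.2]],
}
protos = prototypes(support)
assert classify([1.0, 0.0], protos) == "subtype_A"
assert classify([-0.9, 1.0], protos) == "subtype_B"
```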
This list is automatically generated from the titles and abstracts of the papers in this site.