Data-driven Uncertainty Quantification in Computational Human Head
Models
- URL: http://arxiv.org/abs/2110.15553v1
- Date: Fri, 29 Oct 2021 05:42:31 GMT
- Title: Data-driven Uncertainty Quantification in Computational Human Head
Models
- Authors: Kshitiz Upadhyay, Dimitris G. Giovanis, Ahmed Alshareef, Andrew K.
Knutsen, Curtis L. Johnson, Aaron Carass, Philip V. Bayly, Michael D.
Shields, K.T. Ramesh
- Abstract summary: Modern biofidelic head model simulations are associated with very high computational cost and high-dimensional inputs and outputs.
In this study, a two-stage, data-driven manifold learning-based framework is proposed for uncertainty quantification (UQ) of computational head models.
It is demonstrated that the surrogate models provide highly accurate approximations of the computational model while significantly reducing the computational cost.
- Score: 0.6745502291821954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational models of the human head are promising tools for
estimating the impact-induced response of the brain, and thus play an important
role in the prediction of traumatic brain injury. Modern biofidelic head model simulations
are associated with very high computational cost, and high-dimensional inputs
and outputs, which limits the applicability of traditional uncertainty
quantification (UQ) methods on these systems. In this study, a two-stage,
data-driven manifold learning-based framework is proposed for UQ of
computational head models. This framework is demonstrated on a 2D
subject-specific head model, where the goal is to quantify uncertainty in the
simulated strain fields (i.e., output), given variability in the material
properties of different brain substructures (i.e., input). In the first stage,
a data-driven method based on multi-dimensional Gaussian kernel-density
estimation and diffusion maps is used to generate realizations of the input
random vector directly from the available data. Computational simulations of a
small number of realizations provide input-output pairs for training
data-driven surrogate models in the second stage. The surrogate models employ
nonlinear dimensionality reduction using Grassmannian diffusion maps, Gaussian
process regression to create a low-cost mapping between the input random vector
and the reduced solution space, and geometric harmonics models for mapping
between the reduced space and the Grassmann manifold. It is demonstrated that
the surrogate models provide highly accurate approximations of the
computational model while significantly reducing the computational cost. Monte
Carlo simulations of the surrogate models are used for uncertainty propagation.
UQ of the strain fields highlights significant spatial variation in model
uncertainty and reveals key differences in uncertainty among commonly used
strain-based brain injury predictor variables.
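The two-stage workflow described in the abstract can be sketched in miniature. The actual framework combines multi-dimensional Gaussian KDE with diffusion maps for input sampling, and Grassmannian diffusion maps with geometric harmonics for the surrogate; the sketch below substitutes a plain Gaussian KDE sampler, a scalar toy function in place of the head simulation, and a basic Gaussian-process regressor. All data values, ranges, and kernel parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data set: each row is a vector of brain-substructure material
# properties (hypothetical values and ranges, not from the paper).
data = rng.uniform(0.5, 1.5, size=(30, 2))
bandwidth = 0.05  # assumed KDE bandwidth; in practice chosen from the data

def sample_inputs(n):
    """Stage 1 (simplified): draw realizations of the input random vector
    from a Gaussian KDE fitted to the available data."""
    idx = rng.integers(0, len(data), size=n)
    return data[idx] + rng.normal(scale=bandwidth, size=(n, data.shape[1]))

def expensive_model(x):
    """Toy scalar stand-in for one reduced coordinate of the simulated
    strain field (the real quantity comes from the head simulation)."""
    return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Stage 2 (simplified): train a Gaussian-process surrogate on a small
# number of "expensive" input-output pairs.
X_train = sample_inputs(20)
y_train = expensive_model(X_train)
K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def surrogate(x_new):
    """GP posterior mean: a cheap replacement for the full simulation."""
    return rbf(x_new, X_train) @ alpha

# Monte Carlo uncertainty propagation through the cheap surrogate.
X_mc = sample_inputs(5000)
y_mc = surrogate(X_mc)
print("mean =", y_mc.mean(), "std =", y_mc.std())
```

The key design point mirrored here is that only the 20 training runs require the expensive model; the 5000 Monte Carlo evaluations go through the surrogate.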
Related papers
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Application of probabilistic modeling and automated machine learning framework for high-dimensional stress field [1.073039474000799]
We propose an end-to-end approach that maps a high-dimensional, image-like input to an output of high dimensionality or its key statistics.
Our approach uses two main frameworks that perform three steps: (a) reduce the input and output from a high-dimensional space to a reduced, low-dimensional space; (b) model the input-output relationship in the low-dimensional space; and (c) enable the incorporation of domain-specific physical constraints as masks.
arXiv Detail & Related papers (2023-03-15T13:10:58Z)
- High-dimensional Measurement Error Models for Lipschitz Loss [2.6415509201394283]
We develop high-dimensional measurement error models for a class of Lipschitz loss functions.
Our estimator is designed to minimize the $L_1$ norm among all estimators belonging to suitable feasible sets.
We derive theoretical guarantees in terms of finite sample statistical error bounds and sign consistency, even when the dimensionality increases exponentially with the sample size.
arXiv Detail & Related papers (2022-10-26T20:06:05Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Generative models and Bayesian inversion using Laplace approximation [0.3670422696827525]
Recently, inverse problems have been solved using generative models as highly informative priors.
We show that derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model.
arXiv Detail & Related papers (2022-03-15T10:05:43Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence lower bounds for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture, with quantitative sensitivity to soma density and volume, remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Low-rank statistical finite elements for scalable model-data synthesis [0.8602553195689513]
statFEM acknowledges a priori model misspecification by embedding forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
This article overcomes the method's scalability hurdle by embedding a low-rank approximation of the underlying dense covariance matrix.
arXiv Detail & Related papers (2021-09-10T09:51:43Z)
- Supervised Autoencoders Learn Robust Joint Factor Models of Neural Activity [2.8402080392117752]
Neuroscience applications collect high-dimensional predictors corresponding to brain activity in different regions, along with behavioral outcomes.
Joint factor models for the predictors and outcomes are natural, but maximum likelihood estimates of these models can struggle in practice when there is model misspecification.
We propose an alternative inference strategy based on supervised autoencoders; rather than placing a probability distribution on the latent factors, we define them as an unknown function of the high-dimensional predictors.
arXiv Detail & Related papers (2020-04-10T19:31:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.