Exploring hyperelastic material model discovery for human brain cortex:
multivariate analysis vs. artificial neural network approaches
- URL: http://arxiv.org/abs/2310.10762v1
- Date: Mon, 16 Oct 2023 18:49:59 GMT
- Title: Exploring hyperelastic material model discovery for human brain cortex:
multivariate analysis vs. artificial neural network approaches
- Authors: Jixin Hou, Nicholas Filla, Xianyan Chen, Mir Jalil Razavi, Tianming
Liu, and Xianqiao Wang
- Abstract summary: This study aims to identify the most favorable material model for human brain tissue.
We apply artificial neural networks and multiple regression methods to a generalization of widely accepted classic models.
- Score: 10.003764827561238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional computational methods, such as the finite element analysis, have
provided valuable insights into uncovering the underlying mechanisms of brain
physical behaviors. However, precise predictions of brain physics require
effective constitutive models to represent the intricate mechanical properties
of brain tissue. In this study, we aimed to identify the most favorable
constitutive material model for human brain tissue. To achieve this, we applied
artificial neural network and multiple regression methods to a generalization
of widely accepted classic models, and compared the results obtained from these
two approaches. To evaluate the applicability and efficacy of the model, all
setups were kept consistent across both methods, except for the approach to
prevent potential overfitting. Our results demonstrate that artificial neural
networks are capable of automatically identifying accurate constitutive models
from given admissible estimators. Nonetheless, the five-term and two-term
neural network models trained under single-mode and multi-mode loading
scenarios were found to be suboptimal; multiple regression simplified them to
two-term and single-term models, respectively, with higher accuracy. Our
findings highlight the importance of hyperparameters for the
artificial neural network and emphasize the necessity for detailed
cross-validations of regularization parameters to ensure optimal selection at a
global level in the development of material constitutive models. This study
validates the applicability and accuracy of artificial neural networks, given
proper regularization, for automatically discovering constitutive material
models, as well as the benefits of traditional multivariable regression in
simplifying models without compromising accuracy.
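As a hedged illustration of the kind of model discovery the abstract describes (this is a minimal sketch, not the authors' code or data), selecting terms of a generalized hyperelastic strain-energy function from stress measurements can be cast as L1-regularized (Lasso) linear regression over a library of candidate terms, since the nominal stress is linear in the unknown coefficients. The candidate terms, the synthetic ground truth, and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Sketch of sparse constitutive-model discovery for incompressible uniaxial
# tension (stretch lam). For W = sum_k theta_k * psi_k(I1, I2), the nominal
# stress is linear in theta, so term selection reduces to Lasso regression.
def stress_basis(lam):
    I1 = lam**2 + 2.0 / lam        # first invariant, incompressible uniaxial
    g = lam - 1.0 / lam**2         # kinematic factor from dI1/dlam
    h = 1.0 - 1.0 / lam**3         # kinematic factor from dI2/dlam
    # candidate terms: neo-Hookean (I1-3), Mooney (I2-3), quadratic (I1-3)^2
    return np.column_stack([2.0 * g, 2.0 * h, 4.0 * (I1 - 3.0) * g])

def lasso_cd(X, y, alpha, n_sweeps=500):
    """Coordinate descent for 0.5/n * ||y - X t||^2 + alpha * ||t||_1."""
    n, p = X.shape
    t = np.zeros(p)
    col_sq = (X * X).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ t + X[:, j] * t[j]   # residual with feature j removed
            rho = X[:, j] @ r / n
            t[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return t

# Synthetic "experiment": a one-term neo-Hookean ground truth plus noise.
lam = np.linspace(1.05, 1.4, 50)
X = stress_basis(lam)
theta_true = np.array([0.3, 0.0, 0.0])
y = X @ theta_true + 0.001 * np.random.default_rng(0).normal(size=lam.size)

theta = lasso_cd(X, y, alpha=1e-4)   # L1 penalty prunes redundant terms
```

The regularization weight `alpha` plays the role of the hyperparameter the abstract warns about: in practice it would be chosen by cross-validation rather than fixed, and a poor global choice can leave spurious terms in the discovered model.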
Related papers
- Latent Variable Sequence Identification for Cognitive Models with Neural Bayes Estimation [7.7227297059345466]
We present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space.
Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models.
arXiv Detail & Related papers (2024-06-20T21:13:39Z)
- A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models [13.283281356356161]
We review the literature on statistical theories of neural networks from three perspectives.
Results on excess risks for neural networks are reviewed.
Papers that attempt to answer "how the neural network finds the solution that can generalize well on unseen data" are reviewed.
arXiv Detail & Related papers (2024-01-14T02:30:19Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions [30.018173329118184]
We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions.
Two concrete models are derived under the framework, extending neural proportional hazard models and nonparametric hazard regression models.
We conduct experimental evaluations over 6 benchmark datasets of different scales, showing that the proposed NFM models outperform state-of-the-art survival models in predictive performance.
arXiv Detail & Related papers (2023-03-18T08:15:15Z)
- Toward Physically Plausible Data-Driven Models: A Novel Neural Network Approach to Symbolic Regression [2.7071541526963805]
This paper proposes a novel neural network-based symbolic regression method.
It constructs physically plausible models based on even very small training data sets and prior knowledge about the system.
We experimentally evaluate the approach on four test systems: the TurtleBot 2 mobile robot, the magnetic manipulation system, the equivalent resistance of two resistors in parallel, and the longitudinal force of the anti-lock braking system.
arXiv Detail & Related papers (2023-02-01T22:05:04Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Learning Queuing Networks by Recurrent Neural Networks [0.0]
We propose a machine-learning approach to derive performance models from data.
We exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations.
This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model.
arXiv Detail & Related papers (2020-02-25T10:56:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.