A robust solution of a statistical inverse problem in multiscale
computational mechanics using an artificial neural network
- URL: http://arxiv.org/abs/2011.11761v2
- Date: Thu, 11 Feb 2021 14:36:36 GMT
- Title: A robust solution of a statistical inverse problem in multiscale
computational mechanics using an artificial neural network
- Authors: Florent Pled (MSME), Christophe Desceliers (MSME), Tianyu Zhang (MSME)
- Abstract summary: This work addresses the inverse identification of apparent elastic properties of random heterogeneous materials using machine learning based on artificial neural networks.
The proposed neural network-based identification method requires the construction of a database from which an artificial neural network can be trained.
The performances of the trained artificial neural networks are analyzed in terms of mean squared error, linear regression fit and probability distribution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses the inverse identification of apparent elastic properties
of random heterogeneous materials using machine learning based on artificial
neural networks. The proposed neural network-based identification method
requires the construction of a database from which an artificial neural network
can be trained to learn the nonlinear relationship between the hyperparameters
of a prior stochastic model of the random compliance field and some relevant
quantities of interest of an ad hoc multiscale computational model. An initial
database made up with input and target data is first generated from the
computational model, from which a processed database is deduced by conditioning
the input data with respect to the target data using nonparametric
statistics. Two- and three-layer feedforward artificial neural networks are then
trained from each of the initial and processed databases to construct an
algebraic representation of the nonlinear mapping between the hyperparameters
(network outputs) and the quantities of interest (network inputs). The
performances of the trained artificial neural networks are analyzed in terms of
mean squared error, linear regression fit and probability distribution between
network outputs and targets for both databases. An ad hoc probabilistic model
of the input random vector is finally proposed in order to take into account
uncertainties on the network input and to perform a robustness analysis of the
network output with respect to the input uncertainties level. The capability of
the proposed neural network-based identification method to efficiently solve
the underlying statistical inverse problem is illustrated through two numerical
examples developed within the framework of 2D plane stress linear elasticity,
namely a first validation example on synthetic data obtained through
computational simulations and a second application example on real experimental
data obtained through a physical experiment monitored by digital image
correlation on a real heterogeneous biological material (beef cortical bone).
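The identification pipeline described above (generate a database from a forward model, train a feedforward network on the inverse map, then probe its robustness to input noise) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the forward model is an invented smooth nonlinear function standing in for the multiscale computational model, and the network sizes, learning rate, and noise levels are arbitrary choices.

```python
# Sketch of neural-network-based inverse identification (illustrative only).
# Database inputs are quantities of interest q; targets are hyperparameters h.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(h):
    """Hypothetical stand-in for the multiscale computational model:
    maps hyperparameters h (n, 2) to quantities of interest q (n, 3)."""
    h1, h2 = h[:, 0], h[:, 1]
    return np.stack([h1 + 0.3 * h2**2,
                     h2 + 0.3 * h1**2,
                     h1 * h2], axis=1)

# Initial database: hyperparameter samples and noisy quantities of interest.
H = rng.uniform(-1.0, 1.0, size=(2000, 2))
Q = forward_model(H) + 0.01 * rng.standard_normal((2000, 3))

# Two-layer feedforward network (tanh hidden layer), trained by plain
# full-batch gradient descent on the mean squared error, with the
# quantities of interest as network inputs and hyperparameters as outputs.
W1 = 0.5 * rng.standard_normal((3, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 2)); b2 = np.zeros(2)
lr = 0.05

for epoch in range(1000):
    Z = np.tanh(Q @ W1 + b1)      # hidden-layer activations
    P = Z @ W2 + b2               # predicted hyperparameters
    err = P - H
    mse = np.mean(err**2)
    # Backpropagation of the MSE gradient.
    gP = 2.0 * err / len(H)
    gW2 = Z.T @ gP; gb2 = gP.sum(0)
    gZ = (gP @ W2.T) * (1.0 - Z**2)
    gW1 = Q.T @ gZ; gb1 = gZ.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final training MSE: {mse:.4f}")

# Robustness check in the spirit of the abstract: perturb the network
# input and observe the induced spread of the network output.
Qn = Q + 0.05 * rng.standard_normal(Q.shape)
Pn = np.tanh(Qn @ W1 + b1) @ W2 + b2
print("mean squared output shift under input noise:", np.mean((Pn - P)**2))
```

In the paper the conditioning step that produces the processed database, the choice of prior stochastic model, and the probabilistic model of the input random vector are all more elaborate than this sketch suggests; the code only shows the overall inputs-to-hyperparameters learning structure.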
Related papers
- Fusing CFD and measurement data using transfer learning [49.1574468325115]
We introduce a non-linear method based on neural networks combining simulation and measurement data via transfer learning. In a first step, the neural network is trained on simulation data to learn spatial features of the distributed quantities. The second step involves transfer learning on the measurement data to correct for systematic errors between simulation and measurement by re-training only a small subset of the entire neural network model.
arXiv Detail & Related papers (2025-07-28T07:21:46Z) - Uncertainty propagation in feed-forward neural network models [3.987067170467799]
We develop new uncertainty propagation methods for feed-forward neural network architectures.
We derive analytical expressions for the probability density function (PDF) of the neural network output.
A key finding is that an appropriate linearization of the leaky ReLU activation function yields accurate statistical results.
arXiv Detail & Related papers (2025-03-27T00:16:36Z) - Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
arXiv Detail & Related papers (2024-09-24T19:47:03Z) - InVAErt networks for amortized inference and identifiability analysis of lumped parameter hemodynamic models [0.0]
In this study, we use inVAErt networks, a neural network-based, data-driven framework for enhanced digital twin analysis of stiff dynamical systems.
We demonstrate the flexibility and effectiveness of inVAErt networks in the context of physiological inversion of a six-compartment lumped parameter hemodynamic model from synthetic data to real data with missing components.
arXiv Detail & Related papers (2024-08-15T17:07:40Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Deep learning for full-field ultrasonic characterization [7.120879473925905]
This study takes advantage of recent advances in machine learning to establish a physics-based data analytic platform.
Two logics, namely the direct inversion and physics-informed neural networks (PINNs), are explored.
arXiv Detail & Related papers (2023-01-06T05:01:05Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - A deep learning driven pseudospectral PCE based FFT homogenization
algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being orders of magnitude faster to evaluate than traditional approaches.
arXiv Detail & Related papers (2021-10-26T07:02:14Z) - Sensitivity analysis in differentially private machine learning using
hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the setting of data processing.
arXiv Detail & Related papers (2021-07-09T07:19:23Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Mean-Field and Kinetic Descriptions of Neural Differential Equations [0.0]
In this work we focus on a particular class of neural networks, namely residual neural networks.
We analyze steady states and sensitivity with respect to the parameters of the network, namely the weights and the bias.
A modification of the microscopic dynamics, inspired by residual neural networks, leads to a Fokker-Planck formulation of the network.
arXiv Detail & Related papers (2020-01-07T13:41:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all of its content) and is not responsible for any consequences of its use.