Deterministic and statistical calibration of constitutive models from full-field data with parametric physics-informed neural networks
- URL: http://arxiv.org/abs/2405.18311v1
- Date: Tue, 28 May 2024 16:02:11 GMT
- Title: Deterministic and statistical calibration of constitutive models from full-field data with parametric physics-informed neural networks
- Authors: David Anton, Jendrik-Alexander Tröger, Henning Wessels, Ulrich Römer, Alexander Henkes, Stefan Hartmann
- Abstract summary: Parametric physics-informed neural networks (PINNs) are investigated for constitutive model calibration from full-field displacement data.
Due to the fast evaluation of PINNs, calibration can be performed in near real-time.
- Score: 36.136619420474766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The calibration of constitutive models from full-field data has recently gained increasing interest due to improvements in full-field measurement capabilities. In addition to the experimental characterization of novel materials, continuous structural health monitoring is another application that is of great interest. However, monitoring is usually associated with severe time constraints, difficult to meet with standard numerical approaches. Therefore, parametric physics-informed neural networks (PINNs) for constitutive model calibration from full-field displacement data are investigated. In an offline stage, a parametric PINN can be trained to learn a parameterized solution of the underlying partial differential equation. In the subsequent online stage, the parametric PINN then acts as a surrogate for the parameters-to-state map in calibration. We test the proposed approach for the deterministic least-squares calibration of a linear elastic as well as a hyperelastic constitutive model from noisy synthetic displacement data. We further carry out Markov chain Monte Carlo-based Bayesian inference to quantify the uncertainty. A proper statistical evaluation of the results underlines the high accuracy of the deterministic calibration and that the estimated uncertainty is valid. Finally, we consider experimental data and show that the results are in good agreement with a Finite Element Method-based calibration. Due to the fast evaluation of PINNs, calibration can be performed in near real-time. This advantage is particularly evident in many-query applications such as Markov chain Monte Carlo-based Bayesian inference.
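The following Python sketch illustrates the online stage described in the abstract: a pre-trained surrogate for the parameters-to-state map is queried inside a deterministic least-squares fit and a Markov chain Monte Carlo loop. The `pinn_surrogate` function, the sensor layout, the noise level, and the two-parameter linear-elastic setup are hypothetical stand-ins for the trained parametric PINN and the data used in the paper; the code only sketches the general workflow, not the authors' implementation.

```python
# Minimal sketch: surrogate-based deterministic and statistical calibration.
# `pinn_surrogate` is a placeholder for a parametric PINN trained offline on
# the parameterized PDE solution (assumption, not the paper's network).
import numpy as np
from scipy.optimize import least_squares

def pinn_surrogate(coords, params):
    """Stand-in for the trained parametric PINN u(x; theta).
    coords: (N, 2) sensor coordinates; params: (E, nu) material parameters.
    Returns predicted displacements of shape (N, 2)."""
    E, nu = params
    # Placeholder response, only to make the sketch runnable.
    return np.stack([coords[:, 0] / E * (1.0 - nu),
                     coords[:, 1] / E * (1.0 + nu)], axis=1)

# Synthetic "measured" full-field displacement data with additive noise.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1.0, size=(128, 2))
theta_true = np.array([210e3, 0.3])            # assumed ground truth
u_meas = pinn_surrogate(coords, theta_true) + 1e-6 * rng.standard_normal((128, 2))

# --- Deterministic calibration: nonlinear least squares --------------------
def residuals(theta):
    return (pinn_surrogate(coords, theta) - u_meas).ravel()

fit = least_squares(residuals, x0=np.array([100e3, 0.25]))
print("least-squares estimate:", fit.x)

# --- Statistical calibration: random-walk Metropolis -----------------------
# Each likelihood evaluation needs only a cheap surrogate call, which is why
# the many-query MCMC loop can run in near real-time.
def log_likelihood(theta, sigma=1e-6):
    r = residuals(theta)
    return -0.5 * np.sum((r / sigma) ** 2)

theta = fit.x.copy()
samples, step = [], np.array([1e3, 1e-3])
for _ in range(5000):
    proposal = theta + step * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(theta):
        theta = proposal
    samples.append(theta)
print("posterior mean:", np.mean(samples, axis=0))
```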
Related papers
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining post hoc calibration methods with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z) - Measuring Stochastic Data Complexity with Boltzmann Influence Functions [12.501336941823627]
Estimating uncertainty of a model's prediction on a test point is a crucial part of ensuring reliability and calibration under distribution shifts.
We propose IF-COMP, a scalable and efficient approximation of the pNML distribution that linearizes the model with a temperature-scaled Boltzmann influence function.
We experimentally validate IF-COMP on uncertainty calibration, mislabel detection, and OOD detection tasks, where it consistently matches or beats strong baseline methods.
arXiv Detail & Related papers (2024-06-04T20:01:39Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z) - Physics-Informed Neural Networks for Material Model Calibration from Full-Field Displacement Data [0.0]
We propose PINNs for the calibration of models from full-field displacement and global force data in a realistic regime.
We demonstrate that the enhanced PINNs are capable of identifying material parameters from both experimental one-dimensional data and synthetic full-field displacement data.
arXiv Detail & Related papers (2022-12-15T11:01:32Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Theoretical characterization of uncertainty in high-dimensional linear classification [24.073221004661427]
We show that uncertainty for learning from a limited number of samples of high-dimensional input data and labels can be obtained by the approximate message passing algorithm.
We discuss how over-confidence can be mitigated by appropriately regularising, and show that cross-validating with respect to the loss leads to better calibration than with the 0/1 error.
arXiv Detail & Related papers (2022-02-07T15:32:07Z) - Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parametrized Temperature Scaling (PTS); a brief sketch of the plain temperature-scaling baseline that PTS builds on is given after this list.
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
arXiv Detail & Related papers (2021-02-24T10:18:30Z) - Computer Model Calibration with Time Series Data using Deep Learning and Quantile Regression [1.6758573326215689]
The existing standard calibration framework suffers from inferential issues when the model output and observational data are high-dimensional dependent data.
We propose a new calibration framework based on a deep neural network (DNN) with long short-term memory layers that directly emulates the inverse relationship between the model output and input parameters.
arXiv Detail & Related papers (2020-08-29T22:18:41Z) - Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
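As referenced in the Parametrized Temperature Scaling entry above, the following is a minimal sketch of plain post-hoc temperature scaling, the accuracy-preserving baseline that PTS generalizes. The synthetic logits, labels, and the SciPy-based fitting routine are illustrative assumptions, not code or data from the cited paper.

```python
# Minimal sketch of post-hoc temperature scaling on held-out validation data.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # Negative log-likelihood of the temperature-scaled probabilities.
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

rng = np.random.default_rng(0)
logits = 5.0 * rng.normal(size=(1000, 10))   # synthetic, over-confident logits
labels = rng.integers(0, 10, size=1000)

# A single scalar temperature is fitted on held-out data; it rescales logits
# without changing the arg-max, so classification accuracy is preserved.
res = minimize_scalar(nll, bounds=(0.05, 20.0), args=(logits, labels), method="bounded")
print("fitted temperature:", res.x)
```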