Inverse Estimation of Elastic Modulus Using Physics-Informed Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2006.05791v1
- Date: Wed, 20 May 2020 20:14:10 GMT
- Title: Inverse Estimation of Elastic Modulus Using Physics-Informed Generative
Adversarial Networks
- Authors: James E. Warner, Julian Cuevas, Geoffrey F. Bomarito, Patrick E.
Leser, William P. Leser
- Abstract summary: Physics-informed generative adversarial networks (PI-GANs) encode physical laws in the form of partial differential equations (PDEs).
In this work, PI-GANs are demonstrated for the application of elastic modulus estimation in mechanical testing.
Two feed-forward deep neural network generators are used to model the deformation and material stiffness across a two-dimensional domain.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While standard generative adversarial networks (GANs) rely solely on training
data to learn unknown probability distributions, physics-informed GANs
(PI-GANs) encode physical laws in the form of stochastic partial differential
equations (PDEs) using automatic differentiation. By relating observed data to
unobserved quantities of interest through PDEs, PI-GANs allow for the
estimation of underlying probability distributions without their direct
measurement (i.e. inverse problems). The scalable nature of GANs allows
high-dimensional, spatially-dependent probability distributions (i.e., random
fields) to be inferred, while incorporating prior information through PDEs
allows the training datasets to be relatively small.
In this work, PI-GANs are demonstrated for the application of elastic modulus
estimation in mechanical testing. Given measured deformation data, the
underlying probability distribution of spatially-varying elastic modulus
(stiffness) is learned. Two feed-forward deep neural network generators are
used to model the deformation and material stiffness across a two-dimensional
domain. Wasserstein GANs with gradient penalty are employed for enhanced
stability. In the absence of explicit training data, it is demonstrated that
the PI-GAN learns to generate realistic, physically-admissible realizations of
material stiffness by incorporating the PDE that relates it to the measured
deformation. It is shown that the statistics (mean, standard deviation,
point-wise distributions, correlation length) of these generated stiffness
samples have good agreement with the true distribution.
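The abstract's core recipe — two generators sharing a noise input, a PDE residual that couples their outputs, and a Wasserstein gradient penalty — can be sketched compactly. The NumPy sketch below is illustrative only: it substitutes a 1D equilibrium equation and central finite differences for the paper's 2D domain and automatic differentiation, and every name, network size, and constant is an assumption rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, inp):
    # Tiny feed-forward generator: one tanh hidden layer, linear output.
    W1, b1, W2, b2 = params
    return np.tanh(inp @ W1 + b1) @ W2 + b2

def init_params(in_dim, hidden=16):
    return (rng.normal(0.0, 0.5, (in_dim, hidden)), np.zeros(hidden),
            rng.normal(0.0, 0.5, (hidden, 1)), np.zeros(1))

Z_DIM = 4
p_u = init_params(Z_DIM + 1)  # deformation generator u(x; z)
p_E = init_params(Z_DIM + 1)  # log-stiffness generator (exponentiated below)

def fields(z, x):
    # Concatenate shared noise z with coordinates x, evaluate both generators.
    inp = np.concatenate([np.broadcast_to(z, (x.shape[0], Z_DIM)), x], axis=1)
    u = mlp(p_u, inp).ravel()
    E = np.exp(mlp(p_E, inp)).ravel()  # exp keeps stiffness strictly positive
    return u, E

def pde_residual(z, x, h=1e-3):
    # Residual of the 1D equilibrium equation d/dx(E du/dx) = 0;
    # central finite differences stand in for automatic differentiation.
    def flux(xv):
        u_plus, _ = fields(z, xv + h)
        u_minus, _ = fields(z, xv - h)
        _, E0 = fields(z, xv)
        return E0 * (u_plus - u_minus) / (2.0 * h)
    return (flux(x + h) - flux(x - h)) / (2.0 * h)

z = rng.normal(size=Z_DIM)
x = np.linspace(0.1, 0.9, 16)[:, None]
res = pde_residual(z, x)
physics_loss = np.mean(res ** 2)  # penalizes physically inadmissible samples

# WGAN-GP ingredient (sketch): gradient penalty (||grad D|| - 1)^2 at a
# random interpolate between "measured" and generated deformation; the
# gradient is exact here because the illustrative critic u -> u @ w is linear.
w = rng.normal(size=16)
u_gen, _ = fields(z, x)
u_real = u_gen + 0.05 * rng.normal(size=16)  # stand-in "measured" data
eps = rng.uniform()
u_hat = eps * u_real + (1.0 - eps) * u_gen   # interpolate fed to the critic
gp = (np.linalg.norm(w) - 1.0) ** 2          # grad of u @ w w.r.t. u is w
```

In a training loop, `physics_loss` and the penalized critic loss would be minimized jointly over the generator parameters, which is how the stiffness generator learns admissible fields without ever seeing stiffness data directly.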
Related papers
- Identifying Drift, Diffusion, and Causal Structure from Temporal Snapshots [10.018568337210876]
We present the first comprehensive approach for jointly estimating the drift and diffusion of an SDE from its temporal marginals.
We show that each of these steps is optimal with respect to the Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-10-30T06:28:21Z) - Adaptive Learning of the Latent Space of Wasserstein Generative Adversarial Networks [7.958528596692594]
We propose a novel framework called the latent Wasserstein GAN (LWGAN)
It fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned.
We show that LWGAN is able to identify the correct intrinsic dimension under several scenarios.
arXiv Detail & Related papers (2024-09-27T01:25:22Z) - Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z) - Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian
Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z) - Synthetic location trajectory generation using categorical diffusion
models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z) - Score Approximation, Estimation and Distribution Recovery of Diffusion
Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z) - Flexible Amortized Variational Inference in qBOLD MRI [56.4324135502282]
Oxygen extraction fraction (OEF) and deoxygenated blood volume (DBV) are more ambiguously determined from the data.
Existing inference methods tend to yield very noisy and underestimated OEF maps, while overestimating DBV.
This work describes a novel probabilistic machine learning approach that can infer plausible distributions of OEF and DBV.
arXiv Detail & Related papers (2022-03-11T10:47:16Z) - Bayesian Deep Learning for Partial Differential Equation Parameter
Discovery with Sparse and Noisy Data [0.0]
We propose to use Bayesian neural networks (BNN) in order to recover the full system states from measurement data.
We show that it is possible to accurately capture physics of varying complexity without overfitting.
We demonstrate our approach on a handful of examples applied to physics and non-linear dynamics.
arXiv Detail & Related papers (2021-08-05T19:43:15Z) - AI Giving Back to Statistics? Discovery of the Coordinate System of
Univariate Distributions by Beta Variational Autoencoder [0.0]
The article discusses experiences of training neural networks to classify univariate empirical distributions and to represent them on a two-dimensional latent space, forcing disentanglement based on inputs of cumulative distribution functions (CDFs).
The representation on the latent two-dimensional coordinate system can be seen as an additional metadata of the real-world data that disentangles important distribution characteristics, such as shape of the CDF, classification probabilities of underlying theoretical distributions and their parameters, information entropy, and skewness.
arXiv Detail & Related papers (2020-04-06T14:11:13Z) - GANs with Conditional Independence Graphs: On Subadditivity of
Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z) - Distribution Approximation and Statistical Estimation Guarantees of
Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved a great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.