Deep learning for full-field ultrasonic characterization
- URL: http://arxiv.org/abs/2301.02378v1
- Date: Fri, 6 Jan 2023 05:01:05 GMT
- Title: Deep learning for full-field ultrasonic characterization
- Authors: Yang Xu, Fatemeh Pourahmadian, Jian Song, Conglin Wang
- Abstract summary: This study takes advantage of recent advances in machine learning to establish a physics-based data analytic platform.
Two logics, namely the direct inversion and physics-informed neural networks (PINNs), are explored.
- Score: 7.120879473925905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study takes advantage of recent advances in machine learning to
establish a physics-based data analytic platform for distributed reconstruction
of mechanical properties in layered components from full waveform data. In this
vein, two logics, namely the direct inversion and physics-informed neural
networks (PINNs), are explored. The direct inversion entails three steps: (i)
spectral denoising and differentiation of the full-field data, (ii) building
appropriate neural maps to approximate the profile of unknown physical and
regularization parameters on their respective domains, and (iii) simultaneous
training of the neural networks by minimizing the Tikhonov-regularized PDE loss
using data from (i). PINNs furnish efficient surrogate models of complex
systems with predictive capabilities via multitask learning where the field
variables are modeled by neural maps endowed with (scalar or distributed)
auxiliary parameters such as physical unknowns and loss function weights. PINNs
are then trained by minimizing a measure of data misfit subject to the
underlying physical laws as constraints. In this study, to facilitate learning
from ultrasonic data, the PINNs loss adopts (a) wavenumber-dependent Sobolev
norms to compute the data misfit, and (b) non-adaptive weights in a specific
scaling framework to naturally balance the loss objectives by leveraging the
form of PDEs germane to elastic-wave propagation. Both paradigms are examined
via synthetic and laboratory test data. In the latter case, the reconstructions
are performed at multiple frequencies and the results are verified by a set of
complementary experiments highlighting the importance of verification and
validation in data-driven modeling.
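Step (i) of the direct inversion, spectral denoising and differentiation of the full-field data, can be sketched in one dimension as follows. This is a minimal hypothetical illustration of the general technique (Fourier low-pass filtering followed by spectral differentiation), not the authors' implementation; the cutoff choice and grid are assumptions.

```python
import numpy as np

def spectral_denoise_and_differentiate(u, dx, k_cutoff):
    """Low-pass filter a sampled periodic field u(x) in the Fourier
    domain and compute its spatial derivative spectrally.
    A minimal 1D sketch of step (i) of the direct inversion."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # angular wavenumbers
    U = np.fft.fft(u)
    U[np.abs(k) > k_cutoff] = 0.0              # hard low-pass denoising
    u_smooth = np.real(np.fft.ifft(U))
    du_dx = np.real(np.fft.ifft(1j * k * U))   # spectral differentiation
    return u_smooth, du_dx
```

For band-limited ultrasonic fields sampled on a regular grid, this removes high-wavenumber noise before the differentiated data enter the PDE loss of step (iii).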
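Item (a) of the PINN loss, a wavenumber-dependent Sobolev norm for the data misfit, can be sketched as a weighted error energy in the Fourier domain. The H^s-type weight below is a hypothetical 1D stand-in; the paper's actual norm and weighting scheme may differ.

```python
import numpy as np

def sobolev_misfit(u_pred, u_data, dx, s=1.0):
    """Wavenumber-weighted (Sobolev H^s-type) data misfit:
        sum_k (1 + |k|^2)^s |FFT(u_pred - u_data)(k)|^2 / n^2.
    With s = 0 this reduces to the ordinary mean-squared error;
    s > 0 up-weights high-wavenumber discrepancies."""
    n = len(u_pred)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    E = np.fft.fft(np.asarray(u_pred) - np.asarray(u_data))
    w = (1.0 + k**2) ** s
    # Parseval-normalized weighted error energy
    return np.sum(w * np.abs(E)**2) / n**2
```

Such a misfit emphasizes the oscillatory content of ultrasonic fields that a plain L2 norm would under-weight.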
Related papers
- InVAErt networks for amortized inference and identifiability analysis of lumped parameter hemodynamic models [0.0]
In this study, we use inVAErt networks, a neural network-based, data-driven framework for enhanced digital twin analysis of stiff dynamical systems.
We demonstrate the flexibility and effectiveness of inVAErt networks in the context of physiological inversion of a six-compartment lumped parameter hemodynamic model from synthetic data to real data with missing components.
arXiv Detail & Related papers (2024-08-15T17:07:40Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
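The idea of recovering unknown parameters by gradient descent through a differentiable model can be sketched on a toy problem. The model, data, and hand-written gradient below are all hypothetical; an autodiff framework, as used in the paper, would compute the gradient automatically for an arbitrary surrogate network.

```python
import numpy as np

def fit_parameter(x, y_obs, lr=0.1, steps=500):
    """Recover the frequency w of y = sin(w * x) from observations by
    gradient descent on the mean-squared misfit. The gradient is written
    out by hand here; automatic differentiation would derive it for any
    differentiable surrogate model."""
    w = 0.5  # initial guess
    for _ in range(steps):
        r = np.sin(w * x) - y_obs                      # residual
        grad = 2.0 * np.mean(r * x * np.cos(w * x))    # d(misfit)/dw
        w -= lr * grad
    return w
```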
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Physics-Informed Neural Networks for Material Model Calibration from Full-Field Displacement Data [0.0]
We propose PINNs for the calibration of models from full-field displacement and global force data in a realistic regime.
We demonstrate that the enhanced PINNs are capable of identifying material parameters from both experimental one-dimensional data and synthetic full-field displacement data.
arXiv Detail & Related papers (2022-12-15T11:01:32Z)
- Physics-informed neural networks for gravity currents reconstruction from limited data [0.0]
The present work investigates the use of physics-informed neural networks (PINNs) for the 3D reconstruction of unsteady gravity currents from limited data.
In the PINN context, the flow fields are reconstructed by training a neural network whose objective function penalizes the mismatch between the network predictions and the observed data.
arXiv Detail & Related papers (2022-11-03T11:27:29Z)
- NeuralSI: Structural Parameter Identification in Nonlinear Dynamical Systems [9.77270939559057]
This paper explores a new framework, dubbed NeuralSI, for structural identification.
Our approach seeks to estimate nonlinear parameters from governing equations.
The trained model can also be extrapolated under both standard and extreme conditions.
arXiv Detail & Related papers (2022-08-26T16:32:51Z)
- Pre-training via Denoising for Molecular Property Prediction [53.409242538744444]
We describe a pre-training technique that utilizes large datasets of 3D molecular structures at equilibrium.
Inspired by recent advances in noise regularization, our pre-training objective is based on denoising.
arXiv Detail & Related papers (2022-05-31T22:28:34Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem: estimating the parameters of a FitzHugh-Nagumo model.
We demonstrate that deep neural networks can estimate parameters in dynamical models and processes, and that they predict the parameters accurately within this framework.
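The data-generation side of this kind of parameter estimation can be sketched as follows: simulate the FitzHugh-Nagumo ODE for sampled parameters and pair each trajectory (network input) with its parameters (regression target). The integrator, parameter ranges, and dataset layout are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def simulate_fhn(a, b, tau, I, T=100.0, dt=0.01, v0=0.0, w0=0.0):
    """Forward-Euler simulation of the FitzHugh-Nagumo ODE:
        v' = v - v^3/3 - w + I,    w' = (v + a - b*w) / tau.
    A hypothetical data generator for the inverse problem."""
    n = int(round(T / dt))
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = v0, w0
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * (v[i] - v[i]**3 / 3.0 - w[i] + I)
        w[i + 1] = w[i] + dt * (v[i] + a - b * w[i]) / tau
    return v, w

def make_dataset(n_samples, rng):
    """Pair sampled parameter vectors (a, b, tau, I) with their
    simulated voltage traces; a network would regress the former
    from the latter."""
    X, y = [], []
    for _ in range(n_samples):
        theta = rng.uniform([0.6, 0.7, 10.0, 0.3], [0.8, 0.9, 15.0, 0.7])
        v, _ = simulate_fhn(*theta, T=50.0, dt=0.05)
        X.append(v)       # input: sampled voltage trace
        y.append(theta)   # target: ODE parameters
    return np.array(X), np.array(y)
```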
arXiv Detail & Related papers (2020-12-12T01:20:42Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
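The min-max reformulation of a linear operator equation can be illustrated on a toy least-squares problem, with plain vectors standing in for the two neural-network players. This is a hypothetical scalar analogue of the general technique, not the paper's estimator.

```python
import numpy as np

def adversarial_least_squares(A, b, eta=0.05, steps=4000, rng=None):
    """Recast solving A @ theta = b as the min-max game
        min_theta max_phi  phi @ (A @ theta - b) - 0.5 * ||phi||^2
    (whose inner maximum equals 0.5 * ||A @ theta - b||^2), and run
    simultaneous gradient descent-ascent on the two players."""
    if rng is None:
        rng = np.random.default_rng(0)
    theta = rng.standard_normal(A.shape[1])
    phi = np.zeros(A.shape[0])
    for _ in range(steps):
        r = A @ theta - b
        theta = theta - eta * (A.T @ phi)   # descent player
        phi = phi + eta * (r - phi)         # ascent player
    return theta
```

Because the game is strongly concave in the ascent player, plain simultaneous gradient descent-ascent converges here; richer SEM settings need the paper's analysis.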
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.