NP-ODE: Neural Process Aided Ordinary Differential Equations for
Uncertainty Quantification of Finite Element Analysis
- URL: http://arxiv.org/abs/2012.06914v1
- Date: Sat, 12 Dec 2020 22:38:16 GMT
- Title: NP-ODE: Neural Process Aided Ordinary Differential Equations for
Uncertainty Quantification of Finite Element Analysis
- Authors: Yinan Wang, Kaiwen Wang, Wenjun Cai, Xiaowei Yue
- Abstract summary: A physics-informed data-driven surrogate model, named Neural Process Aided Ordinary Differential Equation (NP-ODE), is proposed to model the FEA simulations.
The results show that the proposed NP-ODE outperforms benchmark methods.
- Score: 2.9210447295585724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finite element analysis (FEA) has been widely used to generate simulations of
complex and nonlinear systems. Despite its strength and accuracy, the
limitations of FEA can be summarized into two aspects: a) running high-fidelity
FEA often requires significant computational cost and consumes a large amount
of time; b) FEA is a deterministic method that is insufficient for uncertainty
quantification (UQ) when modeling complex systems with various types of
uncertainties. In this paper, a physics-informed data-driven surrogate model,
named Neural Process Aided Ordinary Differential Equation (NP-ODE), is proposed
to model the FEA simulations and capture both input and output uncertainties.
To validate the advantages of the proposed NP-ODE, we conduct experiments on
both the simulation data generated from a given ordinary differential equation
and the data collected from a real FEA platform for tribocorrosion. The
performances of the proposed NP-ODE and several benchmark methods are compared.
The results show that the proposed NP-ODE outperforms the benchmark methods:
it achieves the smallest predictive error and generates the most reasonable
confidence intervals, with the best coverage on the testing data points.
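As a toy illustration of what such a UQ-capable surrogate does (predict an expensive simulation cheaply and report a confidence interval whose coverage can be checked on held-out points), here is a minimal sketch using plain Gaussian-process regression as a stand-in; the kernel, noise level, and the 1-D "simulation" are illustrative assumptions, not the paper's NP-ODE architecture.

```python
import numpy as np

# Toy stand-in for an expensive FEA run (assumed, for illustration only).
def expensive_simulation(x):
    return np.sin(3.0 * x) + 0.3 * x ** 2

def rbf_kernel(a, b, length_scale=0.4, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=12)                # a few "expensive" runs
y_train = expensive_simulation(x_train) + 0.05 * rng.standard_normal(12)
x_test = np.linspace(-2, 2, 200)

noise = 0.05 ** 2
K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf_kernel(x_test, x_train)

mean = K_s @ np.linalg.solve(K, y_train)             # predictive mean
cov = rbf_kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% interval
y_true = expensive_simulation(x_test)
coverage = np.mean((y_true >= lower) & (y_true <= upper))
print(f"empirical coverage of the 95% interval: {coverage:.2f}")
```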
Related papers
- Probabilistic Numeric SMC Sampling for Bayesian Nonlinear System Identification in Continuous Time [0.0]
In engineering, accurately modeling nonlinear dynamic systems from data contaminated by noise is both essential and complex.
The integration of continuous-time ordinary differential equations (ODEs) is crucial for aligning theoretical models with discretely sampled data.
This paper demonstrates the application of a probabilistic numerical method for solving ODEs in the joint parameter-state identification of nonlinear dynamic systems.
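As a hedged, generic sketch of joint parameter-state identification from noisy, discretely sampled data (not the paper's probabilistic-numerics sampler), the snippet below runs a plain bootstrap particle filter on an Euler-discretized logistic-growth ODE, with the unknown rate appended to the state; all model and noise values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, obs_noise = 0.05, 120, 0.05
r_true, x0 = 1.5, 0.1

# Simulate the "true" logistic-growth ODE x' = r x (1 - x) with Euler steps,
# then add measurement noise to get discretely sampled observations.
x = np.empty(n_steps)
x[0] = x0
for k in range(n_steps - 1):
    x[k + 1] = x[k] + dt * r_true * x[k] * (1.0 - x[k])
y = x + obs_noise * rng.standard_normal(n_steps)

# Bootstrap particle filter on the augmented state (x, r):
# the unknown rate r is estimated jointly with the state.
n_p = 2000
px = x0 + 0.02 * rng.standard_normal(n_p)
pr = rng.uniform(0.5, 3.0, n_p)                     # prior over the unknown rate

for k in range(n_steps):
    # Reweight by the Gaussian observation likelihood and resample.
    logw = -0.5 * ((y[k] - px) / obs_noise) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)
    px, pr = px[idx], pr[idx]
    # Propagate with an Euler step plus small jitter (artificial dynamics).
    px = px + dt * pr * px * (1.0 - px) + 0.01 * rng.standard_normal(n_p)
    pr = pr + 0.01 * rng.standard_normal(n_p)

print(f"posterior mean of r: {pr.mean():.2f} +/- {pr.std():.2f} (true {r_true})")
```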
arXiv Detail & Related papers (2024-04-19T14:52:14Z)
- Uncertainty quantification for deep learning-based schemes for solving high-dimensional backward stochastic differential equations [5.883258964010963]
We study uncertainty quantification (UQ) for a class of deep learning-based BSDE schemes.
We develop a UQ model that efficiently estimates the standard deviation (STD) of the approximate solution using only a single run of the algorithm.
Our numerical experiments show that the UQ model produces reliable estimates of the mean and STD of the approximate solution.
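The snippet below is only a generic illustration of single-run uncertainty estimation for a Monte Carlo approximation, using batch means; it is not the authors' UQ model for deep BSDE schemes, and the toy expectation being estimated is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# One "run" of a Monte Carlo approximation: here, estimating
# E[g(X)] with g(x) = exp(-x**2) and X standard normal.
samples = np.exp(-rng.standard_normal(200_000) ** 2)
estimate = samples.mean()

# Batch-means estimate of the estimator's standard deviation,
# computed from the same single run.
n_batches = 50
batch_means = samples.reshape(n_batches, -1).mean(axis=1)
std_of_estimate = batch_means.std(ddof=1) / np.sqrt(n_batches)

print(f"estimate = {estimate:.5f} +/- {1.96 * std_of_estimate:.5f} (95% CI)")
```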
arXiv Detail & Related papers (2023-10-05T09:00:48Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
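For reference, the logistic-loss NCE baseline mentioned above can be written out directly; the sketch below fits a 1-D unnormalized Gaussian whose log-partition term c is a free parameter, with hand-coded gradients. The model, noise distribution, and step sizes are illustrative, and this is the NCE baseline rather than the compositional-optimization method of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Real data and artificial noise; the noise density is known in closed form.
data = 1.0 + 0.5 * rng.standard_normal(5000)
noise = 2.0 * rng.standard_normal(5000)

def log_q(x):                                       # noise density: N(0, 2^2)
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

# Unnormalized model: log p_tilde(x) = -(x - mu)^2 / (2 sigma^2) + c,
# where c absorbs the (unknown) log-partition function.
mu, log_sig, c = 0.0, 0.0, 0.0
lr = 0.05

for step in range(3000):
    sig = np.exp(log_sig)
    logit_d = -(data - mu) ** 2 / (2 * sig ** 2) + c - log_q(data)
    logit_n = -(noise - mu) ** 2 / (2 * sig ** 2) + c - log_q(noise)
    # Gradients of the logistic loss w.r.t. the classifier logit.
    g_d = -sigmoid(-logit_d)                        # real samples, label 1
    g_n = sigmoid(logit_n)                          # noise samples, label 0
    # Chain rule into the three model parameters.
    d_mu = np.mean(g_d * (data - mu) / sig ** 2) + np.mean(g_n * (noise - mu) / sig ** 2)
    d_ls = np.mean(g_d * (data - mu) ** 2 / sig ** 2) + np.mean(g_n * (noise - mu) ** 2 / sig ** 2)
    d_c = np.mean(g_d) + np.mean(g_n)
    mu, log_sig, c = mu - lr * d_mu, log_sig - lr * d_ls, c - lr * d_c

print(f"recovered mu = {mu:.2f}, sigma = {np.exp(log_sig):.2f} (true 1.00, 0.50)")
```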
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
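The probabilistic representation referred to above can be illustrated on the simplest case: for the heat equation u_t = D u_xx on the real line, the solution at (x, t) is the expectation of the initial condition evaluated at randomly diffused particles. The sketch below computes that Monte Carlo estimate directly (it does not train the neural solver the paper proposes); D, t, and the initial condition are made-up values.

```python
import numpy as np

rng = np.random.default_rng(4)

D, t = 0.1, 0.5                                    # diffusivity and final time

def u0(x):                                         # initial condition (assumed)
    return np.sin(np.pi * x)

def u_monte_carlo(x, n_particles=100_000):
    # Heat equation u_t = D u_xx: u(x, t) = E[ u0(x + sqrt(2 D t) Z) ], Z ~ N(0, 1).
    z = rng.standard_normal(n_particles)
    return np.mean(u0(x + np.sqrt(2.0 * D * t) * z))

# Exact whole-line solution for this initial condition:
# u(x, t) = exp(-D * pi^2 * t) * sin(pi * x).
x_query = 0.3
print("Monte Carlo:", u_monte_carlo(x_query))
print("exact      :", np.exp(-D * np.pi ** 2 * t) * np.sin(np.pi * x_query))
```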
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Deep Learning Aided Laplace Based Bayesian Inference for Epidemiological Systems [2.596903831934905]
We propose a hybrid approach where Laplace-based Bayesian inference is combined with an ANN architecture for obtaining approximations to the ODE trajectories.
The effectiveness of our proposed methods is demonstrated using an epidemiological system with non-analytical solutions, the Susceptible-Infectious-Removed (SIR) model for infectious diseases.
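A minimal sketch of the Laplace step alone, with direct Euler integration of the SIR ODE in place of the paper's ANN surrogate: a 1-D Gaussian posterior approximation over the contact rate beta (recovery rate gamma held fixed), centred at the MAP with variance given by the inverse curvature of the negative log posterior. All numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
gamma, beta_true, n_days, sigma = 0.1, 0.3, 60, 0.01

def sir_infected(beta, dt=0.1):
    # Euler integration of the SIR ODE; returns the infected fraction each day.
    s, i = 0.99, 0.01
    daily = []
    for step in range(int(n_days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i = s + dt * ds, i + dt * di
        if (step + 1) % int(1 / dt) == 0:
            daily.append(i)
    return np.array(daily)

obs = sir_infected(beta_true) + sigma * rng.standard_normal(n_days)

def neg_log_post(beta):
    # Gaussian likelihood with a flat prior on beta.
    resid = obs - sir_infected(beta)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2

beta_map = minimize_scalar(neg_log_post, bounds=(0.05, 1.0), method="bounded").x

# Laplace approximation: Gaussian centred at the MAP, variance = 1 / curvature.
h = 1e-3
curv = (neg_log_post(beta_map + h) - 2 * neg_log_post(beta_map)
        + neg_log_post(beta_map - h)) / h ** 2
beta_sd = 1.0 / np.sqrt(curv)
print(f"posterior approx: beta ~ N({beta_map:.3f}, {beta_sd:.4f}^2), true beta = {beta_true}")
```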
arXiv Detail & Related papers (2022-10-17T09:02:41Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
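A worked example of the setting analyzed above, under the assumption that the sampled trajectory excites all directions of the state space: estimate the one-step transition matrix by least squares and take its (scaled) matrix logarithm. The specific A and sampling interval are illustrative; the paper's contribution is the identifiability condition itself, not this estimator.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(6)

# Homogeneous linear ODE x'(t) = A x(t), sampled every dt seconds without error.
A_true = np.array([[0.0, 1.0],
                   [-2.0, -0.5]])
dt, n_obs = 0.1, 40
x0 = rng.standard_normal(2)

Phi = expm(A_true * dt)                      # exact one-step transition matrix
traj = [x0]
for _ in range(n_obs - 1):
    traj.append(Phi @ traj[-1])
X = np.array(traj).T                         # shape (2, n_obs), single trajectory

# Least-squares estimate of the transition matrix, then a matrix logarithm.
X_prev, X_next = X[:, :-1], X[:, 1:]
Phi_hat = X_next @ np.linalg.pinv(X_prev)
A_hat = np.real(logm(Phi_hat)) / dt
print(np.round(A_hat, 3))                    # close to A_true when identifiable
```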
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach estimates pattern latency more accurately than the state-of-the-art.
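For context, the basic object such methods work with is a Hawkes intensity evaluated on a discretized time grid with a parametric kernel of finite support; the sketch below computes it for a truncated exponential kernel with made-up parameters, and is not FaDIn's estimator.

```python
import numpy as np

mu, alpha, decay, support = 0.2, 0.8, 3.0, 1.0      # illustrative parameters
events = np.array([0.5, 0.8, 2.1, 2.3, 2.35, 4.0])  # event times (made up)

def kernel(lag):
    # Parametric kernel with finite support: truncated exponential on (0, support].
    inside = (lag > 0) & (lag <= support)
    return np.where(inside, decay * np.exp(-decay * lag), 0.0)

dt = 0.01
grid = np.arange(0.0, 5.0, dt)

# Discretized intensity: lambda(t) = mu + alpha * sum_i kernel(t - t_i).
lags = grid[:, None] - events[None, :]
intensity = mu + alpha * kernel(lags).sum(axis=1)

print("intensity just after the burst at t = 2.4:",
      round(float(intensity[int(round(2.4 / dt))]), 3))
```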
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
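MF-HNP itself is a neural latent variable model; the sketch below only conveys the underlying multi-fidelity idea with a much simpler stand-in: fit a cheap low-fidelity surrogate on many runs, then correct it with a scale-plus-discrepancy model fitted on a handful of high-fidelity runs. The simulators and model forms are assumptions for illustration.

```python
import numpy as np

def high_fidelity(x):                    # expensive simulator (few runs affordable)
    return np.sin(2.0 * np.pi * x) + 0.2 * x

def low_fidelity(x):                     # cheap, biased approximation of it
    return 0.8 * np.sin(2.0 * np.pi * x) + 0.5

x_lo = np.linspace(0, 1, 50)             # many cheap runs
x_hi = np.array([0.1, 0.4, 0.65, 0.9])   # only a handful of expensive runs

# Step 1: fit a surrogate of the low-fidelity code (polynomial stand-in).
f_lo = np.poly1d(np.polyfit(x_lo, low_fidelity(x_lo), deg=7))

# Step 2: model the high-fidelity data as rho * f_lo(x) + linear discrepancy.
features = np.column_stack([f_lo(x_hi), np.ones_like(x_hi), x_hi])
theta, *_ = np.linalg.lstsq(features, high_fidelity(x_hi), rcond=None)

def f_multi_fidelity(x):
    return theta[0] * f_lo(x) + theta[1] + theta[2] * x

x_test = np.linspace(0, 1, 200)
err = np.max(np.abs(f_multi_fidelity(x_test) - high_fidelity(x_test)))
print(f"max error of the fused surrogate on [0, 1]: {err:.3f}")
```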
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
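To make the contrast concrete, the sketch below shows the sampling-based ELBO estimation that the summary mentions (not the SPN construction): a mean-field Bernoulli variational distribution for a small pairwise binary model, with the Monte Carlo estimate checked against brute-force enumeration. The model and its parameters are made up.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(8)
n = 6                                              # number of binary variables

# Unnormalized pairwise model: log p_tilde(x) = x^T W x + b^T x.
W = 0.3 * rng.standard_normal((n, n))
W = (W + W.T) / 2
b = 0.2 * rng.standard_normal(n)

# Mean-field Bernoulli variational distribution with parameters q_i.
q = rng.uniform(0.2, 0.8, n)

def log_p_tilde(X):
    return np.einsum("bi,ij,bj->b", X, W, X) + X @ b

def log_q(X):
    return X @ np.log(q) + (1 - X) @ np.log(1 - q)

# Sampling-based ELBO estimate: E_q[ log p_tilde(x) - log q(x) ].
samples = (rng.random((50_000, n)) < q).astype(float)
elbo_mc = np.mean(log_p_tilde(samples) - log_q(samples))

# Exact ELBO by enumerating all 2^n states (feasible only for tiny n).
states = np.array(list(product([0.0, 1.0], repeat=n)))
weights = np.exp(log_q(states))
elbo_exact = np.sum(weights * (log_p_tilde(states) - log_q(states)))

print(f"ELBO: Monte Carlo {elbo_mc:.3f} vs exact {elbo_exact:.3f}")
```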
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Weak SINDy For Partial Differential Equations [0.0]
We extend our Weak SINDy (WSINDy) framework to the setting of partial differential equations (PDEs).
The elimination of pointwise derivative approximations via the weak form enables effective machine-precision recovery of model coefficients from noise-free data.
We demonstrate WSINDy's robustness, speed and accuracy on several challenging PDEs.
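The weak-form idea can be shown on a single-term ODE toy problem (the paper treats PDEs with full candidate libraries and sparse regression): integrating against compactly supported test functions and moving the derivative onto them means no pointwise derivatives of the data are ever taken. The test functions, windows, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

lam_true = -2.0
t = np.linspace(0.0, 3.0, 601)
dt = t[1] - t[0]
x = np.exp(lam_true * t) + 0.001 * rng.standard_normal(t.size)   # noisy data of x' = lam x

# Compactly supported test function on [a, b] and its derivative.
def phi(tt, a, b):
    inside = (tt >= a) & (tt <= b)
    return np.where(inside, ((tt - a) * (b - tt)) ** 2, 0.0)

def dphi(tt, a, b):
    inside = (tt >= a) & (tt <= b)
    return np.where(inside, 2 * (tt - a) * (b - tt) * (a + b - 2 * tt), 0.0)

# Weak form of x' = lam * x:  -∫ phi' x dt = lam ∫ phi x dt  (boundary terms vanish).
lhs, rhs = [], []
for a in np.arange(0.0, 2.0, 0.25):
    b = a + 1.0
    lhs.append(-np.sum(dphi(t, a, b) * x) * dt)    # no derivative of the data needed
    rhs.append(np.sum(phi(t, a, b) * x) * dt)

lam_hat = np.dot(rhs, lhs) / np.dot(rhs, rhs)      # one-term least squares
print(f"recovered lambda: {lam_hat:.3f} (true {lam_true})")
```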
arXiv Detail & Related papers (2020-07-06T16:03:51Z)
- Extreme Theory of Functional Connections: A Physics-Informed Neural Network Method for Solving Parametric Differential Equations [0.0]
We present a physics-informed method for solving problems involving parametric differential equations (DEs) called X-TFC.
X-TFC differs from PINN and Deep-TFC: whereas PINN and Deep-TFC use a deep NN, X-TFC uses a single-layer NN, or more precisely an Extreme Learning Machine (ELM).
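A minimal sketch in that spirit, assuming a toy problem y' + y = 0 with y(0) = 1: random, fixed hidden-layer weights (the ELM part) plus a constrained expression that enforces the initial condition exactly (the TFC part), so only the linear output weights are solved for by least squares. Sizes and collocation points are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# Solve y'(x) + y(x) = 0 with y(0) = 1 on [0, 2].
n_hidden, n_colloc = 40, 100
W = 2.0 * rng.standard_normal(n_hidden)             # random, fixed input weights
b0 = 2.0 * rng.standard_normal(n_hidden)
x = np.linspace(0.0, 2.0, n_colloc)

def features(xx):
    z = np.outer(xx, W) + b0
    return np.tanh(z), (1.0 - np.tanh(z) ** 2) * W   # H(x) and dH/dx

H, dH = features(x)
H0, _ = features(np.array([0.0]))

# Constrained expression y(x) = 1 + (H(x) - H(0)) beta satisfies y(0) = 1 exactly,
# so the ODE residual  y' + y = dH beta + 1 + (H - H0) beta  is linear in beta.
A = dH + H - H0
rhs = -np.ones(n_colloc)
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

y = 1.0 + (H - H0) @ beta
print("max abs error vs exp(-x):", float(np.max(np.abs(y - np.exp(-x)))))
```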
arXiv Detail & Related papers (2020-05-15T22:51:04Z)