Error estimation for physics-informed neural networks with implicit
Runge-Kutta methods
- URL: http://arxiv.org/abs/2401.05211v1
- Date: Wed, 10 Jan 2024 15:18:56 GMT
- Title: Error estimation for physics-informed neural networks with implicit
Runge-Kutta methods
- Authors: Jochen Stiasny, Spyros Chatzivasileiadis
- Abstract summary: In this work, we propose to use the NN's predictions in a high-order implicit Runge-Kutta (IRK) method.
The residuals in the implicit system of equations can be related to the NN's prediction error; hence, we can provide an error estimate at several points along a trajectory.
We find that this error estimate highly correlates with the NN's prediction error and that increasing the order of the IRK method improves this estimate.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to accurately approximate trajectories of dynamical systems
enables their analysis, prediction, and control. Neural network (NN)-based
approximations have attracted significant interest due to fast evaluation with
good accuracy over long integration time steps. In contrast to established
numerical approximation schemes such as Runge-Kutta methods, the estimation of
the error of the NN-based approximations proves to be difficult. In this work,
we propose to use the NN's predictions in a high-order implicit Runge-Kutta
(IRK) method. The residuals in the implicit system of equations can be related
to the NN's prediction error; hence, we can provide an error estimate at
several points along a trajectory. We find that this error estimate highly
correlates with the NN's prediction error and that increasing the order of the
IRK method improves this estimate. We demonstrate this estimation methodology
for Physics-Informed Neural Networks (PINNs) on the logistic equation as an
illustrative example and then apply it to a four-state electric generator model
that is regularly used in power system modelling.
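The abstract's error-estimation idea can be sketched in a few lines: evaluate the residuals of the implicit IRK stage equations at the NN-predicted states instead of solving the implicit system. Below is a minimal illustrative sketch for the logistic equation with the two-stage Gauss-Legendre method, where the exact solution plus a small perturbation stands in for the NN's prediction; the step size, perturbation magnitude, and all function names are illustrative assumptions, not taken from the paper.

```python
import math

def f(y):
    """Right-hand side of the logistic equation y' = y(1 - y)."""
    return y * (1.0 - y)

def exact(t):
    """Closed-form logistic solution with y(0) = 0.5."""
    return 1.0 / (1.0 + math.exp(-t))

# Butcher tableau of the two-stage Gauss-Legendre IRK method (order 4)
s3 = math.sqrt(3.0) / 6.0
C = [0.5 - s3, 0.5 + s3]
A = [[0.25, 0.25 - s3],
     [0.25 + s3, 0.25]]

def stage_residuals(y0, h, Y):
    """Residuals of the implicit stage equations
    Y_i = y0 + h * sum_j A[i][j] * f(Y_j),
    evaluated at given stage states Y without solving the system."""
    return [abs(Y[i] - y0 - h * sum(A[i][j] * f(Y[j]) for j in range(2)))
            for i in range(2)]

h, y0 = 0.1, 0.5
# Stand-in for NN predictions at the collocation times C[i] * h:
Y_clean = [exact(C[i] * h) for i in range(2)]   # error-free "prediction"
Y_noisy = [y + 1e-2 for y in Y_clean]           # prediction with 1e-2 error

r_clean = max(stage_residuals(y0, h, Y_clean))
r_noisy = max(stage_residuals(y0, h, Y_noisy))
print(r_clean, r_noisy)  # the residual tracks the prediction error
```

With an accurate prediction the residual reduces to the small collocation defect of the IRK scheme, while a prediction error of size 1e-2 shows up as a residual of roughly the same size, which is the correlation the paper exploits.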
Related papers
- Bias-Reduced Neural Networks for Parameter Estimation in Quantitative MRI [0.13654846342364307]
We develop neural network (NN)-based quantitative MRI parameter estimators with minimal bias and a variance close to the Cramér-Rao bound.
arXiv Detail & Related papers (2023-11-13T20:41:48Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
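The mode reduction described in this entry is commonly realized with proper orthogonal decomposition (POD) via the singular value decomposition: a snapshot matrix of temporal states is compressed into a few spatial modes and their temporal coefficients. A hedged sketch on synthetic snapshot data (the matrix sizes, rank, and noise level are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is the flow state at one of
# n_time time steps; the data is built from 3 underlying spatial structures.
rng = np.random.default_rng(0)
n_space, n_time, r = 200, 50, 3
modes_true = rng.standard_normal((n_space, r))
coeffs_true = rng.standard_normal((r, n_time))
X = modes_true @ coeffs_true + 1e-6 * rng.standard_normal((n_space, n_time))

# POD via the SVD: n_time snapshots are compressed into r spatial modes
# plus r temporal coefficient series.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                  # spatial POD modes (n_space x r)
a = np.diag(s[:r]) @ Vt[:r, :]  # temporal coefficients (r x n_time)

X_rom = Phi @ a                 # rank-r reconstruction
err = float(np.linalg.norm(X - X_rom) / np.linalg.norm(X))
print(err)  # relative error is tiny: r modes capture almost all energy
```

The deep learning architectures mentioned in the entry would then predict the r temporal coefficient series `a` rather than the full state.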
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has improved generalization ability when estimating parameter values for operating conditions not used in training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions and learns the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- Advantage of Deep Neural Networks for Estimating Functions with Singularity on Hypersurfaces [23.21591478556582]
We develop a minimax rate analysis to describe the reason that deep neural networks (DNNs) perform better than other standard methods.
This study addresses this question by considering the estimation of a class of non-smooth functions that have singularities on hypersurfaces.
arXiv Detail & Related papers (2020-11-04T12:51:14Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- A Deterministic Approximation to Neural SDEs [38.23826389188657]
We show that obtaining well-calibrated uncertainty estimations from NSDEs is computationally prohibitive.
We develop a computationally affordable deterministic scheme which accurately approximates the transition kernel.
Our method also improves prediction accuracy thanks to the numerical stability of deterministic training.
arXiv Detail & Related papers (2020-06-16T08:00:26Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
- Probabilistic solution of chaotic dynamical system inverse problems using Bayesian Artificial Neural Networks [0.0]
Inverse problems for chaotic systems are numerically challenging.
Small perturbations in model parameters can cause very large changes in estimated forward trajectories.
Bayesian Artificial Neural Networks can be used to simultaneously fit a model and estimate model parameter uncertainty.
arXiv Detail & Related papers (2020-05-26T20:35:02Z)
- Efficient Uncertainty Quantification for Dynamic Subsurface Flow with Surrogate by Theory-guided Neural Network [0.0]
We propose a methodology for efficient uncertainty quantification for dynamic subsurface flow with a surrogate constructed by the Theory-guided Neural Network (TgNN).
Parameters, time, and location comprise the input of the neural network, while the quantity of interest is the output.
The trained neural network can predict solutions of subsurface flow problems with new parameters.
arXiv Detail & Related papers (2020-04-25T12:41:57Z)
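The surrogate's input/output layout described in the entry above (parameters, time, and location in; quantity of interest out) can be illustrated with a simple stand-in regression model in place of the actual TgNN; the closed-form test function, polynomial feature map, and sampling ranges below are illustrative assumptions only, not details of the paper.

```python
import numpy as np

# Hypothetical stand-in problem: the quantity of interest u depends on a
# model parameter p, time t, and location x (a made-up closed form).
def u_true(p, t, x):
    return p * x**2 * (1.0 - t)

rng = np.random.default_rng(1)
n = 2000
P = rng.uniform(0.5, 1.5, n)   # parameters
T = rng.uniform(0.0, 1.0, n)   # time
X = rng.uniform(0.0, 1.0, n)   # location
y = u_true(P, T, X)            # quantity of interest (the output)

# Polynomial least squares stands in for the neural network: it has the
# same (parameter, time, location) -> quantity-of-interest interface.
def features(p, t, x):
    cols = [np.ones_like(p)]
    for i in range(5):
        for j in range(5):
            for k in range(5):
                if 0 < i + j + k <= 4:
                    cols.append(p**i * t**j * x**k)
    return np.stack(cols, axis=1)

w, *_ = np.linalg.lstsq(features(P, T, X), y, rcond=None)

# Predict at a parameter value not drawn for training
p_new = np.full(50, 1.1)
t_new = np.linspace(0.0, 1.0, 50)
x_new = np.full(50, 0.3)
pred = features(p_new, t_new, x_new) @ w
err = float(np.max(np.abs(pred - u_true(p_new, t_new, x_new))))
print(err)  # the fitted surrogate reproduces u at unseen inputs
```

As in the entry, once the surrogate is fitted it can be evaluated cheaply at new parameter values, which is what makes surrogate-based uncertainty quantification efficient.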
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.