Probabilistic Gradients for Fast Calibration of Differential Equation
Models
- URL: http://arxiv.org/abs/2009.04239v2
- Date: Mon, 22 Feb 2021 08:08:35 GMT
- Title: Probabilistic Gradients for Fast Calibration of Differential Equation
Models
- Authors: Jon Cockayne and Andrew B. Duncan
- Abstract summary: A crucial bottleneck in state-of-the-art calibration methods is the calculation of local sensitivities.
We present a new probabilistic approach to computing local sensitivities.
- Score: 1.066048003460524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Calibration of large-scale differential equation models to observational or
experimental data is a widespread challenge throughout applied sciences and
engineering. A crucial bottleneck in state-of-the-art calibration methods is
the calculation of local sensitivities, i.e. derivatives of the loss function
with respect to the estimated parameters, which often necessitates several
numerical solves of the underlying system of partial or ordinary differential
equations. In this paper we present a new probabilistic approach to computing
local sensitivities. The proposed method has several advantages over classical
methods. Firstly, it operates within a constrained computational budget and
provides a probabilistic quantification of uncertainty incurred in the
sensitivities from this constraint. Secondly, information from previous
sensitivity estimates can be recycled in subsequent computations, reducing the
overall computational effort for iterative gradient-based calibration methods.
The methodology presented is applied to two challenging test problems and
compared against classical methods.
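To make the sensitivity bottleneck concrete, the following minimal sketch shows the classical finite-difference baseline the abstract alludes to: each component of the loss gradient dL/dtheta costs at least one additional numerical solve of the underlying ODE. The model, observation times, and parameter values are hypothetical, and this is not the paper's probabilistic estimator.

```python
# Illustrative sketch (not the paper's method): finite-difference sensitivities
# for calibrating a simple, hypothetical ODE model. Each parameter requires one
# extra solve of the ODE, which is the cost the probabilistic approach targets.
import numpy as np
from scipy.integrate import solve_ivp

t_obs = np.linspace(0.0, 10.0, 25)            # observation times (hypothetical)
theta_true = np.array([0.7, 0.3])             # parameters used to generate fake data


def solve_model(theta):
    """Solve the damped-growth ODE du/dt = theta[0]*u - theta[1]*u**2."""
    rhs = lambda t, u: theta[0] * u - theta[1] * u ** 2
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), y0=[0.1], t_eval=t_obs, rtol=1e-8)
    return sol.y[0]


rng = np.random.default_rng(0)
y_obs = solve_model(theta_true) + 0.01 * rng.standard_normal(t_obs.size)


def loss(theta):
    """Sum-of-squares calibration loss against the synthetic observations."""
    return 0.5 * np.sum((solve_model(theta) - y_obs) ** 2)


def fd_gradient(theta, h=1e-6):
    """Forward-difference gradient: one additional ODE solve per parameter."""
    base = loss(theta)
    grad = np.empty_like(theta)
    for i in range(theta.size):
        pert = theta.copy()
        pert[i] += h
        grad[i] = (loss(pert) - base) / h
    return grad


print(fd_gradient(np.array([0.5, 0.5])))
```

The probabilistic approach described in the abstract targets exactly this repeated-solve cost, returning gradient estimates with quantified uncertainty under a fixed computational budget.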
Related papers
- Solving Fractional Differential Equations on a Quantum Computer: A Variational Approach [0.1492582382799606]
We introduce an efficient variational hybrid quantum-classical algorithm designed for solving Caputo time-fractional partial differential equations.
Our results indicate that solution fidelity is insensitive to the fractional index and that gradient evaluation cost scales economically with the number of time steps.
arXiv Detail & Related papers (2024-06-13T02:27:16Z) - Towards stable real-world equation discovery with assessing
differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - About optimal loss function for training physics-informed neural
networks under respecting causality [0.0]
The advantage of using the modified problem within the physics-informed neural network (PINN) methodology is that the loss function can be represented as a single term associated with the differential equations.
Numerical experiments have been carried out for a number of problems, demonstrating the accuracy of the proposed methods.
arXiv Detail & Related papers (2023-04-05T08:10:40Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a given function uniquely determines a differential equation.
arXiv Detail & Related papers (2022-10-15T17:32:49Z) - Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z) - Probabilistic Numerical Method of Lines for Time-Dependent Partial
Differential Equations [20.86460521113266]
Current state-of-the-art PDE solvers treat the space- and time-dimensions separately, serially, and with black-box algorithms.
We introduce a probabilistic version of a technique called method of lines to fix this issue.
Joint quantification of space- and time-uncertainty becomes possible without losing the performance benefits of well-tuned ODE solvers.
arXiv Detail & Related papers (2021-10-22T15:26:05Z) - Galerkin Neural Networks: A Framework for Approximating Variational
Equations with Error Control [0.0]
We present a new approach to using neural networks to approximate the solutions of variational equations.
We use a sequence of finite-dimensional subspaces whose basis functions are realizations of a sequence of neural networks.
arXiv Detail & Related papers (2021-05-28T20:25:40Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Methods to Recover Unknown Processes in Partial Differential Equations
Using Data [2.836285493475306]
We study the problem of identifying unknown processes embedded in a time-dependent partial differential equation (PDE) using observational data.
We first conduct theoretical analysis and derive conditions to ensure the solvability of the problem.
We then present a set of numerical approaches, including a Galerkin-type algorithm and a collocation-type algorithm (a generic collocation-style sketch follows this list).
arXiv Detail & Related papers (2020-03-05T00:50:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.