Methods to Recover Unknown Processes in Partial Differential Equations Using Data
- URL: http://arxiv.org/abs/2003.02387v1
- Date: Thu, 5 Mar 2020 00:50:08 GMT
- Title: Methods to Recover Unknown Processes in Partial Differential Equations Using Data
- Authors: Zhen Chen, Kailiang Wu, Dongbin Xiu
- Abstract summary: We study the problem of identifying unknown processes embedded in time-dependent partial differential equations (PDEs) using observational data.
We first conduct theoretical analysis and derive conditions to ensure the solvability of the problem.
We then present a set of numerical approaches, including a Galerkin-type algorithm and a collocation-type algorithm.
- Score: 2.836285493475306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of identifying unknown processes embedded in time-dependent partial differential equations (PDEs) using observational data, with an application to advection-diffusion type PDEs. We first conduct theoretical analysis and derive conditions that ensure the solvability of the problem. We then present a set of numerical approaches, including a Galerkin-type algorithm and a collocation-type algorithm. Analysis of the algorithms is presented, along with their implementation details. The Galerkin algorithm is more suitable for practical situations, particularly those with noisy data, as it avoids using derivative/gradient data. Various numerical examples are then presented to demonstrate the performance and properties of the numerical methods.
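To make the setting concrete, below is a minimal, collocation-flavored sketch of the recovery problem, not the paper's algorithm: synthetic data for an advection-diffusion-reaction equation u_t = c u_x + d u_xx + f(u) are generated with a known process f, which is then recovered in a polynomial basis by least squares on finite-difference residuals. All coefficients, grid sizes, and the form of f are illustrative assumptions; the paper's Galerkin variant notably avoids the derivative estimates used here.

```python
# Hedged sketch only: collocation-style recovery of an unknown process f(u)
# in u_t = c*u_x + d*u_xx + f(u).  All problem settings are made up.
import numpy as np

c, d = 0.5, 0.05                          # known advection / diffusion coefficients
f_true = lambda u: u - 0.8 * u**3         # "unknown" process to be recovered

# --- generate synthetic data with a simple explicit scheme (periodic domain) ---
nx, nt, L, T = 128, 4000, 2 * np.pi, 1.0
dx, dt = L / nx, T / nt
x = np.linspace(0.0, L, nx, endpoint=False)
u = np.exp(np.cos(x))                     # smooth initial condition
snaps = [u.copy()]
for _ in range(nt):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (c * ux + d * uxx + f_true(u))
    snaps.append(u.copy())
U = np.array(snaps)                       # shape (nt+1, nx)

# --- collocation-type recovery: fit f(u) = sum_k a_k u^k by least squares ------
Ut  = (U[2:] - U[:-2]) / (2 * dt)         # centered time derivative from data
Um  = U[1:-1]                             # matching interior snapshots
Ux  = (np.roll(Um, -1, 1) - np.roll(Um, 1, 1)) / (2 * dx)
Uxx = (np.roll(Um, -1, 1) - 2 * Um + np.roll(Um, 1, 1)) / dx**2
rhs = (Ut - c * Ux - d * Uxx).ravel()     # residual attributed to the unknown process
deg = 4
basis = np.vstack([Um.ravel()**k for k in range(deg + 1)]).T   # monomials in u
coef, *_ = np.linalg.lstsq(basis, rhs, rcond=None)
print("recovered coefficients of f(u):", np.round(coef, 3))
# should be close to [0, 1, 0, -0.8, 0] for this synthetic setup
```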
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
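The derivative-estimation issue behind the equation-discovery entry above can be illustrated with a short sketch (not the cited paper's method): plain finite differences versus a smoothing Savitzky-Golay differentiator on noisy samples. Signal, noise level, and filter settings are all illustrative assumptions.

```python
# Hedged sketch: estimating du/dt from noisy data, finite differences vs.
# a Savitzky-Golay smoothing differentiator.
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0.0, 2 * np.pi, 400)
dt = t[1] - t[0]
u_noisy = np.sin(t) + 0.02 * np.random.default_rng(0).standard_normal(t.size)

du_fd = np.gradient(u_noisy, dt)                       # plain finite differences
du_sg = savgol_filter(u_noisy, window_length=31,
                      polyorder=3, deriv=1, delta=dt)  # smoothed derivative
du_ref = np.cos(t)                                     # exact derivative

print("FD  RMS error:", np.sqrt(np.mean((du_fd - du_ref) ** 2)))
print("S-G RMS error:", np.sqrt(np.mean((du_sg - du_ref) ** 2)))
```

On noisy data the finite-difference estimate amplifies the noise by roughly 1/dt, while the smoothed estimate stays close to the true derivative, which is why derivative quality matters for downstream equation discovery.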
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a function solves a differential equation uniquely.
arXiv Detail & Related papers (2022-10-15T17:32:49Z)
- D-CIPHER: Discovery of Closed-form Partial Differential Equations [80.46395274587098]
We propose D-CIPHER, which is robust to measurement artifacts and can uncover a new and very general class of differential equations.
We further design a novel optimization procedure, CoLLie, to help D-CIPHER search through this class efficiently.
arXiv Detail & Related papers (2022-06-21T17:59:20Z)
- Accelerating numerical methods by gradient-based meta-solving [15.90188271828615]
In science and engineering applications, it is often required to solve similar computational problems repeatedly.
We propose a gradient-based algorithm to solve them in a unified way.
We demonstrate the performance and versatility of our method through theoretical analysis and numerical experiments.
arXiv Detail & Related papers (2022-06-17T07:31:18Z)
- Probabilistic Numerical Method of Lines for Time-Dependent Partial Differential Equations [20.86460521113266]
Current state-of-the-art PDE solvers treat the space- and time-dimensions separately, serially, and with black-box algorithms.
We introduce a probabilistic version of a technique called method of lines to fix this issue.
Joint quantification of space- and time-uncertainty becomes possible without losing the performance benefits of well-tuned ODE solvers.
arXiv Detail & Related papers (2021-10-22T15:26:05Z)
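For context on the probabilistic method-of-lines entry above, here is a minimal sketch of the classical (non-probabilistic) method of lines it builds on: discretize space, then hand the resulting ODE system to a time integrator. The problem setup (heat equation, periodic domain, grid size) is an illustrative assumption.

```python
# Hedged sketch: classical method of lines for u_t = kappa * u_xx.
import numpy as np
from scipy.integrate import solve_ivp

nx, L, kappa = 64, 2 * np.pi, 0.1
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)

def rhs(t, u):
    # second-order central difference for u_xx on a periodic grid
    return kappa * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

sol = solve_ivp(rhs, (0.0, 1.0), y0=np.sin(x), method="RK45", rtol=1e-6)
print("max of u(x, t=1):", sol.y[:, -1].max())   # ~exp(-kappa) for the sine mode
```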
- Data-Driven Theory-guided Learning of Partial Differential Equations using SimultaNeous Basis Function Approximation and Parameter Estimation (SNAPE) [0.0]
We propose a technique for parameter estimation of partial differential equations (PDEs) that is robust against high levels of noise (100 %).
SNAPE demonstrates its applicability on various complex dynamic systems spanning a wide range of scientific domains.
The method systematically combines the knowledge of well-established scientific theories and the concepts of data science to infer the properties of the process from the observed data.
arXiv Detail & Related papers (2021-09-14T22:54:30Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
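The policy-evaluation setting mentioned in the projected fixed-point entry above can be made concrete with a toy sketch (not the cited paper's analysis): TD(0) with linear function approximation on a small random Markov chain. The chain, features, and step size are made-up choices.

```python
# Hedged sketch: TD(0) with linear function approximation for policy evaluation.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma = 5, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition matrix
r = rng.uniform(0.0, 1.0, size=n_states)              # expected rewards
Phi = rng.standard_normal((n_states, 2))              # low-dimensional features

theta = np.zeros(2)
s = 0
for _ in range(100_000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += 1e-3 * td_error * Phi[s]                  # TD(0) update
    s = s_next

v_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)   # exact value function
print("TD estimate:", np.round(Phi @ theta, 3))
print("exact value:", np.round(v_true, 3))
# the gap between the two reflects projection onto the 2-d feature subspace,
# which is exactly the approximation error such oracle inequalities quantify
```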
- Numerically Solving Parametric Families of High-Dimensional Kolmogorov Partial Differential Equations via Deep Learning [8.019491256870557]
We present a deep learning algorithm for the numerical solution of parametric families of high-dimensional linear Kolmogorov partial differential equations (PDEs).
Our method is based on reformulating the numerical approximation of a whole family of Kolmogorov PDEs as a single statistical learning problem using the Feynman-Kac formula.
We show that a single deep neural network trained on simulated data is capable of learning the solution functions of an entire family of PDEs on a full space-time region.
arXiv Detail & Related papers (2020-11-09T17:57:11Z)
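The Feynman-Kac reformulation underlying the Kolmogorov-PDE entry above can be sketched as follows (this is the general idea, not the cited paper's training pipeline): for the heat-type Kolmogorov PDE u_t = 0.5 sigma^2 u_xx with u(x, 0) = phi(x), the solution satisfies u(x, t) = E[phi(x + sigma W_t)], so (input, label) pairs for a supervised learning problem can be generated purely by Monte Carlo simulation. The payoff phi, sigma, and sample sizes are illustrative assumptions.

```python
# Hedged sketch: generating supervised training data via Feynman-Kac Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
sigma, t = 0.7, 0.5
phi = lambda x: np.maximum(x, 0.0)        # illustrative initial condition / payoff

def feynman_kac_label(x, n_paths=20_000):
    # Monte Carlo estimate of E[phi(x + sigma * W_t)]
    w = rng.standard_normal(n_paths) * np.sqrt(t)
    return phi(x + sigma * w).mean()

xs = rng.uniform(-2.0, 2.0, size=16)                 # sampled training inputs
ys = np.array([feynman_kac_label(x) for x in xs])    # Monte Carlo labels
# (xs, ys) would then serve as training data for a neural-network surrogate
print(np.c_[xs[:4], ys[:4]])
```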
- Deep-learning of Parametric Partial Differential Equations from Sparse and Noisy Data [2.4431531175170362]
In this work, a new framework that combines a neural network, a genetic algorithm, and adaptive methods is put forward to address all of these challenges simultaneously.
A trained neural network is used to calculate derivatives and generate a large amount of meta-data, which addresses the problem of sparse, noisy data.
Next, a genetic algorithm is used to discover the form of the PDEs and the corresponding coefficients with an incomplete candidate library.
A two-step adaptive method is introduced to discover parametric PDEs with spatially- or temporally-varying coefficients.
arXiv Detail & Related papers (2020-05-16T09:09:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.