Deep-learning based discovery of partial differential equations in integral form from sparse and noisy data
- URL: http://arxiv.org/abs/2011.11981v1
- Date: Tue, 24 Nov 2020 09:18:39 GMT
- Title: Deep-learning based discovery of partial differential equations in integral form from sparse and noisy data
- Authors: Hao Xu, Dongxiao Zhang, Nanzhe Wang
- Abstract summary: A new framework combining deep learning and the integral form is proposed to handle the above-mentioned problems simultaneously.
Our proposed algorithm is more robust to noise and more accurate than existing methods, owing to the utilization of the integral form.
- Score: 2.745859263816099
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data-driven discovery of partial differential equations (PDEs) has attracted increasing attention in recent years. Although significant progress has been made, certain unresolved issues remain. For example, for PDEs with high-order derivatives, the performance of existing methods is unsatisfactory, especially when the data are sparse and noisy. It is also difficult to discover heterogeneous parametric PDEs, in which heterogeneous parameters are embedded in the partial differential operators. In this work, a new framework combining deep learning and the integral form is proposed to handle the above-mentioned problems simultaneously and to improve the accuracy and stability of PDE discovery. In the framework, a deep neural network is first trained on the observation data to generate meta-data and calculate derivatives. Then, a unified integral form is defined, and a genetic algorithm is employed to discover the best structure. Finally, the parameter values are calculated, and it is identified whether each parameter is a constant or a variable. Numerical experiments demonstrate that the proposed algorithm is more robust to noise and more accurate than existing methods, owing to the utilization of the integral form. The proposed algorithm is also able to discover PDEs with high-order derivatives or heterogeneous parameters accurately from sparse and noisy data.
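The abstract describes a pipeline in which a neural-network surrogate is fitted to sparse, noisy observations, derivatives are obtained by automatic differentiation on dense meta-data, and candidate PDE structures are evaluated in integral form. The sketch below is a minimal, hypothetical illustration of the first two stages; the network size, the candidate terms (u_xx and u*u_x), the single spatial window, and the trapezoidal quadrature are assumptions for illustration, not the authors' implementation, and the genetic-algorithm search over structures is omitted.

```python
# Minimal sketch (assumed PyTorch surrogate + trapezoidal integral form);
# this is NOT the authors' code, only an illustration of the idea.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Small MLP u_theta(x, t) fitted to sparse, noisy observations."""
    def __init__(self, width=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def fit(model, x_obs, t_obs, u_obs, epochs=5000, lr=1e-3):
    """Stage 1: fit the surrogate to the (sparse, noisy) observation data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((model(x_obs, t_obs) - u_obs) ** 2)
        loss.backward()
        opt.step()
    return model

def derivatives(model, x, t):
    """Automatic differentiation of the surrogate to obtain u_t, u_x, u_xx."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u, u_t, u_x, u_xx

def integral_residual(model, x_grid, t0):
    """Stage 2 (assumed form): evaluate one candidate structure in integral form
    on a spatial window at time t0, e.g. ∫ u_t dx vs ∫ u_xx dx and ∫ u*u_x dx."""
    t = torch.full_like(x_grid, t0)
    u, u_t, u_x, u_xx = derivatives(model, x_grid, t)
    lhs = torch.trapezoid(u_t.squeeze(), x_grid.squeeze())
    rhs_terms = torch.stack([
        torch.trapezoid(u_xx.squeeze(), x_grid.squeeze()),
        torch.trapezoid((u * u_x).squeeze(), x_grid.squeeze()),
    ])
    return lhs, rhs_terms
```

In a pipeline like the one the abstract outlines, such integral residuals collected over many windows would feed a genetic-algorithm search over candidate structures, followed by a least-squares fit to obtain the coefficients and a check of whether they are constant or varying.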
Related papers
- Physics-informed AI and ML-based sparse system identification algorithm for discovery of PDE's representing nonlinear dynamic systems [0.0]
The proposed method is demonstrated to discover a variety of differential equations at various noise levels, including three-dimensional, fourth-order, and stiff equations.
The parameter estimation converges accurately to the true values with a small coefficient of variation, suggesting robustness to noise.
arXiv Detail & Related papers (2024-10-13T21:48:51Z)
- Adaptation of uncertainty-penalized Bayesian information criterion for parametric partial differential equation discovery [1.1049608786515839]
We introduce an extension of the uncertainty-penalized Bayesian information criterion (UBIC) to solve parametric PDE discovery problems efficiently.
UBIC uses quantified PDE uncertainty over different temporal or spatial points to prevent overfitting in model selection.
We show that our extended UBIC can identify the true number of terms and their varying coefficients accurately, even in the presence of noise.
arXiv Detail & Related papers (2024-08-15T12:10:50Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite-difference-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared with the state of the art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- D-CIPHER: Discovery of Closed-form Partial Differential Equations [80.46395274587098]
We propose D-CIPHER, which is robust to measurement artifacts and can uncover a new and very general class of differential equations.
We further design a novel optimization procedure, CoLLie, to help D-CIPHER search through this class efficiently.
arXiv Detail & Related papers (2022-06-21T17:59:20Z)
- Measuring dissimilarity with diffeomorphism invariance [94.02751799024684]
We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces.
We prove that DID enjoys properties which make it relevant for theoretical study and practical use.
arXiv Detail & Related papers (2022-02-11T13:51:30Z)
- Robust discovery of partial differential equations in complex situations [3.7314701799132686]
A robust deep learning-genetic algorithm (R-DLGA) that incorporates the physics-informed neural network (PINN) is proposed in this work.
The stability and accuracy of the proposed R-DLGA in several complex situations are examined as a proof of concept.
Results prove that the proposed framework is able to calculate derivatives accurately with the optimization of PINN.
arXiv Detail & Related papers (2021-05-31T02:11:59Z)
- Weak SINDy For Partial Differential Equations [0.0]
We extend our Weak SINDy (WSINDy) framework to the setting of partial differential equations (PDEs).
The elimination of pointwise derivative approximations via the weak form enables effective machine-precision recovery of model coefficients from noise-free data; a generic sketch of this weak-form idea is given after this list.
We demonstrate WSINDy's robustness, speed, and accuracy on several challenging PDEs.
arXiv Detail & Related papers (2020-07-06T16:03:51Z)
- Deep-learning of Parametric Partial Differential Equations from Sparse and Noisy Data [2.4431531175170362]
In this work, a new framework that combines a neural network, a genetic algorithm, and adaptive methods is put forward to address all of these challenges simultaneously.
A trained neural network is utilized to calculate derivatives and generate a large amount of meta-data, which addresses the problem of sparse and noisy data.
Next, a genetic algorithm is utilized to discover the form of the PDEs and the corresponding coefficients with an incomplete candidate library.
A two-step adaptive method is introduced to discover parametric PDEs with spatially- or temporally-varying coefficients.
arXiv Detail & Related papers (2020-05-16T09:09:57Z)
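Several of the approaches above, notably WSINDy's weak form and the integral form of the main paper, avoid differentiating noisy data pointwise by integrating candidate terms against a smooth, compactly supported test function and moving derivatives onto that test function via integration by parts. The sketch below is a minimal, generic illustration of that idea; the bump-style polynomial test function, the single 1-D window, and the candidate term u_x are assumptions for illustration, not any cited paper's implementation.

```python
# Generic weak-form sketch (assumptions noted above); not WSINDy's code.
# Integrate the data against derivatives of a test function so that no
# derivative of the noisy data is ever computed.
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule over sample points."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def test_function(x, a, b):
    """Polynomial bump phi = ((x-a)(b-x))^2 and its derivative on [a, b];
    both phi and phi' vanish at the endpoints, so boundary terms drop out."""
    s = (x - a) * (b - x)
    phi = s ** 2
    dphi = 2.0 * s * ((b - x) - (x - a))
    return phi, dphi

def weak_features(u, x, a, b):
    """Weak-form features on the window [a, b]:
    ∫ phi * u dx  and  ∫ phi * u_x dx = -∫ phi' * u dx (integration by parts)."""
    mask = (x >= a) & (x <= b)
    xs, us = x[mask], u[mask]
    phi, dphi = test_function(xs, a, b)
    col_u  = trapz(phi * us, xs)     # feature for the term u
    col_ux = -trapz(dphi * us, xs)   # feature for u_x, without differentiating u
    return col_u, col_ux
```

Collecting such features over many windows yields a linear system in the candidate coefficients, which can then be solved by sparse regression or, as in the main paper's framework, searched over with a genetic algorithm.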
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.