Physics Informed Deep Kernel Learning
- URL: http://arxiv.org/abs/2006.04976v2
- Date: Tue, 18 Jan 2022 19:03:54 GMT
- Title: Physics Informed Deep Kernel Learning
- Authors: Zheng Wang, Wei Xing, Robert Kirby, Shandian Zhe
- Abstract summary: Physics Informed Deep Kernel Learning (PI-DKL) exploits physics knowledge represented by differential equations with latent sources.
For efficient and effective inference, we marginalize out the latent variables and derive a collapsed model evidence lower bound (ELBO).
- Score: 24.033468062984458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep kernel learning is a promising combination of deep neural networks and
nonparametric function learning. However, as a data-driven approach, the
performance of deep kernel learning can still be restricted by scarce or
insufficient data, especially in extrapolation tasks. To address these
limitations, we propose Physics Informed Deep Kernel Learning (PI-DKL) that
exploits physics knowledge represented by differential equations with latent
sources. Specifically, we use the posterior function sample of the Gaussian
process as the surrogate for the solution of the differential equation, and
construct a generative component to integrate the equation in a principled
Bayesian hybrid framework. For efficient and effective inference, we
marginalize out the latent variables in the joint probability and derive a
collapsed model evidence lower bound (ELBO), based on which we develop a
stochastic model estimation algorithm. Our ELBO can be viewed as a nice,
interpretable posterior regularization objective. On synthetic datasets and
real-world applications, we show the advantage of our approach in both
prediction accuracy and uncertainty quantification.
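As a concrete, unofficial illustration of the general pattern (a deep kernel GP whose training objective is penalized by a differential-equation residual at collocation points), here is a minimal PyTorch sketch on the toy ODE u'(x) = -u(x). It is not the paper's collapsed-ELBO algorithm; all names and hyperparameters are illustrative assumptions.
```python
# Minimal sketch (not the paper's collapsed-ELBO algorithm): a deep kernel GP
# trained with an added penalty on the residual of the toy ODE u'(x) = -u(x).
import torch

torch.manual_seed(0)

class DeepRBFKernel(torch.nn.Module):
    """RBF kernel on features from a small MLP, i.e. a basic deep kernel."""
    def __init__(self, dim=1, feat=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, feat))
        self.log_ls = torch.nn.Parameter(torch.zeros(()))  # log lengthscale
        self.log_sf = torch.nn.Parameter(torch.zeros(()))  # log signal scale

    def forward(self, x1, x2):
        z1, z2 = self.net(x1), self.net(x2)
        d2 = (z1.unsqueeze(1) - z2.unsqueeze(0)).pow(2).sum(-1)
        return self.log_sf.exp() ** 2 * torch.exp(-0.5 * d2 / self.log_ls.exp() ** 2)

def gp_nll(K, y, noise=1e-2):
    """Exact GP negative log marginal likelihood (up to a constant)."""
    L = torch.linalg.cholesky(K + noise * torch.eye(y.shape[0]))
    return 0.5 * (y * torch.cholesky_solve(y, L)).sum() + L.diagonal().log().sum()

x = torch.linspace(0.0, 1.0, 5).reshape(-1, 1)   # scarce training inputs
y = torch.exp(-x)                                # u' = -u with u(0) = 1
xc = torch.linspace(0.1, 2.0, 20).reshape(-1, 1).requires_grad_(True)

kernel = DeepRBFKernel()
opt = torch.optim.Adam(kernel.parameters(), lr=1e-2)
for step in range(500):
    opt.zero_grad()
    K = kernel(x, x)
    L = torch.linalg.cholesky(K + 1e-2 * torch.eye(x.shape[0]))
    mu = kernel(xc, x) @ torch.cholesky_solve(y, L)  # posterior mean surrogate
    du = torch.autograd.grad(mu.sum(), xc, create_graph=True)[0]
    residual = ((du + mu) ** 2).mean()               # violation of u' + u = 0
    loss = gp_nll(K, y) + 10.0 * residual            # physics as regularizer
    loss.backward()
    opt.step()
```
The physics penalty lets the extrapolation region (x > 1) be shaped by the equation rather than by data alone, which is the qualitative behavior the abstract targets.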
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled stochastic differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
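As a naive illustration of the estimation problem (not the paper's estimator, and without its guarantees), the following NumPy sketch fits drift coefficients by least squares on Euler-Maruyama increments and reads the diffusion off the residual quadratic variation; the toy SDE and all names are assumptions.
```python
# Naive sketch (not the paper's method): recover drift and diffusion of a 1-D
# controlled SDE  dX = (a*X + b*u) dt + sigma(X) dW  from one observed path.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 50_000
a_true, b_true = -1.0, 0.5
sigma = lambda x: 0.3 * np.sqrt(1.0 + x ** 2)   # non-uniform diffusion

x = np.empty(n); x[0] = 1.0
u = np.sin(np.linspace(0.0, 20.0, n))           # known control input
for k in range(n - 1):                          # Euler-Maruyama simulation
    dw = rng.normal(0.0, np.sqrt(dt))
    x[k + 1] = x[k] + (a_true * x[k] + b_true * u[k]) * dt + sigma(x[k]) * dw

dx = np.diff(x)
F = np.column_stack([x[:-1], u[:-1]])           # drift features (x, u)
coef, *_ = np.linalg.lstsq(F, dx / dt, rcond=None)
print("drift coefficients (a, b) ~", coef)      # approx (-1.0, 0.5)

res = dx - (F @ coef) * dt                      # remove estimated drift
sigma2_hat = res ** 2 / dt                      # pointwise sigma(x)^2 estimate
print("sigma^2 near x = 0:", sigma2_hat[np.abs(x[:-1]) < 0.1].mean(),
      "vs true", 0.3 ** 2)
```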
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on kernel learning and Bayesian spike-and-slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
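For a flavor of this pipeline, the sketch below smooths noisy data with kernel ridge regression (whose derivatives are available in closed form) and then runs sequentially thresholded least squares over a term library. The thresholding is only a stand-in for the paper's spike-and-slab EP-EM inference, and the logistic toy problem is an assumption.
```python
# Sketch of kernel-based equation discovery; thresholded least squares stands
# in for the paper's spike-and-slab EP-EM (an illustrative substitution).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 60)
u_true = 1.0 / (1.0 + 9.0 * np.exp(-t))            # logistic: u' = u - u^2
u_obs = u_true + 0.01 * rng.normal(size=t.size)    # sparse, noisy observations

# Kernel ridge regression smooths u and gives derivatives in closed form.
ell, lam = 0.5, 1e-4
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)
alpha = np.linalg.solve(K + lam * np.eye(t.size), u_obs)
dK = -(t[:, None] - t[None, :]) / ell ** 2 * K     # d/dt of the RBF kernel
u_hat, du_hat = K @ alpha, dK @ alpha

# Library of candidate terms, then sequentially thresholded least squares.
lib = np.column_stack([np.ones_like(t), u_hat, u_hat ** 2, u_hat ** 3])
w, *_ = np.linalg.lstsq(lib, du_hat, rcond=None)
for _ in range(5):
    small = np.abs(w) < 0.1                        # hard sparsity threshold
    w[small] = 0.0
    w[~small], *_ = np.linalg.lstsq(lib[:, ~small], du_hat, rcond=None)
print("weights for [1, u, u^2, u^3]:", np.round(w, 2))  # approx [0, 1, -1, 0]
```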
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Parallel and Limited Data Voice Conversion Using Stochastic Variational
Deep Kernel Learning [2.5782420501870296]
This paper proposes a voice conversion method that works with limited data.
It is based on stochastic variational deep kernel learning (SVDKL).
This makes it possible to estimate non-smooth and more complex functions.
arXiv Detail & Related papers (2023-09-08T16:32:47Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
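As a compact illustration of the Laplace machinery in a BO-style loop (restricted to the network's last layer for brevity, whereas the paper studies the full linearized Laplace), here is a sketch with an assumed toy objective; all names are illustrative.
```python
# Last-layer Laplace sketch for BO-style uncertainty: fit a network to MAP,
# then treat the last layer as Bayesian linear regression on its features.
import torch

torch.manual_seed(0)
f = lambda x: torch.sin(3.0 * x)                   # hypothetical objective
x = torch.rand(12, 1) * 2.0 - 1.0
y = f(x)

feat = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                           torch.nn.Linear(32, 32), torch.nn.Tanh())
head = torch.nn.Linear(32, 1)
opt = torch.optim.Adam([*feat.parameters(), *head.parameters()], lr=1e-2)
for _ in range(2000):                              # MAP fit of the network
    opt.zero_grad()
    ((head(feat(x)) - y) ** 2).mean().backward()
    opt.step()

# Gaussian posterior over last-layer weights with precision
# H = Phi^T Phi / sigma^2 + prior_prec * I (head bias omitted for brevity).
with torch.no_grad():
    phi = feat(x)                                  # (n, 32) features
    sigma2, prior_prec = 1e-2, 1.0
    cov = torch.linalg.inv(phi.T @ phi / sigma2 + prior_prec * torch.eye(32))

    xq = torch.linspace(-1, 1, 200).reshape(-1, 1) # candidate points
    phi_q = feat(xq)
    mean = head(phi_q).squeeze(-1)
    var = (phi_q @ cov * phi_q).sum(-1) + sigma2
    ucb = mean + 2.0 * var.sqrt()                  # simple UCB acquisition
    print("next query:", xq[ucb.argmax()].item())
```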
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - A hybrid data driven-physics constrained Gaussian process regression
framework with deep kernel for uncertainty quantification [21.972192114861873]
We propose a hybrid data driven-physics constrained Gaussian process regression framework.
We encode the physics knowledge with a Boltzmann-Gibbs distribution and derive our model through a maximum likelihood (ML) approach.
The proposed model achieves good results on high-dimensional problems and correctly propagates uncertainty, even with very limited labelled data.
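A common reading of such a Boltzmann-Gibbs construction, written in our own notation rather than necessarily the paper's, is
$$ p_{\text{phys}}(u) \propto \exp\big(-\beta\,\mathcal{R}[u]\big), \qquad \mathcal{L}(\theta) = \log p(\mathbf{y} \mid u_\theta) - \beta\,\mathcal{R}[u_\theta], $$
where $\mathcal{R}[u]$ is a physics residual and $\beta$ an inverse temperature; maximizing $\mathcal{L}$ trades data fit against physics violation.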
arXiv Detail & Related papers (2022-05-13T07:53:49Z) - AutoIP: A Unified Framework to Integrate Physics into Gaussian Processes [15.108333340471034]
We propose a framework that can integrate all kinds of differential equations into Gaussian processes.
Our method shows improvement upon vanilla GPs in both simulation and several real-world applications.
arXiv Detail & Related papers (2022-02-24T19:02:14Z) - Incorporating NODE with Pre-trained Neural Differential Operator for
Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, and it learns the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the function library.
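A minimal version of this pre-training recipe, with random sinusoids standing in for the paper's symbolic function library (an assumption, as are all names), looks like:
```python
# Sketch of pre-training a neural differential operator (NDO) on a symbolic
# function class: map a window of trajectory values to the center derivative.
import torch

torch.manual_seed(0)
win, h = 9, 0.05                                   # window length, grid spacing
offsets = (torch.arange(win).float() - win // 2) * h

def sample_batch(bs=256):
    """Random a*sin(w*t + p): window of values -> exact derivative at center."""
    a = torch.rand(bs, 1) * 2
    w = torch.rand(bs, 1) * 5 + 0.5
    p = torch.rand(bs, 1) * 6.283
    t0 = torch.rand(bs, 1) * 10
    vals = a * torch.sin(w * (t0 + offsets) + p)   # (bs, win) trajectory window
    dcenter = a * w * torch.cos(w * t0 + p)        # ground-truth derivative
    return vals, dcenter

ndo = torch.nn.Sequential(torch.nn.Linear(win, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(ndo.parameters(), lr=1e-3)
for step in range(3000):
    vals, dcenter = sample_batch()
    opt.zero_grad()
    ((ndo(vals) - dcenter) ** 2).mean().backward()
    opt.step()
# The pre-trained NDO now maps trajectory windows to derivative estimates,
# which can serve as an extra supervised signal when training a NODE.
```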
arXiv Detail & Related papers (2021-06-08T08:04:47Z) - Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
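The simplest mean-field instance of this idea (the paper's NMF dynamics are richer; the setup and names here are assumed) evolves node infection probabilities as dp/dt = (1 - p) * (W p) with a learnable nonnegative network W:
```python
# Sketch: fit a diffusion network W to observed infection marginals, then
# read influence as the expected infected mass at the horizon.
import torch

torch.manual_seed(0)
n, T, dt = 8, 20, 0.1
W_true = (torch.rand(n, n) < 0.3).float() * 0.8
W_true.fill_diagonal_(0.0)

def rollout(W, p0, steps):
    """Forward-Euler integration of dp/dt = (1 - p) * (W p)."""
    p, out = p0, [p0]
    for _ in range(steps):
        p = (p + dt * (1 - p) * (W @ p)).clamp(0.0, 1.0)
        out.append(p)
    return torch.stack(out)                      # (steps + 1, n)

p0 = torch.zeros(n); p0[0] = 1.0                 # seed node 0
with torch.no_grad():
    target = rollout(W_true, p0, T)              # "observed" marginals

mask = 1.0 - torch.eye(n)                        # no self-infection
W_hat = torch.zeros(n, n, requires_grad=True)    # learn the diffusion network
opt = torch.optim.Adam([W_hat], lr=0.05)
for step in range(400):
    opt.zero_grad()
    W = torch.nn.functional.softplus(W_hat) * mask
    ((rollout(W, p0, T) - target) ** 2).mean().backward()
    opt.step()

W = torch.nn.functional.softplus(W_hat.detach()) * mask
print("influence of seed 0:", rollout(W, p0, T)[-1].sum().item())
```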
arXiv Detail & Related papers (2021-06-03T00:02:05Z) - The Promises and Pitfalls of Deep Kernel Learning [13.487684503022063]
We identify pathological behavior, including overfitting, on a simple toy example.
We explore this pathology, explaining its origins and considering how it applies to real datasets.
We find that a fully Bayesian treatment of deep kernel learning can rectify this overfitting and obtain the desired performance improvements.
arXiv Detail & Related papers (2021-02-24T07:56:49Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
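A minimal plain-PINN sketch (the paper's GatedPINN adds gated architecture and scaling on top of this baseline) for a 1-D Poisson-type problem:
```python
# Minimal PINN sketch: solve u''(x) = -pi^2 sin(pi*x) on [0, 1] with
# u(0) = u(1) = 0 (exact solution sin(pi*x)), mesh-free via autograd.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xb = torch.tensor([[0.0], [1.0]])                # boundary points

for step in range(5000):
    opt.zero_grad()
    x = torch.rand(64, 1, requires_grad=True)    # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde = ((d2u + math.pi ** 2 * torch.sin(math.pi * x)) ** 2).mean()
    bc = (net(xb) ** 2).mean()                   # enforce u(0) = u(1) = 0
    (pde + 10.0 * bc).backward()
    opt.step()

print("u(0.5) =", net(torch.tensor([[0.5]])).item(), "(exact: 1.0)")
```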
arXiv Detail & Related papers (2020-09-08T13:26:51Z)