PDE-constrained Gaussian process surrogate modeling with uncertain data locations
- URL: http://arxiv.org/abs/2305.11586v3
- Date: Mon, 30 Dec 2024 19:44:54 GMT
- Title: PDE-constrained Gaussian process surrogate modeling with uncertain data locations
- Authors: Dongwei Ye, Weihao Yan, Christoph Brune, Mengwu Guo
- Abstract summary: We propose a Bayesian approach that integrates the variability of input data into the Gaussian process regression for function and partial differential equation approximation.
Consistently good generalization performance is observed, and a substantial reduction in predictive uncertainty is achieved.
- Score: 1.943678022072958
- Abstract: Gaussian process regression is widely applied in computational science and engineering for surrogate modeling owing to its kernel-based and probabilistic nature. In this work, we propose a Bayesian approach that integrates the variability of input data into Gaussian process regression for function and partial differential equation approximation. Leveraging two types of observables -- noise-corrupted outputs with certain inputs and those with prior-distribution-defined uncertain inputs -- a posterior distribution of the uncertain inputs is estimated via Bayesian inference. The quantified input uncertainties are then incorporated into Gaussian process predictions by means of marginalization. This setting of two data types aligns with common scenarios in constructing surrogate models for the solutions of partial differential equations, where the data of boundary conditions and initial conditions are typically known, while the solution data may involve uncertainties due to measurement error or stochasticity. The effectiveness of the proposed method is demonstrated through several numerical examples, including multiple one-dimensional functions, the heat equation, and the Allen-Cahn equation. Consistently good generalization performance is observed, and a substantial reduction in predictive uncertainty is achieved by the Bayesian inference of uncertain inputs.
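The marginalization described in the abstract can be approximated by Monte Carlo: condition a GP on each posterior sample of the uncertain input locations, then combine the per-sample predictions via the law of total variance. The sketch below is a minimal illustration under assumed simplifications (RBF kernel, 1-D inputs, pre-drawn posterior samples of the uncertain inputs); all function names such as `gp_predict_marginalized` are hypothetical and not taken from the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Standard GP posterior mean and variance at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf_kernel(X_test, X_test)) - np.sum(v ** 2, axis=0)
    return mean, var

def gp_predict_marginalized(X_cert, y_cert, x_samples, y_unc, X_test, noise=1e-4):
    """Marginalize GP predictions over posterior samples of the
    uncertain input locations (Monte Carlo approximation)."""
    means, variances = [], []
    for x_s in x_samples:  # each x_s: one sampled realization of the uncertain inputs
        X = np.concatenate([X_cert, x_s])
        y = np.concatenate([y_cert, y_unc])
        m, v = gp_predict(X, y, X_test, noise)
        means.append(m)
        variances.append(v)
    means, variances = np.array(means), np.array(variances)
    # Law of total variance: E[Var] + Var[E] over the input posterior
    return means.mean(axis=0), variances.mean(axis=0) + means.var(axis=0)
```

In the PDE-surrogate setting of the paper, `X_cert`/`y_cert` would play the role of boundary/initial-condition data with known locations, while `x_samples`/`y_unc` correspond to solution measurements whose locations carry posterior uncertainty.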
Related papers
- Deep Operator Networks for Bayesian Parameter Estimation in PDEs [0.0]
We present a novel framework combining Deep Operator Networks (DeepONets) with Physics-Informed Neural Networks (PINNs) to solve partial differential equations (PDEs).
By integrating data-driven learning with physical constraints, our method achieves robust and accurate solutions across diverse scenarios.
arXiv Detail & Related papers (2025-01-18T07:41:05Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z) - A local squared Wasserstein-2 method for efficient reconstruction of models with uncertainty [0.0]
We propose a local squared Wasserstein-2 (W_2) method to solve the inverse problem of reconstructing models with uncertain latent variables or parameters.
A key advantage of our approach is that it does not require prior information on the distribution of the latent variables or parameters in the underlying models.
arXiv Detail & Related papers (2024-06-10T22:15:55Z) - Sparse discovery of differential equations based on multi-fidelity Gaussian process [0.8088384541966945]
Sparse identification of differential equations aims to recover explicit analytic expressions from observed data.
It is sensitive to noise in the observed data, particularly in the computation of derivatives.
Existing literature predominantly concentrates on single-fidelity (SF) data, which limits its applicability.
We present two novel approaches to address these problems from the view of uncertainty quantification.
arXiv Detail & Related papers (2024-01-22T10:38:14Z) - Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of applicability to problems, similar to the real ones, and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z) - Semi-Parametric Inference for Doubly Stochastic Spatial Point Processes: An Approximate Penalized Poisson Likelihood Approach [3.085995273374333]
Doubly-stochastic point processes model the occurrence of events over a spatial domain as an inhomogeneous process conditioned on the realization of a random intensity function.
Existing implementations of doubly-stochastic spatial models are computationally demanding, often have limited theoretical guarantee, and/or rely on restrictive assumptions.
arXiv Detail & Related papers (2023-06-11T19:48:39Z) - Score-based Diffusion Models in Function Space [137.70916238028306]
Diffusion models have recently emerged as a powerful framework for generative modeling.
This work introduces a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z) - CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.