Evaluating Uncertainty Quantification approaches for Neural PDEs in
scientific applications
- URL: http://arxiv.org/abs/2311.04457v1
- Date: Wed, 8 Nov 2023 04:52:20 GMT
- Title: Evaluating Uncertainty Quantification approaches for Neural PDEs in
scientific applications
- Authors: Vardhan Dongre, Gurpreet Singh Hora
- Abstract summary: This work evaluates various Uncertainty Quantification (UQ) approaches for both Forward and Inverse Problems in scientific applications.
Specifically, we investigate the effectiveness of Bayesian methods, such as Hamiltonian Monte Carlo (HMC) and Monte-Carlo Dropout (MCD).
Our results indicate that Neural PDEs can effectively reconstruct flow systems and predict the associated unknown parameters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The accessibility of spatially distributed data, enabled by affordable
sensors, field, and numerical experiments, has facilitated the development of
data-driven solutions for scientific problems, including climate change,
weather prediction, and urban planning. Neural Partial Differential Equations
(Neural PDEs), which combine deep learning (DL) techniques with domain
expertise (e.g., governing equations) for parameterization, have proven to be
effective in capturing valuable correlations within spatiotemporal datasets.
However, sparse and noisy measurements coupled with modeling approximation
introduce aleatoric and epistemic uncertainties. Therefore, quantifying
uncertainties propagated from model inputs to outputs remains a challenge and
an essential goal for establishing the trustworthiness of Neural PDEs. This
work evaluates various Uncertainty Quantification (UQ) approaches for both
Forward and Inverse Problems in scientific applications. Specifically, we
investigate the effectiveness of Bayesian methods, such as Hamiltonian Monte
Carlo (HMC) and Monte-Carlo Dropout (MCD), and a more conventional approach,
Deep Ensembles (DE). To illustrate their performance, we take two canonical
PDEs: Burgers' equation and the Navier-Stokes equations. Our results indicate
that Neural PDEs can effectively reconstruct flow systems and predict the
associated unknown parameters. However, we observe that predictions derived
from the Bayesian methods tend to display a higher degree of certainty than
those obtained using DE. This elevated certainty suggests that the Bayesian
techniques might underestimate the true underlying uncertainty, thereby
appearing more confident in their predictions than the DE approach.
Related papers
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z)
- Leveraging viscous Hamilton-Jacobi PDEs for uncertainty quantification in scientific machine learning [1.8175282137722093]
Uncertainty quantification (UQ) in scientific machine learning (SciML) combines the predictive power of SciML with methods for quantifying the reliability of the learned models.
We provide a new interpretation for UQ problems by establishing a theoretical connection between some Bayesian inference problems arising in SciML and viscous Hamilton-Jacobi partial differential equations (HJ PDEs).
We develop a new Riccati-based methodology that provides computational advantages when continuously updating the model predictions.
arXiv Detail & Related papers (2024-04-12T20:54:01Z)
- Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
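The probabilistic representation mentioned here is in the spirit of the Feynman-Kac formula. A minimal sketch (not the paper's actual solver) for the heat equation u_t = 0.5 * u_xx with initial condition g: the solution u(t, x) equals the expectation of g over Brownian endpoints, so an ensemble of random particles estimates it directly.

```python
import numpy as np

rng = np.random.default_rng(1)

def heat_mc(x, t, g, n_paths=200_000):
    """Feynman-Kac estimate of u(t, x) for u_t = 0.5 * u_xx, u(0, .) = g:
    u(t, x) = E[g(x + B_t)], where the Brownian endpoint B_t ~ N(0, t)."""
    endpoints = x + np.sqrt(t) * rng.standard_normal(n_paths)
    return g(endpoints).mean()

# For g(x) = x^2 the exact solution is u(t, x) = x^2 + t,
# which lets us check the particle estimate against a closed form.
u_hat = heat_mc(x=1.0, t=0.5, g=lambda y: y**2)
```

The estimate converges at the usual Monte Carlo rate of O(1/sqrt(n_paths)), independent of spatial dimension, which is what makes particle representations attractive for training neural solvers without meshes.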
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- When in Doubt: Neural Non-Parametric Uncertainty Quantification for Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv Detail & Related papers (2021-06-07T18:31:47Z)
- Physics-Guided Discovery of Highly Nonlinear Parametric Partial Differential Equations [29.181177365252925]
Partial differential equations (PDEs) that fit scientific data can represent physical laws with explainable mechanisms.
We propose a novel physics-guided learning method, which encodes observation knowledge and incorporates basic physical principles and laws.
Experiments show that our proposed method is more robust against data noise, and can reduce the estimation error by a large margin.
arXiv Detail & Related papers (2021-06-02T11:24:49Z)
- Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications [0.0]
To support decision-making, such subsurface models should provide calibrated probabilities and the associated uncertainties in their predictions.
It has been shown that popular Deep Learning-based models are often miscalibrated, and due to their deterministic nature, provide no means to interpret the uncertainty of their predictions.
We compare three different approaches obtaining probabilistic models based on convolutional neural networks in a Bayesian formalism.
arXiv Detail & Related papers (2021-05-25T17:54:23Z)
- Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interaction between the predictive mean and variance of HNNs for accurate and reliable regression.
Experiments on the challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
arXiv Detail & Related papers (2021-03-28T04:18:11Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.