Uncertainty Quantification for Forward and Inverse Problems of PDEs via
Latent Global Evolution
- URL: http://arxiv.org/abs/2402.08383v1
- Date: Tue, 13 Feb 2024 11:22:59 GMT
- Title: Uncertainty Quantification for Forward and Inverse Problems of PDEs via
Latent Global Evolution
- Authors: Tailin Wu, Willie Neiswanger, Hongtao Zheng, Stefano Ermon, Jure
Leskovec
- Abstract summary: We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
- Score: 110.99891169486366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based surrogate models have demonstrated remarkable advantages
over classical solvers in terms of speed, often achieving speedups of 10 to
1000 times over traditional partial differential equation (PDE) solvers.
However, a significant challenge hindering their widespread adoption in both
scientific and industrial domains is the lack of understanding about their
prediction uncertainties, particularly in scenarios that involve critical
decision making. To address this limitation, we propose a method that
integrates efficient and precise uncertainty quantification into a deep
learning-based surrogate model. Our method, termed Latent Evolution of PDEs
with Uncertainty Quantification (LE-PDE-UQ), endows deep learning-based
surrogate models with robust and efficient uncertainty quantification
capabilities for both forward and inverse problems. LE-PDE-UQ leverages latent
vectors within a latent space to evolve both the system's state and its
corresponding uncertainty estimation. The latent vectors are decoded to provide
predictions for the system's state as well as estimates of its uncertainty. In
extensive experiments, we demonstrate the accurate uncertainty quantification
performance of our approach, surpassing that of strong baselines including deep
ensembles, Bayesian neural network layers, and dropout. Our method excels at
propagating uncertainty over extended auto-regressive rollouts, making it
suitable for scenarios involving long-term predictions. Our code is available
at: https://github.com/AI4Science-WestlakeU/le-pde-uq.
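The latent-evolution idea can be sketched in a few lines: encode the state once, step forward entirely in latent space, and decode both a prediction and a positive uncertainty estimate at each step. The following is a toy NumPy sketch, with random linear maps standing in for the paper's trained encoder, latent-dynamics, and decoder networks (all names, weights, and dimensions are hypothetical, not the LE-PDE-UQ implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the real model uses learned neural encoders/decoders.
STATE_DIM, LATENT_DIM = 8, 4

# Hypothetical "trained" weights.
W_enc = rng.normal(size=(LATENT_DIM, STATE_DIM)) * 0.3
W_evo = np.eye(LATENT_DIM) * 0.95  # contractive latent dynamics for stability
W_dec = rng.normal(size=(STATE_DIM, LATENT_DIM)) * 0.3
W_unc = rng.normal(size=(STATE_DIM, LATENT_DIM)) * 0.1  # uncertainty head

def encode(u):
    return W_enc @ u

def evolve(z):
    # One latent time step; a trained network replaces this linear map.
    return W_evo @ z

def decode(z):
    # Decode a state prediction and a per-component uncertainty estimate.
    mean = W_dec @ z
    sigma = np.log1p(np.exp(W_unc @ z))  # softplus keeps uncertainty positive
    return mean, sigma

u0 = rng.normal(size=STATE_DIM)
z = encode(u0)
rollout = []
for t in range(10):  # autoregressive rollout entirely in latent space
    z = evolve(z)
    rollout.append(decode(z))

means, sigmas = zip(*rollout)
print(len(rollout), means[0].shape)
```

Evolving in latent space is what makes long autoregressive rollouts cheap: the encoder and decoder are applied once per step at most, and the uncertainty estimate is carried along in the same latent vector rather than re-estimated from scratch.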
Related papers
- Uncertainty Calibration with Energy Based Instance-wise Scaling in the Wild Dataset [23.155946032377052]
We introduce a novel instance-wise calibration method based on an energy model.
Our method incorporates energy scores instead of softmax confidence scores, allowing for adaptive consideration of uncertainty.
In experiments, we show that the proposed method consistently maintains robust performance across the spectrum.
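The energy-score idea can be sketched as follows; `energy_score` and the fixed temperature `T` are illustrative stand-ins, not the paper's exact instance-wise scaling:

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Free energy E(x) = -T * logsumexp(logits / T), computed stably.
    # Lower (more negative) energy corresponds to higher confidence;
    # unlike softmax confidence, it is not squashed into [0, 1].
    z = logits / T
    m = z.max()
    return float(-T * (m + np.log(np.exp(z - m).sum())))

confident = np.array([10.0, 0.0, 0.0])  # large logit margin
uncertain = np.array([1.0, 1.0, 1.0])   # flat logits
print(energy_score(confident), energy_score(uncertain))
```

Because the energy is an unnormalized log-density over the logits, it preserves magnitude information that the softmax discards, which is what makes instance-wise rescaling of uncertainty possible.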
arXiv Detail & Related papers (2024-07-17T06:14:55Z)
- Evaluating Uncertainty Quantification approaches for Neural PDEs in scientific applications [0.0]
This work evaluates various Uncertainty Quantification (UQ) approaches for both Forward and Inverse Problems in scientific applications.
Specifically, we investigate the effectiveness of Bayesian methods such as Hamiltonian Monte Carlo (HMC) and Monte-Carlo Dropout (MCD).
Our results indicate that Neural PDEs can effectively reconstruct flow systems and predict the associated unknown parameters.
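Monte-Carlo Dropout, one of the baselines compared here, keeps dropout active at test time and treats repeated stochastic forward passes as approximate posterior samples. A minimal NumPy sketch with hypothetical weights standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny regression net with one hidden layer.
W1 = rng.normal(size=(32, 1))
W2 = rng.normal(size=(1, 32)) / 32

def forward(x, drop_p=0.2):
    h = np.maximum(W1 @ x, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p   # dropout stays ON at test time
    h = h * mask / (1.0 - drop_p)         # inverted-dropout rescaling
    return (W2 @ h).item()

x = np.array([0.5])
samples = np.array([forward(x) for _ in range(200)])  # stochastic passes
print(f"prediction {samples.mean():.3f} +/- {samples.std():.3f}")
```

The spread of the samples serves as the uncertainty estimate; HMC plays the same role with samples drawn from the true weight posterior instead of dropout masks.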
arXiv Detail & Related papers (2023-11-08T04:52:20Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We model two types of uncertainty in the problem to improve performance and convey trust more faithfully.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Neural State-Space Models: Empirical Evaluation of Uncertainty Quantification [0.0]
This paper presents preliminary results on uncertainty quantification for system identification with neural state-space models.
We frame the learning problem in a Bayesian probabilistic setting and obtain posterior distributions for the neural network's weights and outputs.
Based on the posterior, we construct credible intervals on the outputs and define a surprise index which can effectively diagnose usage of the model in a potentially dangerous out-of-distribution regime.
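The credible-interval and surprise-index construction can be sketched from posterior predictive samples; `surprise_index` here is an illustrative two-sided tail-mass definition, not necessarily the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for posterior predictive samples of a model output.
posterior_samples = rng.normal(loc=1.0, scale=0.3, size=5000)

# 95% credible interval from posterior quantiles.
lo, hi = np.quantile(posterior_samples, [0.025, 0.975])

def surprise_index(y_obs, samples):
    # Fraction of posterior mass at least as extreme as the observation;
    # small values flag potential out-of-distribution usage of the model.
    p = (samples >= y_obs).mean()
    return float(2 * min(p, 1 - p))

print((lo, hi))
print(surprise_index(1.05, posterior_samples))  # in-distribution: large
print(surprise_index(3.00, posterior_samples))  # far in the tail: near 0
```

An observation landing deep in the posterior tail yields a surprise index near zero, which is the diagnostic signal for a potentially dangerous regime.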
arXiv Detail & Related papers (2023-04-13T08:57:33Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
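Deep evidential regression trains a head that outputs the four Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta), from which aleatoric and epistemic uncertainty follow in closed form. A minimal sketch of that standard decomposition (the numeric values below are illustrative, not from the paper):

```python
def evidential_uncertainties(gamma, nu, alpha, beta):
    # Normal-Inverse-Gamma head with outputs (gamma, nu, alpha, beta),
    # valid for alpha > 1. Standard decomposition:
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2], data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu], model uncertainty
    return gamma, aleatoric, epistemic

mean, aleatoric, epistemic = evidential_uncertainties(
    0.2, nu=4.0, alpha=3.0, beta=1.0
)
print(mean, aleatoric, epistemic)  # 0.2 0.5 0.125
```

A single forward pass yields both uncertainty types, which is the appeal over sampling-based Bayesian NNs; the paper's critique is that these quantities behave as tunable heuristics rather than calibrated posteriors.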
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interaction between the predictive mean and variance of heteroscedastic neural networks (HNNs) for accurate and reliable regression.
Experiments on the challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
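A common building block behind heteroscedastic regression networks like these is the Gaussian negative log-likelihood, in which the predicted variance weights the squared error. The following minimal NumPy sketch illustrates that loss (an assumption about the general technique, not the paper's exact SDE-based formulation):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Per-sample negative log-likelihood of N(mu, exp(log_var));
    # predicting log-variance keeps the variance strictly positive.
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var)
                  + np.log(2 * np.pi))

y = np.array([0.0, 1.0])
mu = np.array([0.1, 0.9])
# A well-matched variance lowers the loss versus an overconfident one.
calibrated = gaussian_nll(y, mu, np.log(np.array([0.04, 0.04]))).mean()
overconfident = gaussian_nll(y, mu, np.log(np.array([1e-4, 1e-4]))).mean()
print(calibrated, overconfident)
```

The loss penalizes both under- and over-estimated variance, so the network is pushed toward variances that match its actual residuals; SDE-HNN's contribution is to couple the mean and variance dynamics rather than predict them independently.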
arXiv Detail & Related papers (2021-03-28T04:18:11Z)
- SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates [45.43024126674237]
Uncertainty quantification is a fundamental yet unsolved problem for deep learning.
The Bayesian framework provides a principled way of estimating uncertainty but is often not scalable to modern deep neural networks (DNNs).
We propose a new method for quantifying uncertainties of DNNs from a dynamical system perspective.
arXiv Detail & Related papers (2020-08-24T16:33:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.