Meaningful uncertainties from deep neural network surrogates of
large-scale numerical simulations
- URL: http://arxiv.org/abs/2010.13749v1
- Date: Mon, 26 Oct 2020 17:39:05 GMT
- Title: Meaningful uncertainties from deep neural network surrogates of
large-scale numerical simulations
- Authors: Gemma J. Anderson, Jim A. Gaffney, Brian K. Spears, Peer-Timo Bremer,
Rushil Anirudh and Jayaraman J. Thiagarajan
- Abstract summary: Deep neural networks (DNNs) can serve as highly-accurate surrogate models.
Prediction uncertainty estimates are crucial for making comparisons between simulations and experiments meaningful.
- Score: 34.03414786863526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale numerical simulations are used across many scientific disciplines
to facilitate experimental development and provide insights into underlying
physical processes, but they come with a significant computational cost. Deep
neural networks (DNNs) can serve as highly-accurate surrogate models, with the
capacity to handle diverse datatypes, offering tremendous speed-ups for
prediction and many other downstream tasks. An important use-case for these
surrogates is the comparison between simulations and experiments; prediction
uncertainty estimates are crucial for making such comparisons meaningful, yet
standard DNNs do not provide them. In this work we define the fundamental
requirements for a DNN to be useful for scientific applications, and
demonstrate a general variational inference approach to equip predictions of
scalar and image data from a DNN surrogate model trained on inertial
confinement fusion simulations with calibrated Bayesian uncertainties.
Critically, these uncertainties are interpretable, meaningful and preserve
physics-correlations in the predicted quantities.
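
As a rough illustration of the general technique, the sketch below implements a mean-field variational linear layer (Bayes-by-backprop style) in PyTorch. The layer parameterization, prior, and variational family here are illustrative assumptions, not the surrogate architecture used in the paper.

```python
# Minimal sketch of mean-field variational inference for a DNN surrogate,
# assuming a factorized Gaussian posterior over weights; not the paper's
# exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior over its weights."""
    def __init__(self, n_in, n_out, prior_std=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -5.0))
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -5.0))
        self.prior_std = prior_std

    def forward(self, x):
        # Reparameterization trick: sample weights on every forward pass.
        w = self.w_mu + F.softplus(self.w_rho) * torch.randn_like(self.w_mu)
        b = self.b_mu + F.softplus(self.b_rho) * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)

    def kl(self):
        # Closed-form KL(q(w) || N(0, prior_std^2)) for factorized Gaussians.
        def term(mu, rho):
            std = F.softplus(rho)
            return 0.5 * torch.sum(
                (std**2 + mu**2) / self.prior_std**2 - 1.0
                - 2.0 * torch.log(std / self.prior_std))
        return term(self.w_mu, self.w_rho) + term(self.b_mu, self.b_rho)

def predict_with_uncertainty(model, x, n_samples=100):
    """Monte Carlo predictive mean/std; the std is the uncertainty estimate."""
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)
```

Training would minimize the usual negative ELBO, i.e. the data negative log-likelihood plus the summed `kl()` terms scaled by 1/N for N training points; calibration is then assessed against the Monte Carlo predictive spread.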
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses generative ML models to learn the underlying distribution of existing data and produce synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigate whether this augmentation improves the predictive accuracy of a deep neural network (DNN) model trained on the augmented data.
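
A minimal sketch of the VAE-based augmentation loop described here, in PyTorch; the encoder/decoder sizes and the Gaussian reconstruction loss are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch of VAE-based data augmentation for a downstream DNN.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, n_latent)
        self.to_logvar = nn.Linear(64, n_latent)
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum()                         # reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()  # KL to N(0, I)
    return recon + kl

# After training, sample the prior to synthesize new training rows for
# the downstream DNN, e.g.:
#   synthetic = vae.decoder(torch.randn(n_new, n_latent))
```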
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is a BNN based on Markov chain Monte Carlo (MCMC) approximation of the posterior distribution over network parameters.
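
For intuition, here is a hypothetical miniature of the MCMC idea: random-walk Metropolis over the weights of a tiny regression network, with predictive uncertainty read off from the posterior samples. Practical BNNs use more efficient samplers (e.g. HMC/NUTS), and the network, prior, and noise scale below are illustrative assumptions.

```python
# Toy sketch: Metropolis sampling of BNN weights, then MC predictive stats.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)

def predict(theta, X):
    # One hidden layer of 10 tanh units; theta packs all 31 parameters.
    W1 = theta[:10].reshape(1, 10)
    b1, W2, b2 = theta[10:20], theta[20:30], theta[30]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def log_post(theta):
    resid = y - predict(theta, X)
    log_lik = -0.5 * np.sum(resid ** 2) / 0.1 ** 2  # Gaussian noise, sigma=0.1
    log_prior = -0.5 * np.sum(theta ** 2)           # N(0, 1) prior on weights
    return log_lik + log_prior

theta = rng.normal(size=31)
samples, lp = [], log_post(theta)
for step in range(20000):
    prop = theta + 0.02 * rng.normal(size=theta.size)  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept
        theta, lp = prop, lp_prop
    if step % 20 == 0:
        samples.append(theta.copy())

# Predictive mean/std from the second half of the posterior samples:
preds = np.stack([predict(t, X) for t in samples[len(samples) // 2:]])
mean, std = preds.mean(0), preds.std(0)
```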
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Single Model Uncertainty Estimation via Stochastic Data Centering [39.71621297447397]
We are interested in estimating the uncertainties of deep neural networks.
We present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets shifted by a constant bias, gives rise to slightly inconsistent trained models.
We show that $\Delta$-UQ's uncertainty estimates are superior to many current methods on a variety of benchmarks.
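
$\Delta$-UQ turns this observation into a single-model estimator via input anchoring. A hedged sketch of that idea, with illustrative sizes and an assumed regression setting rather than the paper's exact recipe:

```python
# Sketch of anchored single-model uncertainty: the network sees an
# (anchor, input - anchor) pair; varying the anchor at test time yields
# an ensemble-like spread of predictions.
import torch
import torch.nn as nn

class AnchoredNet(nn.Module):
    def __init__(self, n_in, n_hidden=64):
        super().__init__()
        # Input is the concatenation [anchor, x - anchor], hence 2 * n_in.
        self.net = nn.Sequential(
            nn.Linear(2 * n_in, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 1))

    def forward(self, x, anchor):
        return self.net(torch.cat([anchor, x - anchor], dim=-1))

def anchored_prediction(model, x, train_X, n_anchors=30):
    """Prediction spread across randomly drawn anchors as the uncertainty."""
    idx = torch.randint(0, train_X.shape[0], (n_anchors,))
    with torch.no_grad():
        preds = torch.stack([model(x, train_X[i].expand_as(x)) for i in idx])
    return preds.mean(0), preds.std(0)
```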
arXiv Detail & Related papers (2022-07-14T23:54:54Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
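
For context, deep evidential regression has the network emit the four parameters of a Normal-Inverse-Gamma distribution, from which uncertainties are read off in closed form. A minimal sketch of that output head; the softplus activations are one common convention, assumed here for illustration.

```python
# Sketch of the evidential regression head the paper critiques.
import torch
import torch.nn.functional as F

def evidential_head(raw):
    """Map 4 raw network outputs to valid Normal-Inverse-Gamma parameters."""
    gamma = raw[..., 0]                    # predicted mean
    nu = F.softplus(raw[..., 1])           # > 0
    alpha = F.softplus(raw[..., 2]) + 1.0  # > 1
    beta = F.softplus(raw[..., 3])         # > 0
    return gamma, nu, alpha, beta

def uncertainties(nu, alpha, beta):
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]
    return aleatoric, epistemic
```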
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Leveraging Stochastic Predictions of Bayesian Neural Networks for Fluid Simulations [19.961746770975534]
We investigate uncertainty estimation and multimodality via the non-deterministic predictions of Bayesian neural networks (BNNs) in fluid simulations.
We show that BNNs, when used as surrogate models for steady-state fluid flow predictions, provide accurate physical predictions together with sensible estimates of uncertainty.
We experiment with perturbed temporal sequences from Navier-Stokes simulations and evaluate the capabilities of BNNs to capture multimodal evolutions.
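
One common way to draw such stochastic predictions is MC dropout, sketched below for a generic surrogate; the paper's own BNN variant and architecture may differ.

```python
# Hedged sketch: stochastic predictions from a dropout-based BNN surrogate.
import torch
import torch.nn as nn

class DropoutSurrogate(nn.Module):
    def __init__(self, n_in, n_out, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, n_out))

    def forward(self, x):
        return self.net(x)

def mc_samples(model, x, n=64):
    model.train()  # keep dropout active at prediction time
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(n)])

# The spread (or clustering) of the samples indicates uncertainty and
# possible multimodality in the predicted flow fields.
```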
arXiv Detail & Related papers (2022-05-02T21:34:52Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a model-architecture-agnostic toolbox for interpretability, reliability, and extrapolation.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Transfer learning suppresses simulation bias in predictive models built from sparse, multi-modal data [15.587831925516957]
Many problems in science, engineering, and business require making predictions based on very few observations.
To build a robust predictive model, these sparse data may need to be augmented with simulated data, especially when the design space is multidimensional.
We combine recent developments in deep learning to build more robust predictive models from multimodal data.
arXiv Detail & Related papers (2021-04-19T23:28:32Z)
- Accurate and Reliable Forecasting using Stochastic Differential Equations [48.21369419647511]
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
This paper develops SDE-HNN to characterize the interaction between the predictive mean and variance of heteroscedastic neural networks (HNNs) for accurate and reliable regression.
Experiments on the challenging datasets show that our method significantly outperforms the state-of-the-art baselines in terms of both predictive performance and uncertainty quantification.
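
For background, an HNN predicts a per-input mean and variance and is trained with the Gaussian negative log-likelihood; SDE-HNN builds on this setup (the SDE machinery itself is not sketched here).

```python
# Minimal heteroscedastic regression loss, assuming the network outputs
# a mean and a log-variance per input.
import torch

def heteroscedastic_nll(mean, log_var, y):
    """Gaussian NLL with a per-input predicted variance (up to a constant)."""
    return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()
```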
arXiv Detail & Related papers (2021-03-28T04:18:11Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables and thereby obtain the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)