Using Bayesian deep learning approaches for uncertainty-aware building
energy surrogate models
- URL: http://arxiv.org/abs/2010.03029v1
- Date: Mon, 5 Oct 2020 15:04:18 GMT
- Title: Using Bayesian deep learning approaches for uncertainty-aware building
energy surrogate models
- Authors: Paul Westermann and Ralph Evins
- Abstract summary: Machine learning surrogate models are trained to emulate slow, high-fidelity engineering simulation models.
Deep learning models exist that follow the Bayesian paradigm.
We show that errors can be reduced by up to 30% when the 10% of samples with the highest uncertainty are transferred to the high-fidelity model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast machine learning-based surrogate models are trained to emulate slow,
high-fidelity engineering simulation models to accelerate engineering design
tasks. This introduces uncertainty as the surrogate is only an approximation of
the original model.
Bayesian methods can quantify that uncertainty, and deep learning models
exist that follow the Bayesian paradigm. These models, namely Bayesian neural
networks and Gaussian process models, enable us to give predictions together
with an estimate of the model's uncertainty. As a result, we can derive
uncertainty-aware surrogate models that automatically flag unseen design
samples likely to cause large emulation errors. For these samples, the high-fidelity
model can be queried instead. This outlines how the Bayesian paradigm allows us
to hybridize fast, but approximate, and slow, but accurate models.
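The hybridization described above can be sketched with Monte Carlo dropout, one common way to obtain predictive uncertainty from a neural network: dropout is kept active at prediction time and many stochastic forward passes are averaged. This is a minimal toy illustration, not the authors' code; the linear "network", weights, and sample counts are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, w, p=0.1):
    # One stochastic pass of a toy linear "network": each weight is
    # dropped with probability p, with inverted-dropout rescaling.
    mask = rng.random(w.shape) > p
    return x @ (w * mask) / (1.0 - p)

def mc_dropout_predict(x, w, n_samples=200):
    # Keep dropout active at prediction time and average many passes;
    # the spread across passes serves as the uncertainty estimate.
    draws = np.stack([dropout_forward(x, w) for _ in range(n_samples)])
    return draws.mean(axis=0), draws.std(axis=0)

x = rng.normal(size=(4, 35))    # 4 candidate designs, 35 design parameters
w = rng.normal(size=(35, 12))   # toy weights mapping to 12 performance metrics
mean, std = mc_dropout_predict(x, w)
# std is the per-design, per-metric uncertainty used to decide when to
# fall back to the high-fidelity simulator
```

The per-design standard deviation is what lets the surrogate "know when it does not know" and defer to the slow model.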
In this paper, we train two types of Bayesian models, dropout neural networks
and stochastic variational Gaussian process models, to emulate a complex,
high-dimensional building energy performance simulation problem. The surrogate model
processes 35 building design parameters (inputs) to estimate 12 different
performance metrics (outputs). We benchmark both approaches, prove their
accuracy to be competitive, and show that errors can be reduced by up to 30%
when the 10% of samples with the highest uncertainty are transferred to the
high-fidelity model.
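The transfer scheme from the abstract, sending the most uncertain fraction of samples back to the high-fidelity simulator, can be sketched as follows. This is an assumed implementation for illustration: the 10% fraction matches the abstract, but the uncertainty score (worst-case standard deviation across metrics) and the placeholder high-fidelity model are my choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_predict(surrogate_mean, surrogate_std, high_fidelity, frac=0.10):
    # Score each design by its worst-case predictive std across metrics,
    # then replace the surrogate output for the top `frac` most uncertain
    # designs with a call to the slow high-fidelity model.
    score = surrogate_std.max(axis=1)
    n_transfer = max(1, int(frac * len(score)))
    idx = np.argsort(score)[-n_transfer:]
    out = surrogate_mean.copy()
    out[idx] = high_fidelity(idx)   # query the slow model only here
    return out, idx

# Toy stand-ins: 100 designs, 12 performance metrics
mean = rng.normal(size=(100, 12))
std = rng.random((100, 12))
hf = lambda idx: np.zeros((len(idx), 12))   # placeholder high-fidelity model
pred, transferred = hybrid_predict(mean, std, hf)
```

Only 10 of the 100 designs trigger a high-fidelity call, which is how the hybrid keeps most of the surrogate's speed while cutting the largest errors.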
Related papers
- General multi-fidelity surrogate models: Framework and active learning strategies for efficient rare event simulation [1.708673732699217]
Estimating the probability of failure for complex real-world systems is often prohibitively expensive.
This paper presents a robust multi-fidelity surrogate modeling strategy.
It is shown to be highly accurate while drastically reducing the number of high-fidelity model calls.
arXiv Detail & Related papers (2022-12-07T00:03:21Z)
- Bayesian score calibration for approximate models [0.0]
We propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification.
Our approach requires only a (fixed) small number of complex model simulations and is numerically stable.
arXiv Detail & Related papers (2022-11-10T06:00:58Z)
- Robust DNN Surrogate Models with Uncertainty Quantification via Adversarial Training [17.981250443856897]
Surrogate models have been used to emulate mathematical simulators for physical or biological processes.
Deep Neural Network (DNN) surrogate models have gained popularity for their hard-to-match emulation accuracy.
In this paper, we show the severity of the robustness issue in DNN surrogates through empirical studies and hypothesis testing.
arXiv Detail & Related papers (2022-11-10T05:09:39Z)
- Hybrid Machine Learning Modeling of Engineering Systems -- A Probabilistic Perspective Tested on a Multiphase Flow Modeling Case Study [0.0]
We propose a hybrid machine learning modeling framework that allows tuning first-principles models to process conditions.
Our approach not only estimates the expected values of the first-principles model parameters but also quantifies the uncertainty of these estimates.
In the simulation results, we show how uncertainty estimates of the resulting hybrid models can be used to make better operation decisions.
arXiv Detail & Related papers (2022-05-18T20:15:25Z)
- Stochastic Parameterizations: Better Modelling of Temporal Correlations using Probabilistic Machine Learning [1.5293427903448025]
We show that, by using a physically informed recurrent neural network within a probabilistic framework, our model for the Lorenz 96 atmospheric simulation is competitive.
This is due to a superior ability to model temporal correlations compared to standard first-order autoregressive schemes.
We evaluate across a number of metrics from the literature, but also discuss how the probabilistic metric of likelihood may be a unifying choice for future climate models.
arXiv Detail & Related papers (2022-03-28T14:51:42Z)
- Improving Non-autoregressive Generation with Mixup Training [51.61038444990301]
We present a non-autoregressive generation model based on pre-trained transformer models.
We propose a simple and effective iterative training method called MIx Source and pseudo Target.
Our experiments on three generation benchmarks, including question generation, summarization and paraphrase generation, show that the proposed framework achieves new state-of-the-art results.
arXiv Detail & Related papers (2021-10-21T13:04:21Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [55.28436972267793]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine-learning-inspired models and physics-based models.
We use such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.