Using Bayesian deep learning approaches for uncertainty-aware building
energy surrogate models
- URL: http://arxiv.org/abs/2010.03029v1
- Date: Mon, 5 Oct 2020 15:04:18 GMT
- Title: Using Bayesian deep learning approaches for uncertainty-aware building
energy surrogate models
- Authors: Paul Westermann and Ralph Evins
- Abstract summary: Machine learning surrogate models are trained to emulate slow, high-fidelity engineering simulation models.
Deep learning models exist that follow the Bayesian paradigm.
We show that errors can be reduced by up to 30% when the 10% of samples with the highest uncertainty are transferred to the high-fidelity model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast machine learning-based surrogate models are trained to emulate slow,
high-fidelity engineering simulation models to accelerate engineering design
tasks. This introduces uncertainty as the surrogate is only an approximation of
the original model.
Bayesian methods can quantify that uncertainty, and deep learning models
exist that follow the Bayesian paradigm. These models, namely Bayesian neural
networks and Gaussian process models, enable us to give predictions together
with an estimate of the model's uncertainty. As a result we can derive
uncertainty-aware surrogate models that automatically flag unseen design
samples likely to cause large emulation errors. For these samples, the
high-fidelity model can be queried instead. This illustrates how the Bayesian
paradigm allows us to hybridize fast but approximate models with slow but
accurate ones.
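The hybrid scheme described above can be sketched as follows. This is an illustrative toy, not code from the paper: `surrogate` is assumed to return a per-sample predictive mean and standard deviation, `high_fidelity` stands in for the slow simulator, and the uncertainty profile is made up for the demo.

```python
import numpy as np

def hybrid_predict(x, surrogate, high_fidelity, frac=0.10):
    """Route the most uncertain fraction of samples to the slow model.

    `surrogate` returns (mean, std) per sample; `high_fidelity` is the
    slow, accurate model. Both are placeholders for illustration.
    """
    mean, std = surrogate(x)
    n_hf = max(1, int(np.ceil(frac * len(x))))
    # Indices of the samples the surrogate is least sure about.
    uncertain = np.argsort(std)[-n_hf:]
    pred = mean.copy()
    # Re-evaluate only those samples with the accurate model.
    pred[uncertain] = high_fidelity(x[uncertain])
    return pred, uncertain

# Toy demo: the "surrogate" is a biased sine, the "high-fidelity
# model" is the exact sine, and uncertainty tracks the bias.
x = np.linspace(0.0, np.pi, 100)

def surrogate(x):
    std = np.abs(x - np.pi / 2)          # made-up uncertainty profile
    return np.sin(x) + 0.05 * std, std

pred, idx = hybrid_predict(x, surrogate, np.sin, frac=0.10)
print(len(idx))  # prints 10: the 10% of samples re-evaluated
```

Because the samples sent to the high-fidelity model are exactly those where the surrogate's error is largest, the overall error of `pred` is lower than that of the raw surrogate mean, mirroring the paper's thresholding idea.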
In this paper, we train two types of Bayesian models, dropout neural networks
and stochastic variational Gaussian process models, to emulate a complex,
high-dimensional building energy performance simulation problem. The surrogate
model processes 35 building design parameters (inputs) to estimate 12 different
performance metrics (outputs). We benchmark both approaches, show that their
accuracy is competitive, and show that errors can be reduced by up to 30%
when the 10% of samples with the highest uncertainty are transferred to the
high-fidelity model.
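A dropout neural network yields the required (mean, uncertainty) pair by keeping dropout active at inference time and averaging several stochastic forward passes (Monte Carlo dropout). The sketch below mirrors the paper's 35-input, 12-output setting but uses random, untrained weights purely for illustration; the layer sizes and dropout rate are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny stand-in network: 35 design parameters in, 12 metrics out,
# one hidden layer. Weights are random, NOT trained.
W1 = rng.normal(size=(35, 64))
W2 = rng.normal(size=(64, 12)) / np.sqrt(64)

def mc_dropout_forward(x, p=0.1, T=200):
    """Average T stochastic passes; the std across passes is the
    model's epistemic uncertainty estimate (MC dropout)."""
    outs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)       # ReLU hidden layer
        mask = rng.random(h.shape) > p    # fresh dropout mask per pass
        h = h * mask / (1.0 - p)          # inverted dropout scaling
        outs.append(h @ W2)
    outs = np.stack(outs)                 # shape (T, n_samples, 12)
    return outs.mean(axis=0), outs.std(axis=0)

x = rng.normal(size=(5, 35))              # 5 candidate building designs
mean, std = mc_dropout_forward(x)
print(mean.shape, std.shape)              # prints (5, 12) (5, 12)
```

Designs whose `std` is largest would be the ones handed back to the high-fidelity simulator in the hybrid scheme the abstract describes.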
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM) which can be viewed as a gradient boosting algorithm combining score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Energy-Based Diffusion Language Models for Text Generation [126.23425882687195]
Energy-based Diffusion Language Model (EDLM) is an energy-based model operating at the full sequence level for each diffusion step.
Our framework offers a 1.3× sampling speedup over existing diffusion models.
arXiv Detail & Related papers (2024-10-28T17:25:56Z)
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Uncertainty-aware multi-fidelity surrogate modeling with noisy data [0.0]
In real-world applications, uncertainty is present in both high- and low-fidelity models due to measurement or numerical noise.
This paper introduces a comprehensive framework for multi-fidelity surrogate modeling that handles noise-contaminated data.
The proposed framework offers a natural approach to combining physical experiments and computational models.
arXiv Detail & Related papers (2024-01-12T08:37:41Z)
- Uncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models [26.178192913986344]
We make a first attempt to use denoising diffusion probabilistic models (DDPMs) to train an uncertainty-aware surrogate model for turbulence simulations.
Our results show DDPMs can successfully capture the whole distribution of solutions and, as a consequence, accurately estimate the uncertainty of the simulations.
We also evaluate an emerging generative modeling variant, flow matching, in comparison to regular diffusion models.
arXiv Detail & Related papers (2023-12-08T19:04:17Z)
- Bayesian score calibration for approximate models [0.0]
We propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification.
Our approach requires only a (fixed) small number of complex model simulations and is numerically stable.
arXiv Detail & Related papers (2022-11-10T06:00:58Z)
- Robust DNN Surrogate Models with Uncertainty Quantification via Adversarial Training [17.981250443856897]
Surrogate models have been used to emulate mathematical simulators for physical or biological processes.
Deep Neural Network (DNN) surrogate models have gained popularity for their hard-to-match emulation accuracy.
In this paper, we show the severity of this issue through empirical studies and hypothesis testing.
arXiv Detail & Related papers (2022-11-10T05:09:39Z)
- Hybrid Machine Learning Modeling of Engineering Systems -- A Probabilistic Perspective Tested on a Multiphase Flow Modeling Case Study [0.0]
We propose a hybrid modeling machine learning framework that allows tuning first principles models to process conditions.
Our approach not only estimates the expected values of the first principles model parameters but also quantifies the uncertainty of these estimates.
In the simulation results, we show how uncertainty estimates of the resulting hybrid models can be used to make better operation decisions.
arXiv Detail & Related papers (2022-05-18T20:15:25Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.