Computer Model Calibration with Time Series Data using Deep Learning and Quantile Regression
- URL: http://arxiv.org/abs/2008.13066v2
- Date: Tue, 8 Sep 2020 06:10:09 GMT
- Title: Computer Model Calibration with Time Series Data using Deep Learning and Quantile Regression
- Authors: Saumya Bhatnagar, Won Chang, Seonjin Kim, Jiali Wang
- Abstract summary: The existing standard calibration framework suffers from inferential issues when the model output and observational data are high-dimensional dependent data.
We propose a new calibration framework based on a deep neural network (DNN) with long short-term memory layers that directly emulates the inverse relationship between the model output and input parameters.
- Score: 1.6758573326215689
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer models play a key role in many scientific and engineering problems.
One major source of uncertainty in computer model experiments is input parameter
uncertainty. Computer model calibration is a formal statistical procedure to
infer input parameters by combining information from model runs and
observational data. The existing standard calibration framework suffers from
inferential issues when the model output and observational data are
high-dimensional dependent data, such as long time series, due to the difficulty
in building an emulator and the non-identifiability between effects from input
parameters and data-model discrepancy. To overcome these challenges, we propose
a new calibration framework based on a deep neural network (DNN) with
long short-term memory (LSTM) layers that directly emulates the inverse
relationship between the model output and input parameters. Adopting the
'learning with noise' idea, we train our DNN model to filter out the effects of
data-model discrepancy on input parameter inference. We also formulate a new way
to construct interval predictions for the DNN using quantile regression to
quantify the uncertainty in input parameter estimates. Through a simulation
study and a real-data application with the WRF-Hydro model, we show that our
approach can yield accurate point estimates and well-calibrated interval
estimates for the input parameters.
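As a concrete illustration of the inverse-emulation and 'learning with noise' ideas, here is a minimal PyTorch sketch. The toy simulator, network sizes, prior range, and random-walk discrepancy process are illustrative assumptions, not the authors' actual architecture or WRF-Hydro setup.

```python
# Minimal sketch of an inverse emulator: an LSTM maps a model-output time
# series directly to the input parameter that generated it. The "learning
# with noise" idea is mimicked by perturbing the training series with a
# discrepancy-like random walk so the network learns to ignore it.
# All sizes and the noise model are illustrative assumptions.
import torch
import torch.nn as nn

class InverseEmulator(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # point estimate of theta

    def forward(self, y):                  # y: (batch, T, 1)
        _, (h, _) = self.lstm(y)           # h: (1, batch, hidden)
        return self.head(h[-1])            # (batch, 1)

def simulator(theta, T=100):
    """Toy stand-in for a computer model: a damped oscillation whose
    frequency is controlled by the input parameter theta."""
    t = torch.linspace(0, 10, T)
    return torch.sin(theta[:, None] * t) * torch.exp(-0.1 * t)

torch.manual_seed(0)
net = InverseEmulator()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    theta = torch.rand(64) * 2 + 0.5       # draw parameters from the prior range
    y = simulator(theta)
    y = y + 0.05 * torch.randn_like(y).cumsum(dim=1)  # "discrepancy" noise
    pred = net(y.unsqueeze(-1)).squeeze(-1)
    loss = ((pred - theta) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```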
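For the interval predictions, the quantile-regression idea can be sketched with the pinball (check) loss; the quantile levels and the three-output head below are assumptions, not the paper's exact formulation.

```python
# Pinball (check) loss for quantile regression: training one output per
# quantile level yields lower/upper bounds that form an interval estimate.
import torch

def pinball_loss(pred, target, quantiles):
    # pred: (batch, Q), target: (batch,), quantiles: list of Q levels
    q = torch.tensor(quantiles)                  # (Q,)
    err = target[:, None] - pred                 # (batch, Q)
    return torch.maximum(q * err, (q - 1) * err).mean()

# Replacing the scalar head above with nn.Linear(hidden, 3) and training with
# pinball_loss(pred, theta, [0.05, 0.5, 0.95]) gives a median estimate and a
# nominal 90% interval [q05, q95] for the input parameter.
```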
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data (a minimal VAE sketch appears after this list).
arXiv Detail & Related papers (2024-10-24T18:15:48Z) - SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z) - Deterministic and statistical calibration of constitutive models from full-field data with parametric physics-informed neural networks [36.136619420474766]
Parametric physics-informed neural networks (PINNs) are investigated for model calibration from full-field displacement data. Due to the fast evaluation of PINNs, calibration can be performed in near real time (a minimal calibration sketch appears after this list).
arXiv Detail & Related papers (2024-05-28T16:02:11Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed-data training on error propagation (a toy self-consuming KDE loop appears after this list).
arXiv Detail & Related papers (2024-02-19T02:08:09Z) - Post-training Model Quantization Using GANs for Synthetic Data
Generation [57.40733249681334]
We investigate the use of synthetic data as a substitute for real calibration data in post-training quantization (a minimal calibration sketch appears after this list).
We compare the performance of models quantized using data generated by StyleGAN2-ADA and our pre-trained DiStyleGAN, with quantization using real data and an alternative data generation method based on fractal images.
arXiv Detail & Related papers (2023-05-10T11:10:09Z) - Neural parameter calibration for large-scale multi-agent models [0.7734726150561089]
We present a method to retrieve accurate probability densities for parameters using neural differential equations.
The two combined create a powerful tool that can quickly estimate densities on model parameters, even for very large systems.
arXiv Detail & Related papers (2022-09-27T17:36:26Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamics models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the
Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - Training Structured Mechanical Models by Minimizing Discrete
Euler-Lagrange Residual [36.52097893036073]
Structured Mechanical Models (SMMs) are a data-efficient black-box parameterization of mechanical systems.
We propose a methodology for fitting SMMs to data by minimizing the discrete Euler-Lagrange residual.
Experiments show that our methodology learns models that are more accurate than those obtained with conventional schemes for fitting SMMs (a minimal residual-fitting sketch appears after this list).
arXiv Detail & Related papers (2021-05-05T00:44:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.