Designing Accurate Emulators for Scientific Processes using
Calibration-Driven Deep Models
- URL: http://arxiv.org/abs/2005.02328v1
- Date: Tue, 5 May 2020 16:54:11 GMT
- Title: Designing Accurate Emulators for Scientific Processes using
Calibration-Driven Deep Models
- Authors: Jayaraman J. Thiagarajan, Bindya Venkatesh, Rushil Anirudh, Peer-Timo
Bremer, Jim Gaffney, Gemma Anderson, Brian Spears
- Abstract summary: Learn-by-Calibrating (LbC) is a novel deep learning approach for designing emulators in scientific applications.
We show that LbC provides significant improvements in generalization error over widely-adopted loss function choices.
LbC achieves high-quality emulators even in small data regimes and, more importantly, recovers the inherent noise structure without any explicit priors.
- Score: 33.935755695805724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictive models that accurately emulate complex scientific processes can
achieve exponential speed-ups over numerical simulators or experiments, and at
the same time provide surrogates for improving the subsequent analysis.
Consequently, there is a recent surge in utilizing modern machine learning (ML)
methods, such as deep neural networks, to build data-driven emulators. While
the majority of existing efforts have focused on tailoring off-the-shelf ML
solutions to better suit the scientific problem at hand, we study an often
overlooked, yet important, problem of choosing loss functions to measure the
discrepancy between observed data and the predictions from a model. Due to the
lack of better priors on the expected residual structure, simple choices such
as the mean squared error and the mean absolute error are made in practice.
However, the inherent symmetric noise assumption made by these loss functions
makes them inappropriate in cases where the data is heterogeneous or when the
noise distribution is asymmetric. We propose Learn-by-Calibrating (LbC), a
novel deep learning approach based on interval calibration for designing
emulators in scientific applications that are effective even with
heterogeneous data and are robust to outliers. Using a large suite of
use-cases, we show that LbC provides significant improvements in generalization
error over widely-adopted loss function choices, achieves high-quality
emulators even in small data regimes, and, more importantly, recovers the
inherent noise structure without any explicit priors.
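The paper's exact objective is not reproduced here. As a hedged sketch of the idea, the emulator below predicts an interval alongside each point estimate, and the loss drives the intervals' empirical coverage toward a target level; the architecture, loss weights, and smooth coverage surrogate are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntervalEmulator(nn.Module):
    """Predicts a point estimate plus lower/upper interval bounds."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.mean = nn.Linear(hidden, out_dim)
        self.width = nn.Linear(hidden, 2 * out_dim)  # separate lower/upper widths

    def forward(self, x):
        h = self.trunk(x)
        lo_w, hi_w = F.softplus(self.width(h)).chunk(2, dim=-1)
        mu = self.mean(h)
        return mu, mu - lo_w, mu + hi_w

def calibration_loss(y, mu, lo, hi, coverage=0.9, tau=0.05):
    # Smooth indicator that y falls inside [lo, hi]; its mean is a
    # differentiable surrogate for the intervals' empirical coverage.
    inside = (torch.sigmoid((y - lo) / tau) * torch.sigmoid((hi - y) / tau)).mean()
    fit = (y - mu).abs().mean()       # point-estimate fidelity
    sharpness = (hi - lo).mean()      # discourage trivially wide intervals
    return fit + (inside - coverage).abs() + 0.1 * sharpness
```

Because the lower and upper widths are learned separately per input, the penalty can adapt to heterogeneous and asymmetric noise, which is the property the abstract attributes to LbC.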
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
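The summary does not specify the model details; the following is a minimal sketch of the general recipe (train a VAE on the real data, then decode prior samples into synthetic training points), with illustrative layer sizes.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, d, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d))
        self.z_dim = z_dim

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    rec = (recon - x).pow(2).sum(-1).mean()                   # reconstruction
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return rec + kld

def augment(vae, n):
    # Synthetic samples: decode draws from the latent prior.
    with torch.no_grad():
        return vae.dec(torch.randn(n, vae.z_dim))
```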
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
- Improved Long Short-Term Memory-based Wastewater Treatment Simulators for Deep Reinforcement Learning [0.0]
We implement two methods to improve the trained models for wastewater treatment data.
The experimental results show that these methods improve the simulators' behavior in terms of Dynamic Time Warping distance throughout a year.
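Dynamic Time Warping (DTW) is a standard alignment-based distance for comparing time series such as simulated versus observed trajectories; a textbook O(nm) implementation (not the paper's code):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-programming DTW between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```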
arXiv Detail & Related papers (2024-03-22T10:20:09Z)
- PETScML: Second-order solvers for training regression problems in Scientific Machine Learning [0.22499166814992438]
In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis of data produced by computational science and engineering applications.
We introduce a software framework built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc) to bridge the gap between deep-learning software and conventional machine-learning techniques.
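The summary does not show PETScML's API; as a stand-in illustration of second-order training, here is a sketch using PyTorch's built-in L-BFGS quasi-Newton optimizer on a toy regression problem.

```python
import torch
import torch.nn as nn

# Toy regression data: y = sin(x) plus noise.
x = torch.linspace(-3, 3, 200).unsqueeze(-1)
y = torch.sin(x) + 0.05 * torch.randn_like(x)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.LBFGS(net.parameters(), max_iter=100)

def closure():
    # L-BFGS re-evaluates the loss several times per step, hence the closure.
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    return loss

opt.step(closure)
print(f"final mse: {closure().item():.4f}")
```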
arXiv Detail & Related papers (2024-03-18T18:59:42Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
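The full KBASS pipeline (spike-and-slab priors with EP-EM inference) does not fit in a snippet; the sketch below shows only the kernel-regression building block used for function estimation, via scikit-learn's KernelRidge with illustrative hyperparameters.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Sparse, noisy observations of an unknown target function.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(3 * x).ravel() + 0.1 * rng.standard_normal(30)

# RBF kernel ridge regression: flexible and robust to sparsity and noise.
kr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0).fit(x, y)
y_hat = kr.predict(np.linspace(-2, 2, 100).reshape(-1, 1))
```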
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Fast emulation of density functional theory simulations using approximate Gaussian processes [0.6445605125467573]
A second statistical model that predicts the simulation output can be used in lieu of the full simulation during model fitting.
We use the emulators to calibrate, in a Bayesian manner, the density functional theory (DFT) model parameters using observed data.
The utility of these DFT models is to make predictions, based on observed data, about the properties of experimentally unobserved nuclides.
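A minimal sketch of the emulation pattern follows: fit a GP surrogate on a handful of simulator runs, then query it (with uncertainty) in place of the simulator, e.g. inside a Bayesian calibration loop. It uses scikit-learn's exact GP rather than the paper's approximate GPs, and the simulator function is a hypothetical stand-in for a DFT code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(theta):
    # Hypothetical stand-in for a DFT run at parameters theta.
    return np.sin(theta[:, 0]) * np.exp(-theta[:, 1] ** 2)

# A few simulator evaluations at design points.
rng = np.random.default_rng(1)
thetas = rng.uniform(-1, 1, size=(25, 2))
outputs = expensive_simulator(thetas)

# GP emulator: cheap predictions with uncertainty estimates.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(thetas, outputs)
mean, std = gp.predict(rng.uniform(-1, 1, size=(5, 2)), return_std=True)
```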
arXiv Detail & Related papers (2022-08-24T05:09:36Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
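A minimal GAIN-style sketch, with dense layers standing in for the paper's convolutional architecture and illustrative shapes: the generator fills in missing entries, and the discriminator predicts, per entry, whether it was observed or imputed.

```python
import torch
import torch.nn as nn

d = 16  # number of features (illustrative)
G = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))
D = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

x = torch.randn(32, d)                 # complete data, for illustration only
m = (torch.rand(32, d) > 0.3).float()  # mask: 1 = observed, 0 = missing
x_obs = x * m                          # zero out the missing entries

x_gen = G(torch.cat([x_obs, m], dim=-1))
x_hat = m * x_obs + (1 - m) * x_gen    # impute only the missing slots

# Discriminator outputs, per entry, the probability that it was observed;
# its loss is cross-entropy against the true mask.
p_obs = torch.sigmoid(D(x_hat))
d_loss = nn.functional.binary_cross_entropy(p_obs, m)
```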
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to train models to infer from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
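A minimal sketch of the plain extra-gradient update for a smooth min-max problem (without the paper's variance reduction): gradients are evaluated at the current iterate, then re-evaluated at a look-ahead point before the actual step.

```python
import torch

def extragradient_step(x, y, f, eta=0.1):
    """One extra-gradient step for min_x max_y f(x, y)."""
    # Half step: gradients at the current point.
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    x_half = (x - eta * gx).detach().requires_grad_(True)
    y_half = (y + eta * gy).detach().requires_grad_(True)
    # Full step: gradients re-evaluated at the look-ahead point.
    gx, gy = torch.autograd.grad(f(x_half, y_half), (x_half, y_half))
    x_new = (x - eta * gx).detach().requires_grad_(True)
    y_new = (y + eta * gy).detach().requires_grad_(True)
    return x_new, y_new

# Toy bilinear saddle problem f(x, y) = x * y, equilibrium at (0, 0),
# where plain simultaneous gradient descent-ascent cycles or diverges.
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
for _ in range(100):
    x, y = extragradient_step(x, y, lambda a, b: a * b)
print(x.item(), y.item())  # both approach 0
```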
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- Transfer learning suppresses simulation bias in predictive models built from sparse, multi-modal data [15.587831925516957]
Many problems in science, engineering, and business require making predictions based on very few observations.
To build a robust predictive model, these sparse data may need to be augmented with simulated data, especially when the design space is multidimensional.
We combine recent developments in deep learning to build more robust predictive models from multimodal data.
arXiv Detail & Related papers (2021-04-19T23:28:32Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
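A minimal PINN sketch (not the paper's GatedPINN): the network is trained so that the PDE residual vanishes at random collocation points, with boundary conditions enforced as an extra penalty. The 1-D Poisson problem below is illustrative.

```python
import torch
import torch.nn as nn

# Solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0;
# the exact solution is u(x) = sin(pi x).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)  # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.pi ** 2 * torch.sin(torch.pi * x)
    bc = torch.cat([torch.zeros(1, 1), torch.ones(1, 1)])  # boundary inputs
    loss = residual.pow(2).mean() + net(bc).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```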