Learning to Simulate: Generative Metamodeling via Quantile Regression
- URL: http://arxiv.org/abs/2311.17797v2
- Date: Mon, 09 Dec 2024 09:12:00 GMT
- Title: Learning to Simulate: Generative Metamodeling via Quantile Regression
- Authors: L. Jeff Hong, Yanxi Hou, Qingkai Zhang, Xiaowei Zhang
- Abstract summary: Traditional metamodeling techniques learn relationships between simulator inputs and a single output summary statistic.
We propose a new concept: generative metamodeling.
Generative metamodels enable rapid generation of numerous random outputs upon input specification.
- Score: 2.0613075946076904
- Abstract: Stochastic simulation models effectively capture complex system dynamics but are often too slow for real-time decision-making. Traditional metamodeling techniques learn relationships between simulator inputs and a single output summary statistic, such as the mean or median. These techniques enable real-time predictions without additional simulations. However, they require prior selection of one appropriate output summary statistic, limiting their flexibility in practical applications. We propose a new concept: generative metamodeling. It aims to construct a "fast simulator of the simulator," generating random outputs significantly faster than the original simulator while preserving approximately equal conditional distributions. Generative metamodels enable rapid generation of numerous random outputs upon input specification, facilitating immediate computation of any summary statistic for real-time decision-making. We introduce a new algorithm, quantile-regression-based generative metamodeling (QRGMM), and establish its distributional convergence and convergence rate. Extensive numerical experiments demonstrate QRGMM's efficacy compared to other state-of-the-art generative algorithms in practical real-time decision-making scenarios.
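As a rough illustration of the quantile-regression idea (a minimal sketch under simplifying assumptions, not the paper's implementation): fit quantile regressions on a grid of quantile levels offline, then generate outputs online by drawing a uniform level and interpolating the fitted conditional quantile curve. The linear quantile model, the grid of levels, and the toy heteroscedastic simulator below are illustrative assumptions.

```python
# Minimal sketch of quantile-regression-based generative metamodeling (QRGMM).
# Assumptions: linear quantile regression, an equally spaced grid of quantile
# levels, and a toy heteroscedastic simulator whose quantiles are linear in x.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)

def simulator(x, rng):
    """Toy stochastic simulator: the output distribution depends on the input x."""
    return 2.0 + 3.0 * x + (0.2 + 0.5 * x) * rng.standard_normal(x.shape)

# Offline stage: run the (slow) simulator to build training data.
X_train = rng.uniform(0.0, 1.0, size=(1000, 1))
y_train = simulator(X_train[:, 0], rng)

# Fit one quantile regression per level on an equally spaced grid.
taus = np.linspace(0.01, 0.99, 99)
models = [QuantileRegressor(quantile=t, alpha=0.0).fit(X_train, y_train) for t in taus]

def generate(x0, n_samples, rng):
    """Online stage: emit random outputs for input x0 without running the simulator.

    Draw U ~ Uniform and interpolate the fitted conditional quantile curve
    q(tau | x0) at U -- an approximate draw from the conditional distribution.
    """
    q_curve = np.array([m.predict([[x0]])[0] for m in models])
    q_curve = np.sort(q_curve)  # simple guard against quantile crossing
    u = rng.uniform(taus[0], taus[-1], size=n_samples)
    return np.interp(u, taus, q_curve)

samples = generate(x0=0.7, n_samples=10_000, rng=rng)
print("metamodel mean / 95th percentile:", samples.mean(), np.quantile(samples, 0.95))
```

Once the generator is in place, any summary statistic (mean, quantiles, tail probabilities) can be computed on the fly from the generated samples, which is the point of the "fast simulator of the simulator" construction.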
Related papers
- Reliability analysis for non-deterministic limit-states using stochastic emulators [0.0]
This paper introduces reliability analysis for non-deterministic limit-states and addresses its typically high computational cost by using suitable surrogate models.
Specifically, we focus on the recently introduced generalized models and chaos expansions.
We validate our methodology through three case studies. First, using an analytical function with a closed-form solution, we demonstrate that the emulators converge to the correct solution.
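The workflow this line of work targets can be sketched roughly as follows: train a cheap stochastic emulator on a limited number of simulator runs, then estimate a failure probability by Monte Carlo on the emulator alone. The heteroscedastic Gaussian emulator and toy limit-state function below are stand-in assumptions, not the surrogate models used in the paper.

```python
# Rough sketch: reliability analysis with a cheap stochastic emulator.
# Assumption: a heteroscedastic Gaussian emulator stands in for the paper's
# stochastic surrogates; the limit-state function g is a toy example.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def limit_state(x, rng):
    """Noisy limit state: 'failure' occurs when g(x) <= 0."""
    return 2.5 - x[:, 0] ** 2 - x[:, 1] + 0.3 * rng.standard_normal(len(x))

# Small design of experiments on the expensive simulator.
X = rng.normal(size=(500, 2))
g = limit_state(X, rng)

# Stochastic emulator: predict the conditional mean and the residual spread.
mean_model = GradientBoostingRegressor().fit(X, g)
resid = np.abs(g - mean_model.predict(X))
std_model = GradientBoostingRegressor().fit(X, resid)

# Monte Carlo on the emulator only (no further simulator calls).
X_mc = rng.normal(size=(200_000, 2))
sigma = np.maximum(std_model.predict(X_mc), 0.0)
g_hat = mean_model.predict(X_mc) + sigma * rng.standard_normal(len(X_mc))
print("estimated failure probability:", np.mean(g_hat <= 0))
```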
arXiv Detail & Related papers (2024-12-18T11:08:56Z) - Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z) - GenFormer: A Deep-Learning-Based Approach for Generating Multivariate Stochastic Processes [5.679243827959339]
We propose a Transformer-based deep learning model that learns a mapping between a Markov state sequence and time series values.
The GenFormer model is applied to simulate synthetic wind speed data at various stations in Florida to calculate exceedance probabilities for risk management.
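The two-stage idea (a discrete Markov state sequence driving continuous time-series values) can be illustrated with a toy sketch; a simple per-state Gaussian sampler stands in for the paper's Transformer, and the "wind speed" numbers are synthetic assumptions.

```python
# Toy sketch of the two-stage idea behind GenFormer: simulate a Markov chain
# over discrete states, then map each state to a continuous value.
# A per-state Gaussian sampler stands in for the paper's Transformer decoder.
import numpy as np

rng = np.random.default_rng(2)
n_states = 3

# Stage 1: Markov chain over discrete "regime" states.
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])  # row-stochastic transition matrix

def simulate_states(T, rng):
    s = np.zeros(T, dtype=int)
    for t in range(1, T):
        s[t] = rng.choice(n_states, p=P[s[t - 1]])
    return s

# Stage 2: map states to continuous values (synthetic wind speeds, m/s).
state_mean = np.array([4.0, 9.0, 16.0])
state_std = np.array([1.0, 2.0, 3.0])

states = simulate_states(T=10_000, rng=rng)
speeds = state_mean[states] + state_std[states] * rng.standard_normal(len(states))

# Exceedance probability of a threshold, as used for risk management.
print("P(speed > 20 m/s) ~", np.mean(speeds > 20.0))
```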
arXiv Detail & Related papers (2024-02-03T03:50:18Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
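Column-wise iterative imputation itself can be sketched with scikit-learn's IterativeImputer; per the abstract, HyperImpute's contribution is to configure the per-column models adaptively and automatically, which this sketch does not do. The toy data and the random-forest choice are assumptions.

```python
# Sketch of column-wise iterative imputation (the family HyperImpute generalizes).
# IterativeImputer fits one regressor per column in round-robin fashion;
# HyperImpute additionally selects the per-column model automatically.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Toy data with correlated columns and values missing at random.
X_full = rng.multivariate_normal([0, 0, 0],
                                 [[1.0, 0.8, 0.3],
                                  [0.8, 1.0, 0.5],
                                  [0.3, 0.5, 1.0]], size=1000)
X = X_full.copy()
X[rng.random(X.shape) < 0.2] = np.nan

imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=30),
                           max_iter=5, random_state=0)
X_imputed = imputer.fit_transform(X)

mask = np.isnan(X)
print("RMSE on imputed entries:",
      np.sqrt(np.mean((X_imputed[mask] - X_full[mask]) ** 2)))
```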
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
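Multi-fidelity fusion in its simplest form can be sketched as a model of the cheap low-fidelity data plus a learned correction from a few high-fidelity runs; MF-HNP replaces this with a hierarchical neural latent variable model, which is not reproduced here. The toy fidelity pair and Gaussian-process choice are assumptions.

```python
# Simplest multi-fidelity surrogate sketch: fit the cheap low-fidelity data,
# then learn an additive correction from a handful of expensive high-fidelity
# runs. (MF-HNP uses a hierarchical neural latent variable model instead.)
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)

def high_fidelity(x):            # expensive simulator (toy)
    return np.sin(8 * x) * x

def low_fidelity(x):             # cheap, biased approximation
    return 0.8 * np.sin(8 * x) * x + 0.2 * (x - 0.5)

X_lo = rng.uniform(0, 1, (200, 1)); y_lo = low_fidelity(X_lo[:, 0])
X_hi = rng.uniform(0, 1, (15, 1));  y_hi = high_fidelity(X_hi[:, 0])

gp_lo = GaussianProcessRegressor(normalize_y=True).fit(X_lo, y_lo)
# Correction model: learn the discrepancy between fidelities at the few HF points.
delta = y_hi - gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(normalize_y=True).fit(X_hi, delta)

X_test = np.linspace(0, 1, 5).reshape(-1, 1)
pred = gp_lo.predict(X_test) + gp_delta.predict(X_test)
print("multi-fidelity prediction:", np.round(pred, 3))
print("true high fidelity:       ", np.round(high_fidelity(X_test[:, 0]), 3))
```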
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Abstraction of Markov Population Dynamics via Generative Adversarial Nets [2.1485350418225244]
A strategy to reduce the computational load is to abstract the population model, replacing it with a simpler model that is faster to simulate.
Here we pursue this idea, building on previous works and constructing a generator capable of producing trajectories in continuous space and discrete time.
This generator is learned automatically from simulations of the original model in a Generative Adversarial setting.
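A compact sketch of the abstraction idea, under stated assumptions (a toy birth-death simulator, small fully connected networks, and one-dimensional state): a generator learns one-step transitions from simulated trajectories in an adversarial setting, so long trajectories can then be produced without the original model.

```python
# Sketch: abstract a population model with a GAN. The generator maps
# (current state, noise) -> next state; the discriminator scores transition
# pairs. Toy simulator and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate_step(pop):
    """Toy stochastic birth-death step for a vector of population counts."""
    births = torch.poisson(0.1 * pop)
    deaths = torch.poisson(0.1 * pop ** 2 / 150.0)
    return torch.clamp(pop + births - deaths, min=0.0)

# Training data: (current state, next state) pairs from the original simulator.
pop0 = torch.randint(10, 200, (20_000,)).float()
pop1 = simulate_step(pop0)
x, y = pop0.unsqueeze(1) / 100, pop1.unsqueeze(1) / 100  # rescale

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # (state, noise) -> next
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # (state, next) -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    idx = torch.randint(0, len(x), (256,))
    xb, yb = x[idx], y[idx]
    fake = G(torch.cat([xb, torch.randn(len(xb), 1)], dim=1))
    # Discriminator update: real transitions vs. generated ones.
    d_loss = bce(D(torch.cat([xb, yb], dim=1)), torch.ones(len(xb), 1)) + \
             bce(D(torch.cat([xb, fake.detach()], dim=1)), torch.zeros(len(xb), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: fool the discriminator.
    g_loss = bce(D(torch.cat([xb, fake], dim=1)), torch.ones(len(xb), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Produce an abstracted trajectory in discrete time without the simulator.
state = torch.tensor([[0.5]])  # rescaled initial population of 50
traj = [state]
for _ in range(50):
    state = G(torch.cat([state, torch.randn(1, 1)], dim=1)).detach()
    traj.append(state)
print("abstracted trajectory (rescaled):", torch.cat(traj).squeeze().tolist()[:10])
```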
arXiv Detail & Related papers (2021-06-24T12:57:49Z) - Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), computed in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves the state of the art for active learning.
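The active-learning loop for accelerating a stochastic simulation can be sketched as follows: query the expensive simulator only where the surrogate is most uncertain. A Gaussian process with variance-based acquisition stands in for the paper's neural process and Latent Information Gain, and the toy simulator is an assumption.

```python
# Sketch: active learning for a stochastic simulation surrogate.
# Variance-based acquisition on a GP replaces the paper's LIG criterion.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
simulator = lambda x: np.sin(6 * x) + 0.1 * rng.standard_normal(x.shape)

X_pool = np.linspace(0, 1, 200).reshape(-1, 1)   # candidate inputs
X_lab = rng.uniform(0, 1, (5, 1))                # small initial design
y_lab = simulator(X_lab[:, 0])

for round_ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_lab, y_lab)
    _, std = gp.predict(X_pool, return_std=True)
    x_next = X_pool[[np.argmax(std)]]            # most uncertain candidate
    X_lab = np.vstack([X_lab, x_next])
    y_lab = np.append(y_lab, simulator(x_next[:, 0]))

print("simulator runs used:", len(X_lab),
      "| max remaining std:", round(float(std.max()), 3))
```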
arXiv Detail & Related papers (2021-06-05T01:31:51Z) - Recurrent convolutional neural network for the surrogate modeling of subsurface flow simulation [0.0]
We propose to combine SegNet with ConvLSTM layers for the surrogate modeling of numerical flow simulation.
Results show that the proposed method remarkably improves the performance of the SegNet-based surrogate model when the simulation output is time-series data.
arXiv Detail & Related papers (2020-10-08T09:34:48Z) - Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
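The "maximize an expected reward" objective can be illustrated with a REINFORCE toy: a categorical policy over discrete tokens is trained so that the structures it generates hit a target property. The digit-sum task, policy parameterization, and hyperparameters below are stand-in assumptions for the paper's molecule and Python-expression tasks.

```python
# Toy sketch of goal-directed discrete generation via REINFORCE:
# maximize E[reward(sequence)] under a categorical policy over tokens.
import numpy as np

rng = np.random.default_rng(6)
n_tokens, seq_len, target = 10, 3, 21        # tokens are the digits 0..9
logits = np.zeros((seq_len, n_tokens))       # one categorical policy per position

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

for step in range(1500):
    probs = softmax(logits)
    # Sample a batch of token sequences from the current policy.
    seqs = np.stack([rng.choice(n_tokens, size=32, p=probs[t])
                     for t in range(seq_len)], axis=1)
    rewards = -np.abs(seqs.sum(axis=1) - target).astype(float)
    adv = rewards - rewards.mean()           # baseline to reduce variance
    # REINFORCE: grad of log softmax w.r.t. logits is (one_hot - probs).
    grad = np.zeros_like(logits)
    for seq, a in zip(seqs, adv):
        for t, tok in enumerate(seq):
            grad[t] += a * (np.eye(n_tokens)[tok] - probs[t])
    logits += 0.05 * grad / len(seqs)

best = softmax(logits).argmax(axis=1)
print("most likely sequence:", best, "digit sum =", best.sum(), "(target:", target, ")")
```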
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Real-Time Regression with Dividing Local Gaussian Processes [62.01822866877782]
Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression.
Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice.
A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
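The local-GP idea can be sketched as follows: partition the input space, fit one small Gaussian process per region, and route each query to its region's model instead of maintaining one large GP. A fixed k-means partition below stands in for the paper's iterative, data-driven division, and the toy data are assumptions.

```python
# Sketch of local Gaussian process regression with a divided input space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, (3000, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(len(X))

# Divide the input space and fit a local GP on each cell's data.
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
local_gps = {c: GaussianProcessRegressor(normalize_y=True)
                 .fit(X[km.labels_ == c], y[km.labels_ == c])
             for c in range(6)}

def predict(X_new):
    """Route each query to the GP owning its region."""
    cells = km.predict(X_new)
    out = np.empty(len(X_new))
    for c in np.unique(cells):
        out[cells == c] = local_gps[c].predict(X_new[cells == c])
    return out

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
print("local-GP prediction:", np.round(predict(X_test), 3))
print("noise-free target:  ", np.round(np.sinc(X_test[:, 0]), 3))
```

Each local model only ever sees its own region's points, which is what keeps the cost sublinear in the total number of training points as data accumulate.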
arXiv Detail & Related papers (2020-06-16T18:43:31Z) - Using Machine Learning to Emulate Agent-Based Simulations [0.0]
We evaluate the performance of multiple machine-learning methods as statistical emulators for use in the analysis of agent-based models (ABMs).
We propose that agent-based modelling would benefit from using machine-learning methods for emulation, as this can facilitate more robust sensitivity analyses for the models.
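The emulation workflow can be sketched in a few lines: run the (slow) ABM on a design of parameter settings, fit a fast machine-learning emulator on the input/output pairs, then perform sensitivity analysis on the emulator instead of re-running the ABM. The toy "ABM", its parameter names, and the random-forest choice below are illustrative assumptions.

```python
# Sketch: emulating an agent-based model with a machine-learning regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(8)

def abm(params, rng):
    """Stand-in for an expensive agent-based simulation returning one summary output."""
    infect, recover, contacts = params.T
    return infect * contacts / (recover + 0.1) + 0.1 * rng.standard_normal(len(params))

# Design of experiments over the ABM's parameters.
P = rng.uniform(0, 1, (400, 3))
out = abm(P, rng)

emulator = RandomForestRegressor(n_estimators=300, random_state=0).fit(P, out)

# Sensitivity analysis on the emulator instead of on the ABM itself.
imp = permutation_importance(emulator, P, out, n_repeats=20, random_state=0)
for name, score in zip(["infection_rate", "recovery_rate", "contacts_per_day"],
                       imp.importances_mean):
    print(f"{name:18s} importance: {score:.3f}")
```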
arXiv Detail & Related papers (2020-05-05T11:48:36Z)