Trajectory-oriented optimization of stochastic epidemiological models
- URL: http://arxiv.org/abs/2305.03926v3
- Date: Wed, 13 Sep 2023 20:41:24 GMT
- Title: Trajectory-oriented optimization of stochastic epidemiological models
- Authors: Arindam Fadikar, Mickael Binois, Nicholson Collier, Abby Stevens, Kok Ben Toh, Jonathan Ozik
- Abstract summary: Epidemiological models must be calibrated to ground truth for downstream tasks.
We propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling.
This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations.
- Score: 0.873811641236639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Epidemiological models must be calibrated to ground truth for downstream
tasks such as producing forward projections or running what-if scenarios. The
meaning of calibration changes in case of a stochastic model since output from
such a model is generally described via an ensemble or a distribution. Each
member of the ensemble is usually mapped to a random number seed (explicitly or
implicitly). With the goal of finding not only the input parameter settings but
also the random seeds that are consistent with the ground truth, we propose a
class of Gaussian process (GP) surrogates along with an optimization strategy
based on Thompson sampling. This Trajectory Oriented Optimization (TOO)
approach produces actual trajectories close to the empirical observations
instead of a set of parameter settings where only the mean simulation behavior
matches with the ground truth.
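As an illustrative sketch of the idea (not the authors' implementation), Thompson sampling over a GP surrogate of a discrepancy-to-data objective can jointly propose the next parameter setting and a fresh replicate seed. The toy simulator, the one-dimensional parameter grid, and all names below are assumptions for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 0.6  # hypothetical ground-truth summary statistic

def simulate(theta, seed):
    """Toy stochastic simulator: output mean set by theta, noise by the seed."""
    r = np.random.default_rng(seed)
    return theta + 0.05 * r.standard_normal()

def discrepancy(theta, seed):
    return (simulate(theta, seed) - truth) ** 2

def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_post(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, cov

# Thompson sampling: draw one posterior path, jump to its minimiser,
# and attach a fresh random seed so actual replicates are compared.
X = np.array([0.1, 0.9])
y = np.array([discrepancy(t, s) for t, s in zip(X, [1, 2])])
grid = np.linspace(0.0, 1.0, 101)
for _ in range(10):
    mu, cov = gp_post(X, y, grid)
    path = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid)))
    theta_next = grid[np.argmin(path)]
    seed_next = int(rng.integers(10_000))
    X = np.append(X, theta_next)
    y = np.append(y, discrepancy(theta_next, seed_next))
best = X[np.argmin(y)]
```

Note that in the paper the surrogate models trajectories with the seed treated as part of the input; this sketch collapses the seed into a stochastic objective for brevity.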
Related papers
- Polynomial Chaos Expanded Gaussian Process [2.287415292857564]
In complex and unknown processes, global models are initially generated over the entire experimental space.
This study addresses the need for models that effectively represent both global and local experimental spaces.
arXiv Detail & Related papers (2024-05-02T07:11:05Z)
- Sampling for Model Predictive Trajectory Planning in Autonomous Driving using Normalizing Flows [1.2972104025246092]
This paper investigates several sampling approaches for trajectory generation.
Normalizing flows originating from the field of variational inference are considered.
Learning-based normalizing flow models are trained for a more efficient exploration of the input domain.
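As a minimal illustration of the underlying mechanism (not the trained models from the paper), a normalizing flow pushes base-distribution samples through an invertible map and tracks their density via the change-of-variables formula. The one-dimensional affine bijection and its parameters below are assumptions for exposition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative one-dimensional "flow": an invertible affine map x = exp(s)*z + m.
# A learned flow would fit s, m (or stack richer bijections) so that samples
# concentrate in promising regions of the input domain.
s, m = np.log(0.5), 2.0

def sample(n):
    z = rng.standard_normal(n)                       # base Gaussian samples
    x = np.exp(s) * z + m                            # push through the bijection
    log_q = -0.5 * (z ** 2 + np.log(2 * np.pi)) - s  # change-of-variables density
    return x, log_q

x, log_q = sample(1000)
```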
arXiv Detail & Related papers (2024-04-15T10:45:12Z)
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
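The estimator being tuned here can be sketched in a few lines. This is a generic geometric-bridge AIS with hand-picked settings, not the paper's parametric process; the toy base and target densities are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

log_p0 = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)  # base: N(0,1), normalized
log_f = lambda x: -0.5 * (x - 2.0) ** 2                     # unnormalized target, Z = sqrt(2*pi)

def ais(n_chains=500, betas=np.linspace(0.0, 1.0, 51)):
    x = rng.standard_normal(n_chains)
    logw = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # importance-weight increment at the new inverse temperature
        logw += (b1 - b0) * (log_f(x) - log_p0(x))
        # one Metropolis step targeting the bridge p0^(1-b1) * f^b1
        prop = x + 0.5 * rng.standard_normal(n_chains)
        log_ratio = (1 - b1) * (log_p0(prop) - log_p0(x)) + b1 * (log_f(prop) - log_f(x))
        accept = np.log(rng.random(n_chains)) < log_ratio
        x = np.where(accept, prop, x)
    # log Z estimate via a stabilized log-mean-exp of the weights
    return np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()

log_Z = ais()  # true value: 0.5 * log(2*pi) ~ 0.919
```

The paper optimizes the bridging distributions instead of fixing a geometric schedule like `betas` above.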
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To make function sampling scalable, a random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Optimizing model-agnostic Random Subspace ensembles [5.680512932725364]
We present a model-agnostic ensemble approach for supervised learning.
The proposed approach alternates between learning an ensemble of models using a parametric version of the Random Subspace approach and updating the feature selection probabilities that parametrize it.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
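A hedged sketch of the plain (non-parametric) Random Subspace mechanism the paper builds on: each ensemble member is fit on a random feature subset, and predictions are averaged. The ridge base learner and toy data are assumptions for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: only features 0 and 1 carry signal.
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)

def fit_ridge(Xs, y, lam=1e-2):
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(Xs.shape[1]), Xs.T @ y)

# Random Subspace ensemble: each member sees a random subset of features.
members = []
for _ in range(50):
    feats = rng.choice(10, size=4, replace=False)
    members.append((feats, fit_ridge(X[:, feats], y)))

def predict(Xnew):
    preds = [Xnew[:, f] @ w for f, w in members]
    return np.mean(preds, axis=0)  # average over the ensemble
```

The parametric version in the paper would replace the uniform `rng.choice` with learned per-feature selection probabilities, which also yields a feature ranking.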
arXiv Detail & Related papers (2021-09-07T13:58:23Z)
- Convex Latent Effect Logit Model via Sparse and Low-rank Decomposition [2.1915057426589746]
We propose a convex parametric formulation for learning a logistic regression model (logit) with latent heterogeneous effects on sub-populations.
Despite its popularity, the mixed logit approach for learning individual heterogeneity has several downsides.
arXiv Detail & Related papers (2021-08-22T22:23:39Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
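Pathwise conditioning (Matheron's rule) can be sketched directly: draw one joint prior sample and add a data-dependent correction, instead of factorizing the posterior covariance at the test points. A minimal noiseless example, with toy inputs chosen for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def k(A, B, ls=0.5):
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

X = np.array([-1.0, 0.0, 1.0])   # conditioning inputs
y = np.sin(X)                    # noiseless observations
Xs = np.linspace(-2.0, 2.0, 50)  # where the posterior sample is wanted

# Draw ONE joint prior path over (X, Xs), then correct it pathwise:
#   f_post(.) = f_prior(.) + k(., X) K(X, X)^{-1} (y - f_prior(X))
Xall = np.concatenate([X, Xs])
L = np.linalg.cholesky(k(Xall, Xall) + 1e-8 * np.eye(len(Xall)))
f = L @ rng.standard_normal(len(Xall))
fX, fXs = f[:len(X)], f[len(X):]
alpha = np.linalg.solve(k(X, X) + 1e-8 * np.eye(len(X)), y - fX)
post_sample = fXs + k(Xs, X) @ alpha  # one posterior path at Xs
post_at_X = fX + k(X, X) @ alpha      # reproduces y at the data
```

The scalability gains in the paper come from replacing the exact prior draw above with cheap approximate prior samples (e.g. random features), while keeping the same pathwise update.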
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- Transport Gaussian Processes for Regression [0.22843885788439797]
We propose a methodology to construct stochastic processes, including GPs, warped GPs, Student-t processes, and several others.
Our approach is inspired by layers-based models, where each proposed layer changes a specific property over the generated process.
We validate the proposed model through experiments with real-world data.
arXiv Detail & Related papers (2020-01-30T17:44:21Z)
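As a loose illustration of the layered-transport idea (not the paper's construction), each layer below transports a latent GP sample to change one property: first positivity, then monotonicity. All kernel and grid settings are assumptions for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def k(A, B, ls=1.0):
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

x = np.linspace(0.0, 5.0, 40)
L = np.linalg.cholesky(k(x, x) + 1e-8 * np.eye(len(x)))
f = L @ rng.standard_normal(len(x))  # latent GP layer

# Each transport layer changes one property of the generated process:
g = np.exp(f)                    # marginal transform -> positive (log-Gaussian) process
h = np.cumsum(g) * (x[1] - x[0]) # integration -> monotone sample path
```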
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.