Data-driven Optimization Model for Global Covid-19 Intervention Plans
- URL: http://arxiv.org/abs/2104.07865v1
- Date: Fri, 16 Apr 2021 02:56:36 GMT
- Title: Data-driven Optimization Model for Global Covid-19 Intervention Plans
- Authors: Chang Liu, Akshay Budhkar
- Abstract summary: In the wake of COVID-19, every government scrambles to find the interventions that best reduce the number of infection cases while minimizing the economic impact.
We describe an integer programming approach to prescribe intervention plans that jointly minimize the number of daily new cases and the economic impact.
- Score: 5.565573622844362
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the wake of COVID-19, every government scrambles to find the
interventions that best reduce the number of infection cases while minimizing
the economic impact. However, with many intervention policies available, how
should one decide which policy is the best course of action? In this work, we
describe an integer programming approach to prescribe intervention plans that
jointly minimize the number of daily new cases and the economic impact.
We present a method to estimate the impact of intervention plans on the number
of cases based on historical data. Finally, we demonstrate visualizations and
summaries of our empirical analyses on the performance of our model with
varying parameters compared to two sets of heuristics.
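The core idea of the abstract can be illustrated with a toy sketch. Note that the intervention names, case-reduction factors, cost figures, and objective weights below are all invented for illustration, and exhaustive enumeration stands in for the integer-programming solver the authors would actually use:

```python
from itertools import product

# Hypothetical interventions with made-up effects: fraction by which each
# reduces daily new cases, and its economic cost (arbitrary units).
INTERVENTIONS = ["school_closure", "workplace_closure", "travel_ban"]
CASE_REDUCTION = {"school_closure": 0.20, "workplace_closure": 0.30, "travel_ban": 0.15}
ECON_COST = {"school_closure": 3.0, "workplace_closure": 5.0, "travel_ban": 2.0}
BASELINE_CASES = 1000.0  # predicted daily new cases with no interventions

def objective(plan, alpha=1.0, beta=50.0):
    """Weighted sum of predicted cases and economic impact for a plan,
    where `plan` is a tuple of 0/1 decisions, one per intervention."""
    reduction = sum(CASE_REDUCTION[i] for i, on in zip(INTERVENTIONS, plan) if on)
    cases = BASELINE_CASES * max(0.0, 1.0 - reduction)
    cost = sum(ECON_COST[i] for i, on in zip(INTERVENTIONS, plan) if on)
    return alpha * cases + beta * cost

def best_plan(alpha=1.0, beta=50.0):
    # Exhaustive search over all 2^n binary plans; a real model would hand
    # the same binary variables, objective, and constraints to an
    # integer-programming solver instead of enumerating.
    return min(product([0, 1], repeat=len(INTERVENTIONS)),
               key=lambda p: objective(p, alpha, beta))
```

Varying the weight `beta` traces out the trade-off the paper studies: a small `beta` favors stringent plans that suppress cases, while a large `beta` makes costly interventions unattractive.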
Related papers
- TCPO: Thought-Centric Preference Optimization for Effective Embodied Decision-making [75.29820290660065]
This paper proposes Thought-Centric Preference Optimization (TCPO) for effective embodied decision-making.
It emphasizes the alignment of the model's intermediate reasoning process, mitigating the problem of model degradation.
Experiments in the ALFWorld environment demonstrate an average success rate of 26.67%, achieving a 6% improvement over RL4VLM.
arXiv Detail & Related papers (2025-09-10T11:16:21Z)
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems [61.580419063416734]
A recent stream of structured learning approaches has improved the practical state of the art for a range of optimization problems.
The key idea is to exploit the statistical distribution over instances instead of dealing with instances separately.
In this article, we investigate methods that smooth the risk by perturbing the policy, which eases optimization and improves the generalization error.
arXiv Detail & Related papers (2024-07-24T12:00:30Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Experiment Planning with Function Approximation [49.50254688629728]
We study the problem of experiment planning with function approximation in contextual bandit problems.
We propose two experiment planning strategies compatible with function approximation.
We show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small.
arXiv Detail & Related papers (2024-01-10T14:40:23Z)
- Epidemic Control on a Large-Scale-Agent-Based Epidemiology Model using Deep Deterministic Policy Gradient [0.7244731714427565]
Lockdowns, rapid vaccination programs, school closures, and economic stimulus can have positive or unintended negative consequences.
Current research to model and determine an optimal intervention automatically through round-tripping is limited by the simulation objectives, scale (a few thousand individuals), model types that are not suited for intervention studies, and the number of intervention strategies they can explore (discrete vs continuous).
We address these challenges using a Deep Deterministic Policy Gradient (DDPG) based policy optimization framework on a large-scale (100,000 individual) epidemiological agent-based simulation.
arXiv Detail & Related papers (2023-04-10T09:26:07Z)
- Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model [50.06663781566795]
We consider a dynamic model with the consumers' preferences as well as price sensitivity varying over time.
We measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance.
Our regret analysis results not only demonstrate optimality of the proposed policy but also show that for policy planning it is essential to incorporate available structural information.
arXiv Detail & Related papers (2023-03-28T00:23:23Z)
- Policy Optimization for Personalized Interventions in Behavioral Health [8.10897203067601]
Behavioral health interventions, delivered through digital platforms, have the potential to significantly improve health outcomes.
We study the problem of optimizing personalized interventions for patients to maximize a long-term outcome.
We present a new approach for this problem that we dub DecompPI, which decomposes the state space for a system of patients to the individual level.
arXiv Detail & Related papers (2023-03-21T21:42:03Z)
- Estimation of Optimal Dynamic Treatment Assignment Rules under Policy Constraints [0.0]
We study estimation of an optimal dynamic treatment regime that guides the optimal treatment assignment for each individual at each stage based on their history.
The paper proposes two estimation methods: one solves the treatment assignment problem sequentially through backward induction, and the other solves the entire problem simultaneously across all stages.
arXiv Detail & Related papers (2021-06-09T12:42:53Z)
- Machine Learning-Powered Mitigation Policy Optimization in Epidemiological Models [33.88734751290751]
We propose a new approach for obtaining optimal policy recommendations based on epidemiological models.
We find that such a look-ahead strategy infers non-trivial policies that adhere well to the constraints specified.
arXiv Detail & Related papers (2020-10-16T16:27:17Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes [111.69190108272133]
The coronavirus disease 2019 (COVID-19) global pandemic has led many countries to impose unprecedented lockdown measures.
Data-driven models that predict COVID-19 fatalities under different lockdown policy scenarios are essential.
This paper develops a Bayesian model for predicting the effects of COVID-19 lockdown policies in a global context.
arXiv Detail & Related papers (2020-05-13T18:21:50Z)
- Counterfactual Learning of Stochastic Policies with Continuous Actions: from Models to Offline Evaluation [41.21447375318793]
We introduce a modelling strategy based on a joint kernel embedding of contexts and actions.
We empirically show that the optimization aspect of counterfactual learning is important.
We propose an evaluation protocol for offline policies in real-world logged systems.
arXiv Detail & Related papers (2020-04-22T07:42:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.