Decision-Aware Learning for Optimizing Health Supply Chains
- URL: http://arxiv.org/abs/2211.08507v1
- Date: Tue, 15 Nov 2022 21:03:52 GMT
- Title: Decision-Aware Learning for Optimizing Health Supply Chains
- Authors: Tsai-Hsuan Chung, Vahid Rostami, Hamsa Bastani, Osbert Bastani
- Abstract summary: We study the problem of allocating limited supply of medical resources in developing countries, in particular, Sierra Leone.
We address this problem by combining machine learning (to predict demand) with optimization (to optimize allocations).
We propose a decision-aware learning algorithm that uses a novel Taylor expansion of the optimal decision loss to derive the machine learning loss.
- Score: 19.167762972321523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of allocating limited supply of medical resources in
developing countries, in particular, Sierra Leone. We address this problem by
combining machine learning (to predict demand) with optimization (to optimize
allocations). A key challenge is the need to align the loss function used to
train the machine learning model with the decision loss associated with the
downstream optimization problem. Traditional solutions have limited flexibility
in the model architecture and scale poorly to large datasets. We propose a
decision-aware learning algorithm that uses a novel Taylor expansion of the
optimal decision loss to derive the machine learning loss. Importantly, our
approach only requires a simple re-weighting of the training data, ensuring it
is both flexible and scalable, e.g., we incorporate it into a random forest
trained using a multitask learning framework. We apply our framework to
optimize the distribution of essential medicines in collaboration with
policymakers in Sierra Leone; highly uncertain demand and limited budgets
currently result in excessive unmet demand. Out-of-sample results demonstrate
that our end-to-end approach can significantly reduce unmet demand across 1040
health facilities throughout Sierra Leone.
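The "simple re-weighting of the training data" described in the abstract can be pictured with a minimal sketch. Everything below is an illustrative assumption: the weight formula, the synthetic feature and demand arrays, and the use of scikit-learn's RandomForestRegressor stand in for the paper's actual Taylor-expansion-derived weights and multitask random forest, which are not reproduced here.

```python
# Minimal sketch of decision-aware training via sample re-weighting.
# NOTE: the weight formula is a hypothetical stand-in; the paper derives its
# weights from a Taylor expansion of the optimal decision loss.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def decision_aware_weights(y_demand, budget_sensitivity):
    """Hypothetical per-facility weights: emphasize facilities whose
    mis-predictions would distort the downstream allocation the most."""
    # e.g., weight by the dispersion of demand across medicines, scaled by
    # how tightly the budget binds at that facility (both assumed inputs).
    return budget_sensitivity * (1.0 + y_demand.std(axis=1))

# X: facility-level features, Y: multi-task targets (demand per essential medicine)
rng = np.random.default_rng(0)
n_facilities, n_features, n_medicines = 1040, 12, 5
X = rng.normal(size=(n_facilities, n_features))
Y = rng.poisson(lam=20.0, size=(n_facilities, n_medicines)).astype(float)
budget_sensitivity = rng.uniform(0.5, 2.0, size=n_facilities)

w = decision_aware_weights(Y, budget_sensitivity)

# Multi-output random forest trained with decision-aware sample weights;
# the re-weighting is the only change relative to a standard prediction model.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, Y, sample_weight=w)
demand_forecast = model.predict(X)  # fed into the downstream allocation optimizer
```

Because the decision-awareness enters only through `sample_weight`, any learner that accepts per-sample weights could be substituted, which is the flexibility and scalability the abstract emphasizes.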
Related papers
- Can Learned Optimization Make Reinforcement Learning Less Difficult? [70.5036361852812]
We consider whether learned optimization can help overcome reinforcement learning difficulties.
Our method, Learned Optimization for Plasticity, Exploration and Non-stationarity (OPEN), meta-learns an update rule whose input features and output structure are informed by previously proposed solutions to these difficulties.
arXiv Detail & Related papers (2024-07-09T17:55:23Z) - Memory-Enhanced Neural Solvers for Efficient Adaptation in Combinatorial Optimization [6.713974813995327]
We present MEMENTO, an approach that leverages memory to improve the adaptation of neural solvers at inference time.
We successfully train all RL auto-regressive solvers on large instances, and show that MEMENTO can scale and is data-efficient.
Overall, MEMENTO pushes the state-of-the-art on 11 out of 12 evaluated tasks.
arXiv Detail & Related papers (2024-06-24T08:18:19Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective is used as a loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method (see the toy sketch after this list).
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Optimization Over Trained Neural Networks: Taking a Relaxing Walk [4.517039147450688]
We propose a more scalable solver based on exploring global and local linear relaxations of the neural network model.
Our solver is competitive with a state-of-the-art MILP solver and the prior state of the art, while producing better solutions as the input dimension, depth, and number of neurons increase.
arXiv Detail & Related papers (2024-01-07T11:15:00Z) - Optimizing Inventory Routing: A Decision-Focused Learning Approach using
Neural Networks [0.0]
We formulate and propose a decision-focused learning-based approach to solving real-world IRPs.
This approach directly integrates inventory prediction and routing optimization within an end-to-end system, potentially ensuring a robust supply chain strategy.
arXiv Detail & Related papers (2023-11-02T04:05:28Z) - ZooPFL: Exploring Black-box Foundation Models for Personalized Federated
Learning [95.64041188351393]
This paper endeavors to solve both the challenges of limited resources and personalization.
We propose a method named ZOOPFL that uses Zeroth-Order Optimization for Personalized Federated Learning.
To reduce the computation costs and enhance personalization, we propose input surgery to incorporate an auto-encoder with low-dimensional and client-specific embeddings.
arXiv Detail & Related papers (2023-10-08T12:26:13Z) - Multi-Resolution Active Learning of Fourier Neural Operators [33.63483360957646]
We propose Multi-Resolution Active learning of FNO (MRA-FNO), which can dynamically select the input functions and resolutions to lower the data cost as much as possible.
Specifically, we propose a probabilistic multi-resolution FNO and use ensemble Monte-Carlo to develop an effective posterior inference algorithm.
We have shown the advantage of our method in several benchmark operator learning tasks.
arXiv Detail & Related papers (2023-09-29T04:41:27Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which yields faster, more stable, and more accurate convergence.
Our model's network parameters are reduced to only 37% of the state-of-the-art model's, and the solution gap of our model relative to the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
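The "dual objective as a loss function" idea from the Deep Augmented Lagrangian entry above can be illustrated with a toy sketch. Everything below is an assumption for illustration: the equality-constrained QP, the linear dual predictor, and the learning rate are stand-ins, not the paper's actual model or solver.

```python
# Toy sketch (not the paper's implementation): train a linear model to predict
# dual variables for the QP  min_x 0.5*||x||^2  s.t.  A x = b,  by ascending the
# dual objective  g(lam) = -0.5 * lam^T (A A^T) lam - lam^T b.
# Maximizing g over the predictor's weights is the "dual objective as loss" idea;
# recovering x(lam) = -A^T lam then moves toward primal feasibility, as in Dual Ascent.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 8                       # constraints, variables
A = rng.normal(size=(m, n))
Q = A @ A.T                       # matrix appearing in the dual objective

W = np.zeros((m, m))              # linear dual predictor: lam = W @ b
lr = 0.01
for step in range(3000):
    b = rng.normal(size=m)        # sample a problem instance
    lam = W @ b                   # predicted duals
    grad_lam = -Q @ lam - b       # gradient of g with respect to lam
    W += lr * np.outer(grad_lam, b)   # gradient *ascent* on the dual objective

# Evaluate: predicted duals should yield a near-feasible primal solution.
b_test = rng.normal(size=m)
lam = W @ b_test
x = -A.T @ lam                    # primal minimizer of the Lagrangian
print("constraint residual:", np.linalg.norm(A @ x - b_test))
```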