Machine Learning and Constraint Programming for Efficient Healthcare Scheduling
- URL: http://arxiv.org/abs/2409.07547v1
- Date: Wed, 11 Sep 2024 18:09:25 GMT
- Title: Machine Learning and Constraint Programming for Efficient Healthcare Scheduling
- Authors: Aymen Ben Said, Malek Mouhoub
- Abstract summary: We tackle the Nurse Scheduling Problem (NSP).
In the implicit solving approach, we rely on Machine Learning methods that use historical data to learn and generate new solutions through the constraints and objectives that may be embedded in the learned patterns.
To compensate for the uncertainty of the implicit approach, whose constraints and objectives may not be concretely visible in the produced solutions, we propose an explicit approach in which we first model the NSP using the Constraint Satisfaction Problem (CSP) framework.
Since our implicit approach may not guarantee the feasibility or optimality of the generated solution, we propose a data-driven approach to passively learn the NSP as a constraint network.
- Score: 0.8287206589886879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving combinatorial optimization problems involves satisfying a set of hard constraints while optimizing some objectives. In this context, exact or approximate methods can be used. While exact methods guarantee the optimal solution, they often come with an exponential running time, as opposed to approximate methods that trade solution quality for a better running time. In this work, we tackle the Nurse Scheduling Problem (NSP). The NSP consists of assigning nurses to daily shifts within a planning horizon such that workload constraints are satisfied while hospital costs and nurse preferences are optimized. To solve the NSP, we propose implicit and explicit approaches. In the implicit solving approach, we rely on Machine Learning methods that use historical data to learn and generate new solutions through the constraints and objectives that may be embedded in the learned patterns. To quantify how well our implicit approach captures the embedded constraints and objectives, we rely on the Frobenius norm, a quality measure used to compute the average error between the generated solutions and the historical data. To compensate for the uncertainty of the implicit approach, whose constraints and objectives may not be concretely visible in the produced solutions, we propose an alternative explicit approach where we first model the NSP using the Constraint Satisfaction Problem (CSP) framework. We then develop Stochastic Local Search methods and a new Branch and Bound algorithm enhanced with constraint propagation techniques and variable/value ordering heuristics. Since our implicit approach may not guarantee the feasibility or optimality of the generated solutions, we propose a data-driven approach to passively learn the NSP as a constraint network. The learned constraint network, formulated as a CSP, is then solved using the methods listed earlier.
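The Frobenius-norm quality measure mentioned in the abstract is easy to reproduce. The minimal sketch below is not the authors' code: it assumes schedules are encoded as binary nurse-by-slot matrices and that the measure averages the norm of the difference over the historical set; both the encoding and the averaging are assumptions made for illustration.
```python
import numpy as np

def frobenius_error(generated: np.ndarray, historical: list[np.ndarray]) -> float:
    """Average Frobenius-norm distance between a generated schedule and a
    set of historical schedules, each encoded as a binary matrix of shape
    (nurses, days * shifts) where entry (n, t) = 1 iff nurse n works slot t.
    The encoding and the averaging over the history are assumptions."""
    errors = [np.linalg.norm(generated - h, ord="fro") for h in historical]
    return float(np.mean(errors))

# Toy usage: 3 nurses, 2 days x 2 shifts = 4 slots.
history = [np.array([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, 0, 1]])]
candidate = np.array([[1, 0, 1, 0],
                      [0, 1, 1, 0],
                      [1, 0, 0, 1]])
print(frobenius_error(candidate, history))   # small error -> close to history
```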
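On the explicit side, the paper models the NSP as a CSP and solves it with its own Stochastic Local Search and propagation-enhanced Branch and Bound algorithms, which are not reproduced here. The sketch below is only an illustrative CSP formulation of a toy NSP using a generic off-the-shelf CP solver (Google OR-Tools CP-SAT), where the per-nurse weekly workload cap is "passively learned" as the largest load observed in synthetic historical rosters. The variable layout, constraint set, solver choice, and cost placeholder are all assumptions made for the example.
```python
import numpy as np
from ortools.sat.python import cp_model

# Toy dimensions (assumed for illustration).
NURSES, DAYS, SHIFTS = 4, 7, 3     # e.g. morning / evening / night shifts
COVERAGE = 1                       # nurses required on every shift

# Synthetic "historical" rosters; the workload cap is passively learned
# from them as the largest weekly load ever observed.
rng = np.random.default_rng(0)
historical = [rng.integers(0, 2, (NURSES, DAYS * SHIFTS)) for _ in range(5)]
max_weekly = int(max(h.sum(axis=1).max() for h in historical))

model = cp_model.CpModel()
x = {(n, d, s): model.NewBoolVar(f"x_{n}_{d}_{s}")
     for n in range(NURSES) for d in range(DAYS) for s in range(SHIFTS)}

for d in range(DAYS):
    for s in range(SHIFTS):
        # Hard coverage constraint: enough nurses on every shift.
        model.Add(sum(x[n, d, s] for n in range(NURSES)) >= COVERAGE)

for n in range(NURSES):
    for d in range(DAYS):
        # At most one shift per nurse per day.
        model.Add(sum(x[n, d, s] for s in range(SHIFTS)) <= 1)
    # Weekly workload cap taken from the historical data (learned bound).
    model.Add(sum(x[n, d, s] for d in range(DAYS) for s in range(SHIFTS)) <= max_weekly)

# Placeholder objective standing in for hospital costs / nurse preferences.
model.Minimize(sum(x.values()))

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for n in range(NURSES):
        week = [int(solver.Value(x[n, d, s]))
                for d in range(DAYS) for s in range(SHIFTS)]
        print(f"nurse {n}: {week}")
else:
    print("no feasible schedule under the learned workload cap")
```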
Related papers
- Optimization Proxies using Limited Labeled Data and Training Time -- A Semi-Supervised Bayesian Neural Network Approach [2.943640991628177]
Constrained optimization problems arise in various engineering system operations such as inventory management and electric power grids.
This work introduces a learning scheme using Bayesian Neural Networks (BNNs) to solve constrained optimization problems under limited labeled data and restricted training times.
We show that the proposed learning method outperforms conventional BNN and deep neural network (DNN) architectures.
arXiv Detail & Related papers (2024-10-04T02:10:20Z) - Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z) - Threshold-aware Learning to Generate Feasible Solutions for Mixed Integer Programs [5.28005598366543]
Neural diving (ND) is a learning-based approach to generating partial discrete variable assignments in Mixed Integer Programs (MIPs).
We introduce a post-hoc method and a learning-based approach for optimizing the coverage.
Experimental results demonstrate that learning a deep neural network to estimate the coverage for finding high-quality feasible solutions achieves state-of-the-art performance on the NeurIPS ML4CO datasets.
arXiv Detail & Related papers (2023-08-01T07:03:16Z) - Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z) - Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z) - On data-driven chance constraint learning for mixed-integer optimization problems [0.0]
We develop a Chance Constraint Learning (CCL) methodology with a focus on mixed-integer linear optimization problems.
CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables.
Open-access software has been developed for use by practitioners.
arXiv Detail & Related papers (2022-07-08T11:54:39Z) - Solving the capacitated vehicle routing problem with timing windows using rollouts and MAX-SAT [4.873362301533824]
Vehicle routing is a well-known class of NP-hard optimisation problems.
Recent work in reinforcement learning has offered a promising alternative approach.
This paper proposes a hybrid approach that combines reinforcement learning, policy rollouts, and a satisfiability solver.
arXiv Detail & Related papers (2022-06-14T06:27:09Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex, large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Teaching the Old Dog New Tricks: Supervised Learning with Constraints [18.88930622054883]
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems.
Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output.
Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver.
arXiv Detail & Related papers (2020-02-25T09:47:39Z)