Learning From Scenarios for Stochastic Repairable Scheduling
- URL: http://arxiv.org/abs/2312.03492v2
- Date: Thu, 15 Aug 2024 13:21:26 GMT
- Title: Learning From Scenarios for Stochastic Repairable Scheduling
- Authors: Kim van den Houten, David M. J. Tax, Esteban Freydell, Mathijs de Weerdt
- Abstract summary: We show how decision-focused learning techniques based on smoothing can be adapted to a scheduling problem.
We include an experimental evaluation to investigate in which situations decision-focused learning outperforms the state of the art for this setting: scenario-based optimization.
- Score: 3.9948520633731026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When optimizing problems with uncertain parameter values in a linear objective, decision-focused learning enables end-to-end learning of these values. We are interested in a stochastic scheduling problem, in which processing times are uncertain, which brings uncertain values in the constraints, and thus repair of an initial schedule may be needed. Historical realizations of the stochastic processing times are available. We show how existing decision-focused learning techniques based on stochastic smoothing can be adapted to this scheduling problem. We include an extensive experimental evaluation to investigate in which situations decision-focused learning outperforms the state of the art for such situations: scenario-based stochastic optimization.
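The smoothing idea in the abstract can be sketched in a few lines. The sketch below is an invented toy (two jobs, one machine, a single historical scenario; all names and values are assumptions for illustration), not the paper's implementation: the scheduling decision is piecewise constant in the predicted processing times, so the exact gradient is zero almost everywhere, while Gaussian smoothing with a score-function estimator still yields a usable descent direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss(predicted_times):
    """Toy task loss: order two jobs on one machine by shortest predicted
    time first, then pay the total completion time under one historical
    realization. Piecewise constant in the prediction, so its true
    gradient is zero almost everywhere."""
    order = np.argsort(predicted_times)       # discrete scheduling decision
    realized = np.array([3.0, 1.0])           # one historical scenario
    return np.cumsum(realized[order]).sum()   # sum of completion times

def smoothed_grad(theta, sigma=0.5, n=2000):
    """Score-function estimate of grad_theta E_{z ~ N(theta, sigma^2 I)}[L(z)],
    i.e. E[L(z) (z - theta)] / sigma^2, with a mean baseline for variance."""
    z = theta + sigma * rng.normal(size=(n, theta.size))
    losses = np.array([task_loss(zi) for zi in z])
    losses = losses - losses.mean()           # baseline: reduces variance
    return (losses[:, None] * (z - theta)).mean(axis=0) / sigma ** 2

theta = np.array([1.0, 2.0])                  # initial predicted processing times
for _ in range(50):
    theta = theta - 0.1 * smoothed_grad(theta)
```

After a few dozen steps the predictions swap order, so the job with the shorter realized time is scheduled first, even though the loss itself provides no exact gradient signal.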
Related papers
- Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization [0.0]
We present an integrated learning and optimization procedure that yields the best approximation of the unknown uncertain quantity for the optimization problem at hand.
Numerical experiments on inventory problems from the literature, as well as on a bike-sharing problem with real data, demonstrate that the proposed approach performs well.
arXiv Detail & Related papers (2024-11-05T21:54:50Z) - Embedding generalization within the learning dynamics: An approach based-on sample path large deviation theory [0.0]
We consider an empirical risk perturbation based learning problem that exploits methods from a continuous-time perspective.
We provide an estimate in the small noise limit based on the Freidlin-Wentzell theory of large deviations.
We also present a computational algorithm that solves the corresponding variational problem, leading to optimal point estimates.
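For context, the small-noise estimate referenced here typically takes the standard Freidlin-Wentzell form (a textbook statement, not the paper's specific result): for a diffusion with noise level $\varepsilon$, sample-path probabilities concentrate according to an action functional,

```latex
dX_t^{\varepsilon} = b(X_t^{\varepsilon})\,dt + \varepsilon\,dW_t,
\qquad
S_T(\varphi) = \frac{1}{2} \int_0^T \bigl\| \dot{\varphi}_t - b(\varphi_t) \bigr\|^2 \, dt,
\qquad
\mathbb{P}\bigl(X^{\varepsilon} \approx \varphi\bigr) \asymp \exp\!\bigl(-S_T(\varphi)/\varepsilon^{2}\bigr).
```

The variational problem mentioned in the summary is then the minimization of $S_T$ over admissible paths $\varphi$.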
arXiv Detail & Related papers (2024-08-04T23:31:35Z) - Score Function Gradient Estimation to Widen the Applicability of Decision-Focused Learning [17.962860438133312]
The decision-focused learning (DFL) paradigm overcomes this limitation by training to directly minimize a task loss, e.g., regret.
We propose an alternative method that makes no such assumptions; it combines smoothing with score function estimation and works with any task loss.
Experiments show that it typically requires more epochs, but that it is on par with specialized methods and, in terms of solution quality, scalability, or both, performs especially well on the difficult case of problems with uncertainty in the constraints.
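A minimal sketch of the score-function idea, checked against closed forms (the function names and test cases are illustrative, not from the paper). The estimator needs only evaluations of the task loss, so it applies even when the loss is a nondifferentiable step:

```python
import numpy as np

rng = np.random.default_rng(1)

def score_grad(f, theta, sigma=1.0, n=200_000):
    """Score-function (REINFORCE) estimate of
    d/d_theta E_{z ~ N(theta, sigma^2)}[f(z)] = E[f(z) (z - theta)] / sigma^2,
    with a mean baseline for variance reduction. Requires only
    evaluations of f, so f may be nondifferentiable."""
    z = theta + sigma * rng.normal(size=n)
    fz = f(z)
    return np.mean((fz - fz.mean()) * (z - theta)) / sigma ** 2

# f(z) = z^2 has the closed form E[f] = theta^2 + sigma^2, gradient 2*theta
g_smooth = score_grad(lambda z: z ** 2, theta=1.5)

# f(z) = 1{z > 0} is a step function, yet E[f] = Phi(theta) is smooth,
# with gradient phi(theta) = 0.3989... at theta = 0
g_step = score_grad(lambda z: (z > 0).astype(float), theta=0.0)
```

Both estimates converge to the analytic gradients (2·1.5 = 3 and the standard normal density at 0), illustrating why the approach works for any task loss.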
arXiv Detail & Related papers (2023-07-11T12:32:13Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Data-driven Prediction of Relevant Scenarios for Robust Optimization [0.0]
We study robust one- and two-stage problems with discrete uncertainty sets.
We propose a data-driven computation to seed the iterative solution method with a set of starting scenarios.
Our experiments show that predicting even a small number of good start scenarios by our method can considerably reduce the time of the iterative methods.
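The seeding idea can be sketched as follows, on invented data (the cost matrix, the mean-cost heuristic standing in for a learned scenario predictor, and all names are assumptions for illustration). The iterative method optimizes against a small active set of scenarios and adds the most violated one until the incumbent decision is robust:

```python
import numpy as np

rng = np.random.default_rng(2)
cost = rng.uniform(0.0, 1.0, size=(200, 8))   # cost[s, j]: scenario s, decision j

def iterative_solve(start_scenarios):
    """Scenario generation for the robust problem min_j max_s cost[s, j]:
    optimize against a small active scenario set, add the most violated
    scenario, repeat until the incumbent is robust against all scenarios."""
    active = list(start_scenarios)
    for iteration in range(1, cost.shape[0] + 1):
        j = int(np.argmin(cost[active].max(axis=0)))  # best decision vs. active set
        t = cost[active, j].max()                     # its active-set worst case
        worst = int(np.argmax(cost[:, j]))            # most violated scenario
        if cost[worst, j] <= t:                       # already robust: done
            return j, t, iteration
        active.append(worst)

true_val = cost.max(axis=0).min()                     # full robust optimum, for reference

# naive start (one arbitrary scenario) vs. a "predicted" start; the highest
# mean-cost scenarios stand in for the output of a learned relevance model
_, t_naive, iters_naive = iterative_solve([0])
predicted = np.argsort(cost.mean(axis=1))[-3:]
_, t_pred, iters_pred = iterative_solve(list(predicted))
```

Both starts reach the same robust optimum; a well-chosen start set can simply reach it in fewer iterations, which is the effect the paper's predictor aims for.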
arXiv Detail & Related papers (2022-03-30T19:52:29Z) - Active Inference for Stochastic Control [1.3124513975412255]
Active inference has emerged as an alternative approach to control problems given its intuitive (probabilistic) formalism.
We build upon this work to assess the utility of active inference for a control setting.
Our results demonstrate the advantage of using active inference, compared to reinforcement learning, in both deterministic and partially observable settings.
arXiv Detail & Related papers (2021-08-27T12:51:42Z) - Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems and presents a theorem on how close the approximate solutions can be to optimal ones.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z) - Time-varying Gaussian Process Bandit Optimization with Non-constant Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle the non-constant evaluation time.
Our bound shows that the pattern of the evaluation-time sequence can strongly affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
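The perturbation idea can be sketched for the argmax operator (names and test values invented). With standard Gumbel noise, the perturbed expectation has the well-known closed form softmax, which makes the smoothing easy to verify:

```python
import numpy as np

rng = np.random.default_rng(3)

def perturbed_argmax(theta, n=100_000):
    """Monte Carlo estimate of E_Z[one_hot(argmax(theta + Z))], the
    perturbed (smoothed) version of the piecewise-constant argmax map.
    With standard Gumbel noise this expectation equals softmax(theta)."""
    z = rng.gumbel(size=(n, theta.size))
    idx = np.argmax(theta + z, axis=1)
    return np.bincount(idx, minlength=theta.size) / n

theta = np.array([1.0, 2.0, 0.5])
softmax = np.exp(theta) / np.exp(theta).sum()
smoothed = perturbed_argmax(theta)   # close to softmax
```

Unlike the raw argmax, the smoothed map varies continuously with `theta` and is never locally constant, which is what makes it usable inside gradient-based learning.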
This list is automatically generated from the titles and abstracts of the papers in this site.