Two-Stage Predict+Optimize for Mixed Integer Linear Programs with
Unknown Parameters in Constraints
- URL: http://arxiv.org/abs/2311.08022v1
- Date: Tue, 14 Nov 2023 09:32:02 GMT
- Title: Two-Stage Predict+Optimize for Mixed Integer Linear Programs with
Unknown Parameters in Constraints
- Authors: Xinyi Hu, Jasper C.H. Lee, Jimmy H.M. Lee
- Abstract summary: We give a new simpler and more powerful framework called Two-Stage Predict+Optimize, which we believe should be the canonical framework for the Predict+Optimize setting.
We also give a training algorithm usable for all mixed integer linear programs, vastly generalizing the applicability of the framework.
- Score: 16.15084484295732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider the setting of constrained optimization, with some parameters
unknown at solving time and requiring prediction from relevant features.
Predict+Optimize is a recent framework for end-to-end training supervised
learning models for such predictions, incorporating information about the
optimization problem in the training process in order to yield better
predictions in terms of the quality of the predicted solution under the true
parameters. Almost all prior works have focused on the special case where the
unknowns appear only in the optimization objective and not the constraints. Hu
et al.~proposed the first adaptation of Predict+Optimize to handle unknowns
appearing in constraints, but the framework has somewhat ad-hoc elements, and
they provided a training algorithm only for covering and packing linear
programs. In this work, we give a new \emph{simpler} and \emph{more powerful}
framework called \emph{Two-Stage Predict+Optimize}, which we believe should be
the canonical framework for the Predict+Optimize setting. We also give a
training algorithm usable for all mixed integer linear programs, vastly
generalizing the applicability of the framework. Experimental results
demonstrate the superior prediction performance of our training framework over
all classical and state-of-the-art methods.
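The core idea of Predict+Optimize is to evaluate a prediction not by its accuracy but by the quality of the decision it induces under the true parameters (the post-hoc regret). A minimal sketch of that evaluation, using a toy problem with an explicitly enumerated feasible set so no MILP solver is needed (the problem data here are illustrative, not from the paper):

```python
import numpy as np

# Toy feasible set: each row is one feasible 0/1 decision vector,
# enumerated explicitly so "solving" is just an argmax.
FEASIBLE = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
])

def solve(c):
    """Return the feasible decision maximizing the (predicted) value c.x."""
    return FEASIBLE[np.argmax(FEASIBLE @ c)]

def regret(c_pred, c_true):
    """Post-hoc regret: value lost by acting on the prediction
    instead of the true-optimal decision."""
    best = (FEASIBLE @ c_true).max()
    return best - c_true @ solve(c_pred)

c_true = np.array([3.0, 1.0, 2.0])
c_pred = np.array([0.5, 3.0, 1.0])   # a poor prediction
print(regret(c_pred, c_true))        # 1.0: the induced decision loses value
```

End-to-end training then minimizes this regret-style loss over the predictive model's parameters, rather than a standard prediction error such as mean squared error.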
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Decision-focused predictions via pessimistic bilevel optimization: a computational study [0.7499722271664147]
Uncertainty in optimization parameters is an important and longstanding challenge.
We build predictive models evaluated by a regret measure on the decisions taken with them.
We show various computational techniques to achieve tractability.
arXiv Detail & Related papers (2023-12-29T15:05:00Z) - CaVE: A Cone-Aligned Approach for Fast Predict-then-optimize with Binary Linear Programs [23.00554768496448]
This work focuses on binary linear programs (BLPs) and proposes a new end-to-end training method for predict-then-optimize.
Our method, Cone-aligned Vector Estimation (CaVE), aligns the predicted cost vectors with the normal cone corresponding to the true optimal solution of a training instance.
arXiv Detail & Related papers (2023-12-12T20:24:19Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Predict+Optimize for Packing and Covering LPs with Unknown Parameters in
Constraints [5.762370982168012]
We propose a novel and practically relevant framework for the Predict+Optimize setting, with unknown parameters appearing in both the objective and the constraints.
We introduce the notion of a correction function, and an additional penalty term in the loss function, modelling practical scenarios where an estimated optimal solution can be modified into a feasible solution after the true parameters are revealed.
Our approach is inspired by the prior work of Mandi and Guns, though with crucial modifications and re-derivations for our very different setting.
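The correction-function idea can be sketched concretely: when the true constraint parameters are revealed, an infeasible estimated solution is repaired, and the loss charges both the objective of the repaired solution and a penalty for the repair. The greedy repair rule, names, and data below are a hypothetical illustration for a single packing constraint, not the paper's exact construction:

```python
import numpy as np

def correct(x, a_true, b, values):
    """Greedily drop the least valuable selected item until a_true.x <= b."""
    x = x.copy()
    while a_true @ x > b:
        chosen = np.flatnonzero(x)
        x[chosen[np.argmin(values[chosen])]] = 0
    return x

def post_hoc_loss(x_est, a_true, b, values, penalty=0.5):
    """Negated value of the corrected solution, plus a penalty per dropped item."""
    x_corr = correct(x_est, a_true, b, values)
    n_dropped = int(x_est.sum() - x_corr.sum())
    return -(values @ x_corr) + penalty * n_dropped

values = np.array([4.0, 3.0, 2.0])
a_true = np.array([2.0, 2.0, 2.0])
x_est = np.array([1, 1, 1])          # infeasible once a_true is revealed (6 > 4)
print(post_hoc_loss(x_est, a_true, b=4.0, values=values))
```

Here one item is dropped to restore feasibility, and the penalty term models the real-world cost of revising a committed decision.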
arXiv Detail & Related papers (2022-09-08T09:28:24Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Integrated Optimization of Predictive and Prescriptive Tasks [0.0]
We propose a new framework that directly integrates predictive tasks into prescriptive tasks.
We train the parameters of the predictive algorithm within a prescriptive problem via bilevel optimization techniques.
arXiv Detail & Related papers (2021-01-02T02:43:10Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The Predict+Optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.