PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for
Linear and Integer Programming
- URL: http://arxiv.org/abs/2206.14234v2
- Date: Fri, 14 Apr 2023 14:21:42 GMT
- Title: PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for
Linear and Integer Programming
- Authors: Bo Tang, Elias B. Khalil
- Abstract summary: We present the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python.
PyEPO is the first such generic tool for linear and integer programming with predicted objective function coefficients.
- Score: 9.764407462807588
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deterministic optimization, it is typically assumed that all problem
parameters are fixed and known. In practice, however, some parameters may be a
priori unknown but can be estimated from historical data. A typical
predict-then-optimize approach separates predictions and optimization into two
stages. Recently, end-to-end predict-then-optimize has become an attractive
alternative. In this work, we present the PyEPO package, a PyTorch-based
end-to-end predict-then-optimize library in Python. To the best of our
knowledge, PyEPO (pronounced like pineapple with a silent "n") is the first
such generic tool for linear and integer programming with predicted objective
function coefficients. It provides four base algorithms: a convex surrogate
loss function from the seminal work of Elmachtoub and Grigas [16], a
differentiable black-box solver approach of Pogancic et al. [35], and two
differentiable perturbation-based methods from Berthet et al. [6]. PyEPO
provides a simple interface for the definition of new optimization problems,
the implementation of state-of-the-art predict-then-optimize training
algorithms, the use of custom neural network architectures, and the comparison
of end-to-end approaches with the two-stage approach. PyEPO enables us to
conduct a comprehensive set of experiments comparing a number of end-to-end and
two-stage approaches along axes such as prediction accuracy, decision quality,
and running time on problems such as Shortest Path, Multiple Knapsack, and the
Traveling Salesperson Problem. We discuss some empirical insights from these
experiments, which could guide future research. PyEPO and its documentation are
available at https://github.com/khalil-research/PyEPO.
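As a concrete illustration of the interface described in the abstract, below is a minimal training sketch modeled on the shortest-path example in the PyEPO documentation. This is a sketch under stated assumptions rather than canonical usage: it assumes a recent PyEPO release with the Gurobi backend (pyepo.model.grb requires a working gurobipy installation), and names such as genData, optDataset, and SPOPlus follow the package's documented API but may differ across versions.

```python
# Minimal end-to-end predict-then-optimize sketch with PyEPO:
# shortest path on a 5x5 grid, trained with the SPO+ surrogate loss.
# Assumes the Gurobi backend (pyepo.model.grb needs gurobipy); exact
# signatures follow the PyEPO documentation but may vary by version.
import torch
from torch import nn
from torch.utils.data import DataLoader
import pyepo

num_data, num_feat, grid = 1000, 5, (5, 5)

# Synthetic features x and true arc costs c from PyEPO's data generator
x, c = pyepo.data.shortestpath.genData(num_data, num_feat, grid,
                                       deg=4, noise_width=0.5)

# Optimization model: shortest path on the grid, solved with Gurobi
optmodel = pyepo.model.grb.shortestPathModel(grid)

# optDataset pre-solves each instance to cache the optimal solution w
# and optimal objective value z alongside (x, c)
dataset = pyepo.data.dataset.optDataset(optmodel, x, c)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Linear predictor from features to arc costs; the grid has
# (h-1)*w + (w-1)*h = 40 directed arcs for h = w = 5
num_arcs = (grid[0] - 1) * grid[1] + (grid[1] - 1) * grid[0]
predmodel = nn.Linear(num_feat, num_arcs)

# SPO+ surrogate loss of Elmachtoub and Grigas; each forward pass
# solves one linear program per sample
spop = pyepo.func.SPOPlus(optmodel, processes=1)

optimizer = torch.optim.Adam(predmodel.parameters(), lr=1e-2)
for epoch in range(10):
    for xb, cb, wb, zb in loader:
        cp = predmodel(xb)                  # predicted arc costs
        loss = spop(cp, cb, wb, zb).mean()  # no-op if already reduced
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The other base algorithms live in the same pyepo.func module (black-box and perturbation-based layers) and plug into the same data pipeline, though their forward passes return optimal solutions rather than loss values, so the training loop changes slightly.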
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Decision-focused predictions via pessimistic bilevel optimization: a computational study [0.7499722271664147]
Uncertainty in optimization parameters is an important and longstanding challenge.
We build predictive models that incorporate a regret measure on the decisions taken with them.
We show various computational techniques to achieve tractability.
arXiv Detail & Related papers (2023-12-29T15:05:00Z)
- PyBADS: Fast and robust black-box optimization in Python [11.4219428942199]
PyBADS is an implementation of the Bayesian Adaptive Direct Search (BADS) algorithm for fast and robust black-box optimization.
It comes with an easy-to-use Python interface for running the algorithm and inspecting its results.
arXiv Detail & Related papers (2023-06-27T15:54:44Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Efficient first-order predictor-corrector multiple objective optimization for fair misinformation detection [5.139559672771439]
Multiple-objective optimization (MOO) aims to simultaneously optimize multiple conflicting objectives and has found important applications in machine learning.
We propose a Gauss-Newton approximation that only scales linearly, and that requires only first-order inner-product per iteration.
The innovations make predictor-corrector possible for large networks.
arXiv Detail & Related papers (2022-09-15T12:32:15Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Planning and Learning with Adaptive Lookahead [74.39132848733847]
The Policy Iteration (PI) algorithm alternates between greedy one-step policy improvement and policy evaluation.
Recent literature shows that multi-step lookahead policy improvement leads to a better convergence rate at the expense of increased complexity per iteration.
We propose for the first time to dynamically adapt the multi-step lookahead horizon as a function of the state and of the value estimate.
arXiv Detail & Related papers (2022-01-28T20:26:55Z)
- PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python [8.603842959054939]
PEPit is a Python package enabling computer-assisted worst-case analyses of first-order optimization methods.
The package takes care of the SDP modeling parts, and the worst-case analysis is performed numerically via a standard solver.
arXiv Detail & Related papers (2022-01-11T16:35:22Z)
- The Perils of Learning Before Optimizing [16.97597806975415]
We show how prediction models can be learned end-to-end by differentiating through the optimization task.
We show that the performance gap between a two-stage and an end-to-end approach is closely related to the price of correlation concept in optimization.
arXiv Detail & Related papers (2021-06-18T20:43:47Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)