Interior Point Solving for LP-based prediction+optimisation
- URL: http://arxiv.org/abs/2010.13943v1
- Date: Mon, 26 Oct 2020 23:05:21 GMT
- Title: Interior Point Solving for LP-based prediction+optimisation
- Authors: Jayanta Mandi, Tias Guns
- Abstract summary: We investigate the use of the more principled logarithmic barrier term, as widely used in interior point solvers for linear programming.
Our approach performs as well as, if not better than, the state-of-the-art QPTL (Quadratic Programming task loss) formulation of Wilder et al. and the SPO approach of Elmachtoub and Grigas.
- Score: 14.028706088791473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving optimization problems is the key to decision making in many real-life
analytics applications. However, the coefficients of the optimization problems
are often uncertain and dependent on external factors, such as future demand or
energy or stock prices. Machine learning (ML) models, especially neural
networks, are increasingly being used to estimate these coefficients in a
data-driven way. Hence, end-to-end predict-and-optimize approaches, which
consider how effective the predicted values are for solving the optimization
problem, have received increasing attention. In the case of integer linear
programming problems, a popular approach to overcome their non-differentiability
is to add a quadratic penalty term to the continuous relaxation, such that
results from differentiating over quadratic programs can be used. Instead, we
investigate the use of the more principled logarithmic barrier term, as widely
used in interior point solvers for linear programming. Specifically, instead of
differentiating the KKT conditions, we consider the homogeneous self-dual
formulation of the LP, and we show the relation between the interior point step
direction and the corresponding gradients needed for learning. Finally, our
empirical experiments demonstrate that our approach performs as well as, if not
better than, the state-of-the-art QPTL (Quadratic Programming task loss)
formulation of Wilder et al. and the SPO approach of Elmachtoub and Grigas.
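To make the contrast drawn in the abstract concrete, here is a minimal sketch in our own notation; the symbols $\hat{c}$, $A$, $b$, $\gamma$ and $\mu$ below are illustrative and not taken from the paper. For the LP relaxation with predicted cost vector $\hat{c}$, the quadratic-penalty smoothing used by QPTL and the logarithmic-barrier smoothing used by interior point solvers read:

$$\text{LP relaxation:}\qquad \min_{x}\; \hat{c}^{\top} x \quad \text{s.t. } Ax = b,\; x \ge 0$$

$$\text{Quadratic penalty (QPTL):}\qquad \min_{x}\; \hat{c}^{\top} x + \gamma \lVert x \rVert_2^2 \quad \text{s.t. } Ax = b,\; x \ge 0$$

$$\text{Logarithmic barrier (interior point):}\qquad \min_{x}\; \hat{c}^{\top} x - \mu \sum_{i} \log x_i \quad \text{s.t. } Ax = b$$

Both added terms regularize the piecewise-constant LP solution map so that, almost everywhere, a task loss evaluated at the smoothed argmin can be differentiated with respect to $\hat{c}$ and hence with respect to the parameters of the model that predicts $\hat{c}$. The paper obtains these gradients from the interior point step direction of the homogeneous self-dual LP formulation rather than by differentiating the KKT conditions of a quadratic program.

For reference, the quadratic-penalty baseline can be sketched with an off-the-shelf differentiable convex optimization layer. This is a hedged illustration under our own assumptions, not the paper's implementation; the constraint data, penalty weight and task loss below are placeholders.

```python
# A minimal sketch (not the paper's code) of the quadratic-penalty, QPTL-style
# differentiable LP layer that the abstract contrasts with the log-barrier approach.
# Assumes cvxpy and cvxpylayers are installed; A, b, gamma and the task loss are placeholders.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, m = 10, 4                                    # number of variables / equality constraints
torch.manual_seed(0)
A = torch.rand(m, n).double().numpy()           # placeholder constraint matrix
b = A @ (0.5 * torch.ones(n).double().numpy())  # right-hand side chosen so x = 0.5 is feasible
gamma = 0.1                                     # quadratic penalty weight

x = cp.Variable(n)
c_hat = cp.Parameter(n)                         # predicted cost vector (would come from a neural net)
problem = cp.Problem(
    cp.Minimize(c_hat @ x + gamma * cp.sum_squares(x)),
    [A @ x == b, x >= 0],
)
layer = CvxpyLayer(problem, parameters=[c_hat], variables=[x])

c_pred = torch.rand(n, dtype=torch.float64, requires_grad=True)  # stand-in for a model's output
x_star, = layer(c_pred)                         # smoothed argmin, differentiable in c_pred
task_loss = x_star.sum()                        # stand-in for a regret-style task loss
task_loss.backward()                            # gradients flow back to c_pred and any model upstream
```

The penalty weight gamma trades off fidelity to the original LP against smoothness of the solution map; the abstract argues that the logarithmic barrier is the more principled term for this role, with the required gradients obtained from the interior point step direction rather than from a separately differentiated quadratic program.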
Related papers
- Smart Predict-then-Optimize Method with Dependent Data: Risk Bounds and Calibration of Autoregression [7.369846475695131]
We present an autoregressive SPO method directly targeting the optimization problem at the decision stage.
We conduct experiments to demonstrate the effectiveness of the SPO+ surrogate compared to the absolute loss and the least squares loss.
arXiv Detail & Related papers (2024-11-19T17:02:04Z)
- Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization [0.0]
We present an integrated learning and optimization procedure that yields the best approximation of an unknown situation.
Numerical results conducted with inventory problems from the literature as well as a bike-sharing problem with real data demonstrate that the proposed approach performs well.
arXiv Detail & Related papers (2024-11-05T21:54:50Z)
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Maximum Optimality Margin: A Unified Approach for Contextual Linear Programming and Inverse Linear Programming [10.06803520598035]
We develop a new approach to the problem, called maximum optimality margin, which designs the machine learning loss function based on the optimality condition of the downstream optimization.
arXiv Detail & Related papers (2023-01-26T17:53:38Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Efficient Learning of Decision-Making Models: A Penalty Block Coordinate Descent Algorithm for Data-Driven Inverse Optimization [12.610576072466895]
We consider the inverse problem where we use prior decision data to uncover the underlying decision-making process.
This statistical learning problem is referred to as data-driven inverse optimization.
We propose an efficient block coordinate descent-based algorithm to solve large problem instances.
arXiv Detail & Related papers (2022-10-27T12:52:56Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- Consistent Second-Order Conic Integer Programming for Learning Bayesian Networks [2.7473982588529653]
We study the problem of learning the sparse DAG structure of a BN from continuous observational data.
The optimal solution to this mathematical program is known to have desirable statistical properties under certain conditions.
We propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution.
arXiv Detail & Related papers (2020-05-29T00:13:15Z)