Maximum Optimality Margin: A Unified Approach for Contextual Linear
Programming and Inverse Linear Programming
- URL: http://arxiv.org/abs/2301.11260v2
- Date: Sun, 28 May 2023 20:22:25 GMT
- Title: Maximum Optimality Margin: A Unified Approach for Contextual Linear
Programming and Inverse Linear Programming
- Authors: Chunlin Sun, Shang Liu, Xiaocheng Li
- Abstract summary: We develop a new approach to the problem called maximum optimality margin, which designs the machine learning loss function by the optimality condition of the downstream optimization.
- Score: 10.06803520598035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the predict-then-optimize problem where the output of
a machine learning prediction task is used as the input of some downstream
optimization problem, say, the objective coefficient vector of a linear
program. The problem is also known as predictive analytics or contextual linear
programming. The existing approaches largely suffer from either (i)
optimization intractability (a non-convex objective function)/statistical
inefficiency (a suboptimal generalization bound) or (ii) requiring strong
condition(s) such as no constraint or loss calibration. We develop a new
approach to the problem called \textit{maximum optimality margin} which designs
the machine learning loss function by the optimality condition of the
downstream optimization. The max-margin formulation enjoys both computational
efficiency and good theoretical properties for the learning procedure. More
importantly, our new approach only needs the observations of the optimal
solution in the training data rather than the objective function, which makes
it a new and natural approach to the inverse linear programming problem under
both contextual and context-free settings; we also analyze the proposed method
under both offline and online settings, and demonstrate its performance using
numerical experiments.
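The loss-design idea can be illustrated with a minimal, hypothetical sketch. Assume the simplest possible downstream LP, min_x c^T x over the box [0,1]^n, where the optimal solution is x*_i = 1 exactly when c_i < 0; optimality of x* for a cost vector c then reduces to the componentwise sign condition c_i (1 - 2 x*_i) >= 0, and a hinge penalty on that condition gives a convex max-margin surrogate. All variable names and the box-constrained setup below are illustrative simplifications; the paper's formulation handles general LP constraints via the full optimality (reduced-cost) conditions.

```python
import numpy as np

# Hypothetical sketch: max-margin learning from LP optimality conditions.
# Downstream LP: min_x c^T x over [0,1]^n, so x*_i = 1 iff c_i < 0, and
# x* is optimal for c iff c_i * (1 - 2 x*_i) >= 0 for every i.
rng = np.random.default_rng(0)
n_features, n_dims, n_samples = 5, 4, 200
W_true = rng.normal(size=(n_dims, n_features))

Z = rng.normal(size=(n_samples, n_features))   # observed contexts
C = Z @ W_true.T                               # true costs (never used in training)
X_star = (C < 0).astype(float)                 # observed optimal solutions only

def margin_loss(W, Z, X_star, margin=1.0):
    """Hinge loss on the componentwise LP optimality condition."""
    s = (Z @ W.T) * (1.0 - 2.0 * X_star)       # >= 0 iff x* optimal for c_hat
    return np.maximum(0.0, margin - s).mean()

# Plain subgradient descent on the convex margin loss.
W = np.zeros((n_dims, n_features))
lr = 0.5
for _ in range(500):
    s = (Z @ W.T) * (1.0 - 2.0 * X_star)
    active = (s < 1.0).astype(float)           # margins still violated
    grad = -((active * (1.0 - 2.0 * X_star)).T @ Z) / n_samples
    W -= lr * grad

# Costs predicted from context should now reproduce the optimal solutions.
acc = ((Z @ W.T < 0).astype(float) == X_star).mean()
```

Note that, as in the paper's setting, training touches only the contexts and the observed optimal solutions `X_star`, never the true cost vectors `C`.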
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as a loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Operator SVD with Neural Networks via Nested Low-Rank Approximation [19.562492156734653]
This paper proposes a new optimization framework based on the low-rank approximation characterization of a truncated singular value decomposition.
A new technique called \emph{nesting} learns the top-$L$ singular values and singular functions in the correct order.
We demonstrate the effectiveness of the proposed framework for use cases in computational physics and machine learning.
arXiv Detail & Related papers (2024-02-06T03:06:06Z) - CaVE: A Cone-Aligned Approach for Fast Predict-then-optimize with Binary Linear Programs [23.00554768496448]
This work focuses on binary linear programs (BLPs) and proposes a new end-to-end training method to predict-then-optimize.
Our method, Cone-aligned Vector Estimation (CaVE), aligns the predicted cost vectors with the normal cone corresponding to the true optimal solution of a training instance.
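A simplified stand-in for the cone-alignment idea: for a binary linear program min_x c^T x over a finite feasible set F ⊂ {0,1}^n, a cost vector c is optimal for x* exactly when c^T (x - x*) >= 0 for every x in F. The hinge update below pushes a predicted cost vector into that normal cone; this is only an illustrative enumeration-based sketch (CaVE itself uses an angular/projection criterion and does not enumerate F), and the toy feasible set is an assumption.

```python
import itertools
import numpy as np

# Toy binary LP: feasible set F = all binary vectors with exactly two ones.
# Optimality of x* for cost c  <=>  c^T (x - x*) >= 0 for all x in F,
# i.e. c lies in the normal cone at x*.
rng = np.random.default_rng(1)
n = 4
F = np.array([x for x in itertools.product([0, 1], repeat=n) if sum(x) == 2],
             dtype=float)

c_true = rng.normal(size=n)
x_star = F[np.argmin(F @ c_true)]      # true optimal solution (the only label used)
D = F - x_star                         # cone directions: need c^T d >= 0 for all rows

# Push a predicted cost vector into the normal cone with a margin hinge.
c_hat = rng.normal(size=n)
margin, lr = 0.1, 0.1
for _ in range(200):
    viol = (D @ c_hat < margin).astype(float)   # directions still violating the margin
    c_hat += lr * (D.T @ viol)                  # hinge subgradient step into the cone

# A cost vector inside the cone reproduces the same optimizer.
same = bool(np.array_equal(F[np.argmin(F @ c_hat)], x_star))
```

As in CaVE, the supervision signal is the observed optimal solution x*, not the true cost vector.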
arXiv Detail & Related papers (2023-12-12T20:24:19Z) - Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and
Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z) - Teaching Networks to Solve Optimization Problems [13.803078209630444]
We propose to replace the iterative solvers altogether with a trainable parametric set function.
We show the feasibility of learning such parametric (set) functions to solve various classic optimization problems.
arXiv Detail & Related papers (2022-02-08T19:13:13Z) - Integrated Optimization of Predictive and Prescriptive Tasks [0.0]
We propose a new framework that directly integrates predictive tasks into prescriptive tasks.
We train the parameters of the predictive algorithm within the prescriptive problem via bilevel optimization techniques.
arXiv Detail & Related papers (2021-01-02T02:43:10Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in a complex training pipeline results in predictions that account for the downstream decision-making task.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.