Branch & Learn with Post-hoc Correction for Predict+Optimize with
Unknown Parameters in Constraints
- URL: http://arxiv.org/abs/2303.06698v1
- Date: Sun, 12 Mar 2023 16:23:58 GMT
- Title: Branch & Learn with Post-hoc Correction for Predict+Optimize with
Unknown Parameters in Constraints
- Authors: Xinyi Hu, Jasper C.H. Lee, Jimmy H.M. Lee
- Abstract summary: Post-hoc Regret is a loss function that takes into account the cost of correcting an unsatisfiable prediction.
We show how to compute Post-hoc Regret exactly for any optimization problem solvable by a recursive algorithm satisfying simple conditions.
- Score: 5.762370982168012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining machine learning and constrained optimization, Predict+Optimize
tackles optimization problems containing parameters that are unknown at the
time of solving. Prior works focus on cases with unknowns only in the
objectives. A new framework was recently proposed to cater for unknowns also in
constraints by introducing a loss function, called Post-hoc Regret, that takes
into account the cost of correcting an unsatisfiable prediction. Since Post-hoc
Regret is non-differentiable, the previous work computes only its
approximation. While the notion of Post-hoc Regret is general, its specific
implementation is applicable to only packing and covering linear programming
problems. In this paper, we first show how to compute Post-hoc Regret exactly
for any optimization problem solvable by a recursive algorithm satisfying
simple conditions. Experimentation demonstrates substantial improvement in the
quality of solutions as compared to the earlier approximation approach.
Furthermore, we show experimentally the empirical behavior of different
combinations of correction and penalty functions used in the Post-hoc Regret of
the same benchmarks. Results provide insights for defining the appropriate
Post-hoc Regret in different application scenarios.
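To make the loss concrete, here is a minimal sketch of Post-hoc Regret for a minimization problem; the `solve`, `obj`, `correct`, and `penalty` callables are illustrative placeholders, not the paper's API:

```python
def post_hoc_regret(theta_pred, theta_true, solve, obj, correct, penalty):
    """Post-hoc Regret for a minimization problem (illustrative sketch).

    solve(theta)       -> optimal solution under parameters theta
    obj(x, theta)      -> objective value of solution x under theta
    correct(x, theta)  -> feasible repair of x once the true theta is revealed
    penalty(x, x_corr) -> cost charged for having to apply the correction
    """
    x_est = solve(theta_pred)            # optimize against the prediction
    x_corr = correct(x_est, theta_true)  # repair any infeasibility post hoc
    x_true = solve(theta_true)           # hindsight optimum
    # regret = (corrected objective - true optimum) + correction penalty
    return obj(x_corr, theta_true) - obj(x_true, theta_true) + penalty(x_est, x_corr)
```

Under this reading, the regret is zero exactly when the (possibly corrected) estimated solution matches the hindsight optimum and no penalty is incurred.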
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Principled Preferential Bayesian Optimization [22.269732173306192]
We study the problem of preferential Bayesian optimization (BO), where we aim to optimize a black-box function with only preference feedback over a pair of candidate solutions.
An optimistic algorithm with an efficient computational method is then developed to solve the problem.
arXiv Detail & Related papers (2024-02-08T02:57:47Z)
- Time-Varying Gaussian Process Bandits with Unknown Prior [18.93478528448966]
PE-GP-UCB is capable of solving time-varying Bayesian optimisation problems.
It relies on the fact that the observed function values are consistent with at least one of the candidate priors.
arXiv Detail & Related papers (2024-02-02T18:52:16Z)
- Two-Stage Predict+Optimize for Mixed Integer Linear Programs with
Unknown Parameters in Constraints [16.15084484295732]
We give a new, simpler, and more powerful framework called Two-Stage Predict+Optimize, which we believe should be the canonical framework for the Predict+Optimize setting.
We also give a training algorithm usable for all mixed integer linear programs, vastly generalizing the applicability of the framework.
arXiv Detail & Related papers (2023-11-14T09:32:02Z)
- You Shall Pass: Dealing with the Zero-Gradient Problem in Predict and
Optimize for Convex Optimization [1.98873083514863]
Predict and optimize is an increasingly popular decision-making paradigm that employs machine learning to predict unknown parameters of optimization problems.
The key challenge in training such models is computing the Jacobian of the optimization problem's solution with respect to its parameters.
This paper demonstrates that the zero-gradient problem appears in the non-linear case as well -- the Jacobian can have a sizeable null space, thereby causing the training process to get stuck in suboptimal points.
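As a toy illustration of where such degenerate gradients come from in the linear case that this work generalizes (our own example, not the paper's construction): the argmin of an LP is piecewise constant in its cost vector, so gradients through the solver vanish almost everywhere.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: min c^T x subject to 0 <= x <= 1. The optimal x sits at a vertex
# of the box, so the map c -> x*(c) is piecewise constant and its Jacobian
# is zero almost everywhere -- the classic zero-gradient problem.
def solve_lp(c):
    return linprog(c, bounds=[(0, 1)] * len(c), method="highs").x

c = np.array([1.0, -2.0])
x0 = solve_lp(c)
x1 = solve_lp(c + np.array([1e-3, 0.0]))  # perturb one cost coefficient
print(x0, x1)  # same vertex => finite-difference "gradient" is zero
```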
arXiv Detail & Related papers (2023-07-30T19:14:05Z)
- Improved Regret for Efficient Online Reinforcement Learning with Linear
Function Approximation [69.0695698566235]
We study reinforcement learning with linear function approximation and adversarially changing cost functions.
We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback.
arXiv Detail & Related papers (2023-01-30T17:26:39Z)
- Minimalistic Predictions to Schedule Jobs with Online Precedence
Constraints [117.8317521974783]
We consider non-clairvoyant scheduling with online precedence constraints.
An algorithm is oblivious to any job dependencies and learns about a job only if all of its predecessors have been completed.
arXiv Detail & Related papers (2023-01-30T13:17:15Z)
- Predict+Optimize for Packing and Covering LPs with Unknown Parameters in
Constraints [5.762370982168012]
We propose a novel and practically relevant framework for the Predict+Optimize setting, but with unknown parameters in both the objective and the constraints.
We introduce the notion of a correction function, and an additional penalty term in the loss function, modelling practical scenarios where an estimated optimal solution can be modified into a feasible solution after the true parameters are revealed.
Our approach is inspired by the prior work of Mandi and Guns, though with crucial modifications and re-derivations for our very different setting.
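As one concrete possibility (our own sketch, assuming nonnegative constraint coefficients, not necessarily the paper's choice): for a packing LP max c^T x s.t. Ax <= b, x >= 0, an estimated solution that violates the true constraints can be scaled down to feasibility, with a penalty proportional to the objective value given up:

```python
import numpy as np

def scale_down_correction(x_est, A_true, b_true):
    """Repair an estimated packing-LP solution (max c^T x, A x <= b, x >= 0).

    With nonnegative constraint coefficients, shrinking x keeps x >= 0 and
    only decreases A x, so scaling by the worst violation ratio restores
    feasibility. This is one natural correction; many others are possible.
    """
    loads = A_true @ x_est
    safe = np.maximum(loads, 1e-12)            # guard against 0/0
    ratios = np.where(loads > 0, b_true / safe, np.inf)
    alpha = min(1.0, float(ratios.min()))      # shrink only if violated
    return alpha * x_est

def proportional_penalty(c, x_est, x_corr, rho=0.5):
    """Penalty proportional to the objective value given up by the repair."""
    return rho * max(0.0, float(c @ (x_est - x_corr)))
```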
arXiv Detail & Related papers (2022-09-08T09:28:24Z)
- Efficient and Optimal Algorithms for Contextual Dueling Bandits under
Realizability [59.81339109121384]
We study the $K$-armed contextual dueling bandit problem, a sequential decision making setting in which the learner uses contextual information to make two decisions, but only observes preference-based feedback suggesting that one decision was better than the other.
We provide a new algorithm that achieves the optimal regret rate for a new notion of best response regret, which is a strictly stronger performance measure than those considered in prior works.
arXiv Detail & Related papers (2021-11-24T07:14:57Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide and conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
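The exact computation in the main paper builds on this recursion idea: when each predicted coefficient is affine in a single model parameter, the loss becomes piecewise linear in that parameter, and the recursion combines such functions by addition and pointwise max. A toy sketch of those two combining operations, restricted to convex piecewise-linear functions represented as a max of affine pieces (our own illustration, not the authors' implementation):

```python
from itertools import product

class ConvexPWL:
    """Convex piecewise-linear function f(t) = max_i (a_i * t + b_i).

    Real implementations also handle non-convex piecewise-linear
    functions and prune dominated pieces.
    """
    def __init__(self, pieces):
        self.pieces = list(pieces)              # (slope, intercept) pairs

    def __call__(self, t):
        return max(a * t + b for a, b in self.pieces)

    def __add__(self, other):
        # (max_i l_i) + (max_j m_j) == max_{i,j} (l_i + m_j)
        return ConvexPWL((a1 + a2, b1 + b2) for (a1, b1), (a2, b2)
                         in product(self.pieces, other.pieces))

    def pointwise_max(self, other):
        return ConvexPWL(self.pieces + other.pieces)

f = ConvexPWL([(1.0, 0.0), (-1.0, 2.0)])        # max(t, 2 - t)
g = ConvexPWL([(0.0, 1.5)])                     # constant 1.5
print((f + g)(0.0), f.pointwise_max(g)(1.0))    # 3.5 1.5
```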
arXiv Detail & Related papers (2020-12-04T00:26:56Z)