Learning-Augmented Algorithms for Online Linear and Semidefinite
Programming
- URL: http://arxiv.org/abs/2209.10614v1
- Date: Wed, 21 Sep 2022 19:16:29 GMT
- Title: Learning-Augmented Algorithms for Online Linear and Semidefinite
Programming
- Authors: Elena Grigorescu, Young-San Lin, Sandeep Silwal, Maoyuan Song, Samson
Zhou
- Abstract summary: Semidefinite programming (SDP) is a unifying framework that generalizes both linear programming and quadratically-constrained quadratic programming.
There exist known impossibility results for approximating the optimal solution when constraints for covering SDPs arrive in an online fashion.
We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution.
- Score: 9.849604820019394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semidefinite programming (SDP) is a unifying framework that generalizes both
linear programming and quadratically-constrained quadratic programming, while
also yielding efficient solvers, both in theory and in practice. However, there
exist known impossibility results for approximating the optimal solution when
constraints for covering SDPs arrive in an online fashion. In this paper, we
study online covering linear and semidefinite programs in which the algorithm
is augmented with advice from a possibly erroneous predictor. We show that if
the predictor is accurate, we can efficiently bypass these impossibility
results and achieve a constant-factor approximation to the optimal solution,
i.e., consistency. On the other hand, if the predictor is inaccurate, under
some technical conditions, we achieve results that match both the classical
optimal upper bounds and the tight lower bounds up to constant factors, i.e.,
robustness.
More broadly, we introduce a framework that extends both (1) the online set
cover problem augmented with machine-learning predictors, studied by Bamas,
Maggiori, and Svensson (NeurIPS 2020), and (2) the online covering SDP problem,
initiated by Elad, Kale, and Naor (ICALP 2016). Specifically, we obtain general
online learning-augmented algorithms for covering linear programs with
fractional advice and constraints, and initiate the study of learning-augmented
algorithms for covering SDP problems.
Our techniques are based on the primal-dual framework of Buchbinder and Naor
(Mathematics of Operations Research, 34, 2009) and can be further adjusted to
handle constraints where the variables lie in a bounded region, i.e., box
constraints.
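To make the primal-dual recipe concrete, below is a minimal Python sketch of a learning-augmented multiplicative update for an online fractional covering LP, in the spirit of the Buchbinder-Naor update with a Bamas-Maggiori-Svensson-style advice term. The function name, the blending parameter `lam`, and the exact update rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def augmented_covering_lp(rows, costs, advice, lam=0.5, step=0.01):
    """Illustrative sketch (not the paper's exact algorithm) of a
    learning-augmented primal-dual update for the online fractional
    covering LP  min c^T x  s.t.  a_i^T x >= 1  for each arriving row
    a_i, x >= 0.  `advice` is the predictor's fractional solution;
    lam in [0, 1) interpolates between the classical multiplicative
    update (lam = 0) and trusting the advice (lam near 1)."""
    n = len(costs)
    x = np.zeros(n)
    for a in rows:
        while a @ x < 1.0:  # arriving constraint not yet covered
            # Multiplicative growth weighted by 1/cost and biased toward
            # the advice; the uniform (1 - lam)/n term keeps each update
            # strictly positive, so the loop terminates for lam < 1.
            x += step * (a / costs) * (x + lam * advice + (1 - lam) / n)
    return x

# Two rows arrive online; the predictor puts all its mass on variable 0.
costs = np.array([1.0, 2.0, 4.0])
advice = np.array([1.0, 0.0, 0.0])
rows = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(augmented_covering_lp(rows, costs, advice, lam=0.8))
```

Pushing `lam` toward 1 with accurate advice makes the iterate track the predictor's solution (consistency), while `lam` near 0 recovers the classical update (robustness).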
Related papers
- Learning-Augmented Algorithms for Online Concave Packing and Convex Covering Problems [4.9826534303287335]
We present learning-augmented algorithmic frameworks for two fundamental optimization settings.
For online packing with concave objectives, we present a simple but overarching strategy that switches between the advice and the state-of-the-art online algorithm.
We show that our algorithms break impossibility results when the advice is accurate, while maintaining comparable performance with state-of-the-art classical online algorithms even when the advice is erroneous.
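A minimal sketch of such an advice/online switching rule, assuming (hypothetically) that the per-round values of both alternatives are observable; the threshold test and the parameter `alpha` are illustrative, and the paper's actual switching condition may differ:

```python
def switching_strategy(advice_vals, online_vals, alpha=0.5):
    """Follow the advice while its cumulative value stays within a
    factor alpha of the classical algorithm's cumulative value; once
    it falls behind, commit to the online algorithm for the remaining
    rounds.  Hypothetical oracle access to per-round values."""
    adv_sum = onl_sum = total = 0.0
    following_advice = True
    for adv, onl in zip(advice_vals, online_vals):
        adv_sum += adv
        onl_sum += onl
        if following_advice and adv_sum < alpha * onl_sum:
            following_advice = False  # advice fell behind: switch for good
        total += adv if following_advice else onl
    return total

# Accurate advice: the strategy keeps following the predictor.
print(switching_strategy([3, 3, 3], [1, 1, 1]))  # 9.0
# Useless advice: the strategy switches immediately.
print(switching_strategy([0, 0, 0], [1, 1, 1]))  # 3.0
```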
arXiv Detail & Related papers (2024-11-13T04:27:25Z)
- A Simple Learning-Augmented Algorithm for Online Packing with Concave Objectives [4.9826534303287335]
We introduce and analyze a simple learning-augmented algorithm for online packing problems with linear constraints and concave objectives.
We further raise the problem of understanding necessary and sufficient conditions for when such simple black-box solutions may be optimal.
arXiv Detail & Related papers (2024-06-05T18:39:28Z)
- Learning Adversarial MDPs with Stochastic Hard Constraints [37.24692425018]
We study online learning problems in constrained Markov decision processes with adversarial losses and stochastic hard constraints.
We design an algorithm that achieves sublinear regret while ensuring that the constraints are satisfied at every episode with high probability.
arXiv Detail & Related papers (2024-03-06T12:49:08Z)
- Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective is used as a loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
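As a rough illustration of that idea, the sketch below uses the negated LP dual objective, plus an assumed quadratic penalty for dual infeasibility, as a loss whose gradient step emulates a penalized Dual Ascent update. The penalty weight `rho`, the projection onto y >= 0, and both function names are illustrative choices, not the paper's formulation; in the paper's setting the dual estimate would come from an ML model and the loss would be backpropagated through it.

```python
import numpy as np

def dual_loss(y, A, b, c, rho=10.0):
    """For the LP  min c^T x s.t. Ax >= b, x >= 0,  the dual is
    max b^T y s.t. A^T y <= c, y >= 0.  Loss = negated dual objective
    plus an (assumed) quadratic penalty on dual infeasibility."""
    slack = np.maximum(A.T @ y - c, 0.0)  # violation of A^T y <= c
    return -b @ y + rho * np.sum(slack ** 2)

def dual_ascent_step(y, A, b, c, lr=0.01, rho=10.0):
    """One gradient step on dual_loss, projected onto y >= 0; repeated
    steps emulate a penalized Dual Ascent iteration."""
    slack = np.maximum(A.T @ y - c, 0.0)
    grad = -b + 2.0 * rho * (A @ slack)  # gradient of dual_loss w.r.t. y
    return np.maximum(y - lr * grad, 0.0)
```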
arXiv Detail & Related papers (2024-03-06T04:43:22Z)
- Truly No-Regret Learning in Constrained MDPs [61.78619476991494]
We propose a model-based primal-dual algorithm to learn in an unknown CMDP.
We prove that our algorithm achieves sublinear regret without error cancellations.
arXiv Detail & Related papers (2024-02-24T09:47:46Z)
- Efficient Methods for Non-stationary Online Learning [67.3300478545554]
We present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $\mathcal{O}(\log T)$ to $1$.
Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods.
arXiv Detail & Related papers (2023-09-16T07:30:12Z)
- An Online Algorithm for Chance Constrained Resource Allocation [10.791923293928987]
This paper studies the online resource allocation problem (RAP) with chance constraints.
To the best of our knowledge, this is the first time chance constraints are introduced in the online RAP problem.
arXiv Detail & Related papers (2023-03-06T16:17:19Z)
- Minimalistic Predictions to Schedule Jobs with Online Precedence Constraints [117.8317521974783]
We consider non-clairvoyant scheduling with online precedence constraints.
The algorithm is oblivious to any job dependencies and learns about a job only once all of its predecessors have been completed.
arXiv Detail & Related papers (2023-01-30T13:17:15Z)
- Generalization Bounds for Data-Driven Numerical Linear Algebra [24.961270871124217]
Data-driven algorithms can adapt their internal structure or parameters to inputs from unknown application-specific distributions, by learning from a training sample of inputs.
Several recent works have applied this approach to problems in numerical linear algebra, obtaining significant empirical gains in performance.
In this work we prove generalization bounds for those algorithms, within the PAC-learning framework for data-driven algorithm selection proposed by Gupta and Roughgarden.
arXiv Detail & Related papers (2022-06-16T02:23:45Z)
- Smoothed Online Learning is as Easy as Statistical Learning [77.00766067963195]
We provide the first oracle-efficient, no-regret algorithms for the smoothed online learning setting.
We show that if a function class is learnable in the classical setting, then there is an oracle-efficient, no-regret algorithm for contextual bandits.
arXiv Detail & Related papers (2022-02-09T19:22:34Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)