Mixing predictions for online metric algorithms
- URL: http://arxiv.org/abs/2304.01781v2
- Date: Fri, 15 Dec 2023 20:24:55 GMT
- Title: Mixing predictions for online metric algorithms
- Authors: Antonios Antoniadis and Christian Coester and Marek Eliáš and
  Adam Polak and Bertrand Simon
- Abstract summary: We design algorithms that combine predictions and are competitive against such dynamic combinations.
Our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time.
An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
- Score: 34.849039387367455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major technique in learning-augmented online algorithms is combining
multiple algorithms or predictors. Since the performance of each predictor may
vary over time, it is desirable to use not the single best predictor as a
benchmark, but rather a dynamic combination which follows different predictors
at different times. We design algorithms that combine predictions and are
competitive against such dynamic combinations for a wide class of online
problems, namely, metrical task systems. Against the best (in hindsight)
unconstrained combination of $\ell$ predictors, we obtain a competitive ratio
of $O(\ell^2)$, and show that this is best possible. However, for a benchmark
with a slightly constrained number of switches between different predictors, we
can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be
adapted to access predictors in a bandit-like fashion, querying only one
predictor at a time. An unexpected implication of one of our lower bounds is a
new structural insight about covering formulations for the $k$-server problem.
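The dynamic benchmark in the abstract is closely related to the classical "tracking the best expert" setting. As a point of reference only (this is not the paper's metrical-task-system algorithm), a minimal sketch of the fixed-share multiplicative-weights update of Herbster and Warmuth, which competes with benchmark sequences that switch between predictors a bounded number of times:

```python
import math

def fixed_share(losses, eta=0.5, alpha=0.05):
    """Fixed-share multiplicative weights (Herbster-Warmuth), a minimal sketch.

    losses: list of rounds; each round is a list of per-predictor losses in [0, 1].
    Returns the sequence of weight vectors used at each round.

    The alpha mixing step keeps every predictor's weight bounded away from
    zero, which is what lets the algorithm track a benchmark that switches
    between predictors a bounded number of times.
    """
    n = len(losses[0])
    w = [1.0 / n] * n
    history = []
    for round_losses in losses:
        history.append(w[:])
        # Multiplicative update: downweight predictors that did badly this round.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
        total = sum(w)
        w = [wi / total for wi in w]
        # Fixed-share mixing: redistribute a small alpha mass uniformly.
        w = [(1 - alpha) * wi + alpha / n for wi in w]
    return history
```

The names `eta` and `alpha` are generic learning-rate and mixing parameters, not values taken from the paper; for metrical task systems the combination step must additionally account for movement costs in the metric space, which this sketch ignores.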
Related papers
- Competitive Algorithms for Online Knapsack with Succinct Predictions [16.793099279933163]
In the online knapsack problem, the goal is to pack items arriving online with different values and weights into a capacity-limited knapsack to maximize the total value of the accepted items.
We study learning-augmented algorithms for this problem, which aim to use machine-learned predictions to move beyond pessimistic worst-case guarantees.
arXiv Detail & Related papers (2024-06-26T20:38:00Z) - Competitive strategies to use "warm start" algorithms with predictions [12.970501425097645]
We consider the problem of learning and using predictions for "warm start" algorithms.
In this setting, an algorithm is given an instance of a problem, and a prediction of the solution.
We give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions.
arXiv Detail & Related papers (2024-05-06T17:38:20Z) - Sorting and Hypergraph Orientation under Uncertainty with Predictions [0.45880283710344055]
We study learning-augmented algorithms for sorting and hypergraph orientation under uncertainty.
Our algorithms provide improved performance guarantees for accurate predictions while maintaining worst-case guarantees that are best possible without predictions.
arXiv Detail & Related papers (2023-05-16T07:52:08Z) - Streaming Algorithms for Learning with Experts: Deterministic Versus
Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or time steps).
The goal is to incur the minimum cost, measured against the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
arXiv Detail & Related papers (2023-03-03T04:39:53Z) - Minimalistic Predictions to Schedule Jobs with Online Precedence
Constraints [117.8317521974783]
We consider non-clairvoyant scheduling with online precedence constraints.
An algorithm is oblivious to any job dependencies and learns about a job only if all of its predecessors have been completed.
arXiv Detail & Related papers (2023-01-30T13:17:15Z) - Algorithms with Prediction Portfolios [23.703372221079306]
We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling.
For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
arXiv Detail & Related papers (2022-10-22T12:58:07Z) - Memory Bounds for the Experts Problem [53.67419690563877]
Online learning with expert advice is a fundamental problem of sequential prediction.
The goal is to process the experts' predictions and make predictions with minimum total cost.
An algorithm is judged by how well it does compared to the best expert in the set.
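For the experts setting sketched above, the standard baseline is the multiplicative-weights forecaster, whose regret against the best fixed expert is logarithmic in the number of experts. A minimal illustrative sketch (not taken from the paper, which studies the memory required by such algorithms):

```python
import math

def multiplicative_weights(expert_losses, eta=0.1):
    """Classical multiplicative-weights forecaster, a minimal sketch.

    expert_losses: list of rounds; each round is a list of expert losses in [0, 1].
    Returns (algorithm_loss, best_expert_loss) so the regret
    algorithm_loss - best_expert_loss can be inspected.
    """
    n = len(expert_losses[0])
    w = [1.0] * n
    alg_loss = 0.0
    totals = [0.0] * n
    for round_losses in expert_losses:
        s = sum(w)
        # The forecaster's expected loss is the weight-averaged expert loss.
        alg_loss += sum(wi * li for wi, li in zip(w, round_losses)) / s
        for i, li in enumerate(round_losses):
            totals[i] += li
            # Exponentially downweight experts in proportion to their loss.
            w[i] *= math.exp(-eta * li)
    return alg_loss, min(totals)
```

Note that this sketch stores one weight per expert; the paper cited above asks how much of that state can be compressed while still guaranteeing low regret.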
arXiv Detail & Related papers (2022-04-21T01:22:18Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Online Multivalid Learning: Means, Moments, and Prediction Intervals [16.75129633574157]
We present a technique for providing contextual predictions that are "multivalid" in various senses.
The resulting estimates correctly predict various statistics of the labels $y$ not just marginally.
Because our algorithms handle adversarially chosen examples, they can equally well be used to predict statistics of the residuals of arbitrary point prediction methods.
arXiv Detail & Related papers (2021-01-05T19:08:11Z) - Malicious Experts versus the multiplicative weights algorithm in online
prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability $\mu$ at each round.
The other is malicious: it knows the true outcome at each round and makes predictions so as to maximize the forecaster's loss.
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.