Metrical Task Systems with Online Machine Learned Advice
- URL: http://arxiv.org/abs/2012.09394v1
- Date: Thu, 17 Dec 2020 04:56:51 GMT
- Title: Metrical Task Systems with Online Machine Learned Advice
- Authors: Kevin Rao
- Abstract summary: We show that augmenting online algorithms with a machine learned predictor can provably decrease the competitive ratio as long as the predictor is suitably accurate.
We focus on the specific class of uniform task systems on $n$ tasks, for which the best deterministic algorithm is $O(n)$ competitive and the best randomized algorithm is $O(\log n)$ competitive.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning algorithms are designed to make accurate predictions of the
future based on existing data, while online algorithms seek to bound some
performance measure (typically the competitive ratio) without knowledge of the
future. Lykouris and Vassilvitskii demonstrated that augmenting online
algorithms with a machine learned predictor can provably decrease the
competitive ratio as long as the predictor is suitably accurate.
In this work we apply this idea to the Online Metrical Task System problem,
which was put forth by Borodin, Linial, and Saks as a general model for dynamic
systems processing tasks in an online fashion. We focus on the specific class
of uniform task systems on $n$ tasks, for which the best deterministic
algorithm is $O(n)$ competitive and the best randomized algorithm is $O(\log
n)$ competitive.
By giving an online algorithm access to a machine learned oracle with
absolute predictive error bounded above by $\eta_0$, we construct a
$\Theta(\min(\sqrt{\eta_0}, \log n))$ competitive algorithm for the uniform
case of the metrical task systems problem. We also give a $\Theta(\log
\sqrt{\eta_0})$ lower bound on the competitive ratio of any randomized
algorithm.
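The $\Theta(\min(\sqrt{\eta_0}, \log n))$ guarantee rests on a common learning-augmented pattern: follow the predictor while it stays cheap, and fall back to a robust baseline once following it becomes expensive. The sketch below is a minimal illustration of that generic switching idea for a uniform task system (every state change costs 1). It is not the paper's algorithm; the function `combine`, its parameters, and the doubling-style switching threshold are all illustrative assumptions.

```python
# Hedged sketch: combining a predictor-following strategy with a robust
# fallback for a uniform metrical task system (every move costs 1).
# Illustrative only -- this is the generic "switch when the trusted
# strategy gets too expensive" idea, not the paper's construction.

def combine(cost_rows, predictions, robust_states, switch_factor=2.0):
    """cost_rows[t][s]:    service cost of state s at step t.
    predictions[t]:        state suggested by the ML predictor at step t.
    robust_states[t]:      state chosen by a robust baseline at step t.
    Returns the total cost incurred (service costs + unit moving costs)."""
    total = 0.0
    spent_on_active = 0.0            # cost charged to the active strategy
    budget = 1.0                     # threshold before we switch
    use_predictor = True             # start by trusting the predictor
    state = predictions[0] if predictions else None
    for t, row in enumerate(cost_rows):
        target = predictions[t] if use_predictor else robust_states[t]
        if target != state:          # uniform metric: every move costs 1
            total += 1.0
            spent_on_active += 1.0
            state = target
        total += row[state]          # pay the service cost of this step
        spent_on_active += row[state]
        if spent_on_active > budget:           # active strategy too costly:
            use_predictor = not use_predictor  # hand control to the other one
            spent_on_active = 0.0
            budget *= switch_factor            # doubling-style threshold
    return total
```

With a perfect predictor the scheme never leaves the predicted (zero-cost) states; with a bad predictor the geometric budget limits how much is wasted before control passes to the robust baseline, which is the intuition behind consistency-robustness tradeoffs.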
Related papers
- Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
arXiv Detail & Related papers (2024-06-07T12:07:16Z) - Mixing predictions for online metric algorithms [34.849039387367455]
We design algorithms that combine predictions and are competitive against such dynamic combinations.
Our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time.
An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
arXiv Detail & Related papers (2023-04-04T13:18:00Z) - Streaming Algorithms for Learning with Experts: Deterministic Versus
Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times)
The goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
arXiv Detail & Related papers (2023-03-03T04:39:53Z) - Solving the Online Assignment Problem with Machine Learned Advice [0.0]
We provide an online algorithm for the online assignment problem by simulating a machine learning algorithm that predicts the whole input in advance.
We show that as the Machine Learning prediction error increases, the solution quality decreases.
arXiv Detail & Related papers (2022-08-08T09:54:22Z) - Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - Choosing the Right Algorithm With Hints From Complexity Theory [16.33500498939925]
We show that the Metropolis algorithm is clearly the best of all algorithms considered for reasonable problem sizes.
An artificial algorithm of this type having an $O(n \log n)$ runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem in time $O(n \log n)$ with high probability.
arXiv Detail & Related papers (2021-09-14T11:12:32Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online
Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z) - Fast Learning for Renewal Optimization in Online Task Scheduling [18.935043109084543]
This paper considers online optimization of a renewal-reward system.
The probability distribution for the task type vector is unknown.
The proposed algorithm uses an auxiliary variable that is updated according to a classic Robbins-Monro iteration.
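The Robbins-Monro iteration mentioned above is a classic stochastic approximation scheme: find a root of $f(x) = \mathbb{E}[F(x, \text{noise})]$ from noisy evaluations alone, using diminishing step sizes. The sketch below shows the textbook form with step sizes $a_t = c/t$; it is a hedged illustration of the classic iteration, not the paper's auxiliary-variable update.

```python
import random

# Hedged sketch of the classic Robbins-Monro stochastic approximation.
# Given only noisy evaluations noisy_f(x) of an increasing function f,
# the iterate x_{t+1} = x_t - a_t * noisy_f(x_t) with step sizes
# a_t = c / t converges to the root of f under standard conditions.

def robbins_monro(noisy_f, x0, c=1.0, steps=10000, seed=0):
    rng = random.Random(seed)        # fixed seed for reproducibility
    x = x0
    for t in range(1, steps + 1):
        a_t = c / t                  # diminishing, non-summable step size
        x -= a_t * noisy_f(x, rng)   # move against the noisy estimate
    return x

# Example use: f(x) = x - 2 observed with additive Gaussian noise;
# the root is x = 2, and the iterate settles near it.
# est = robbins_monro(lambda x, rng: (x - 2.0) + rng.gauss(0, 0.1), x0=0.0)
```

The $a_t = c/t$ schedule satisfies the usual Robbins-Monro conditions ($\sum_t a_t = \infty$, $\sum_t a_t^2 < \infty$), which is what lets the noise average out while the iterate still reaches the root.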
arXiv Detail & Related papers (2020-07-18T22:44:13Z) - Online Page Migration with ML Advice [26.929268665630342]
We consider online algorithms for the page migration problem that use predictions, potentially imperfect, to improve their performance.
We show that if the algorithm is given a prediction of the input sequence, then it can achieve a competitive ratio that tends to $1$.
Our result adds to the recent body of work that uses machine learning to improve the performance of "classic" algorithms.
arXiv Detail & Related papers (2020-06-09T03:15:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.