Robust Learning-Augmented Caching: An Experimental Study
- URL: http://arxiv.org/abs/2106.14693v1
- Date: Mon, 28 Jun 2021 13:15:07 GMT
- Title: Robust Learning-Augmented Caching: An Experimental Study
- Authors: Jakub Chłędowski, Adam Polak, Bartosz Szabucki, Konrad Zolna
- Abstract summary: The key optimization problem arising in caching cannot be optimally solved without knowing the future.
The new field of learning-augmented algorithms proposes solutions that leverage classical online caching algorithms to make machine-learned predictors robust.
We show that a straightforward switching method has only a low overhead over a well-performing predictor.
- Score: 8.962235853317996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective caching is crucial for the performance of modern-day computing
systems. A key optimization problem arising in caching -- which item to evict
to make room for a new item -- cannot be optimally solved without knowing the
future. There are many classical approximation algorithms for this problem, but
more recently researchers started to successfully apply machine learning to
decide what to evict by discovering implicit input patterns and predicting the
future. While machine learning typically does not provide any worst-case
guarantees, the new field of learning-augmented algorithms proposes solutions
that leverage classical online caching algorithms to make the machine-learned
predictors robust. We are the first to comprehensively evaluate these
learning-augmented algorithms on real-world caching datasets and
state-of-the-art machine-learned predictors. We show that a straightforward
method -- blindly following either a predictor or a classical robust algorithm,
and switching whenever one becomes worse than the other -- has only a low
overhead over a well-performing predictor, while competing with classical
methods when the coupled predictor fails, thus providing a cheap worst-case
insurance.
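The switching method described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes LRU as the robust baseline and a hypothetical next-use-time predictor, and it tracks only miss counts, whereas a full implementation would also need to reconcile cache contents when switching between the two policies.

```python
from collections import OrderedDict

def lru_misses(requests, capacity):
    """Simulate LRU; return the cumulative miss count after each request."""
    cache, misses, out = OrderedDict(), 0, []
    for item in requests:
        if item in cache:
            cache.move_to_end(item)          # refresh recency on a hit
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least-recently-used
            cache[item] = True
        out.append(misses)
    return out

def predictor_misses(requests, capacity, predict_next_use):
    """Simulate blindly trusting a next-use-time predictor: on a miss with a
    full cache, evict the item predicted to be requested furthest in the
    future (Belady's rule applied to the predictions)."""
    cache, misses, out = set(), 0, []
    for t, item in enumerate(requests):
        if item not in cache:
            misses += 1
            if len(cache) >= capacity:
                victim = max(cache, key=lambda x: predict_next_use(x, t))
                cache.remove(victim)
            cache.add(item)
        out.append(misses)
    return out

def switching_misses(requests, capacity, predict_next_use):
    """Blindly follow one policy and switch whenever it falls behind the
    other, charging each request to the current leader's miss count."""
    a = lru_misses(requests, capacity)                          # robust baseline
    b = predictor_misses(requests, capacity, predict_next_use)  # predictor
    leader, total, prev_a, prev_b = "b", 0, 0, 0
    for ca, cb in zip(a, b):
        total += (ca - prev_a) if leader == "a" else (cb - prev_b)
        prev_a, prev_b = ca, cb
        if leader == "a" and ca > cb:   # baseline fell behind: trust predictor
            leader = "b"
        elif leader == "b" and cb > ca: # predictor fell behind: fall back
            leader = "a"
    return total

# Demo with a perfect predictor (the true future, known here for illustration):
reqs = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]

def oracle(x, t):
    for s in range(t + 1, len(reqs)):
        if reqs[s] == x:
            return s
    return float("inf")

print(lru_misses(reqs, 2)[-1])              # 10 misses for LRU
print(predictor_misses(reqs, 2, oracle)[-1])  # 7 misses trusting the oracle
print(switching_misses(reqs, 2, oracle))      # 7: never needs to switch
```

With a perfect predictor the switcher simply tracks it; with a bad predictor it would migrate to the LRU baseline after the predictor's cumulative misses overtake LRU's, which is the "cheap worst-case insurance" the abstract refers to.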
Related papers
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- Algorithms with Prediction Portfolios [23.703372221079306]
We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling.
For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
arXiv Detail & Related papers (2022-10-22T12:58:07Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desirable properties, admit a natural error measure, and yield algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- Parsimonious Learning-Augmented Caching [29.975391787684966]
We introduce and study the setting in which the learning-augmented algorithm can utilize the predictions parsimoniously.
We show that one can achieve quantitatively similar results using only a sublinear number of predictions.
arXiv Detail & Related papers (2022-02-09T03:40:11Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- A Novel Prediction Setup for Online Speed-Scaling [3.3440413258080577]
It is fundamental to incorporate energy considerations when designing (scheduling) algorithms.
This paper attempts to obtain the best of both worlds for the classical, deadline based, online speed-scaling problem.
arXiv Detail & Related papers (2021-12-06T14:46:20Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.