Optimistic No-regret Algorithms for Discrete Caching
- URL: http://arxiv.org/abs/2208.06414v1
- Date: Mon, 15 Aug 2022 09:18:41 GMT
- Title: Optimistic No-regret Algorithms for Discrete Caching
- Authors: Naram Mhaisen, Abhishek Sinha, Georgios Paschos, Georgios Iosifidis
- Abstract summary: We take a systematic look at the problem of storing whole files in a cache with limited capacity in the context of optimistic learning.
We provide a universal lower bound for prediction-assisted online caching and design a suite of policies with a range of performance-complexity trade-offs.
Our results substantially improve upon all recently-proposed online caching policies, which, being unable to exploit the oracle predictions, offer only $O(\sqrt{T})$ regret.
- Score: 6.182368229968862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We take a systematic look at the problem of storing whole files in a cache
with limited capacity in the context of optimistic learning, where the caching
policy has access to a prediction oracle (provided by, e.g., a Neural Network).
The successive file requests are assumed to be generated by an adversary, and
no assumption is made on the accuracy of the oracle. In this setting, we
provide a universal lower bound for prediction-assisted online caching and
proceed to design a suite of policies with a range of performance-complexity
trade-offs. All proposed policies offer sublinear regret bounds commensurate
with the accuracy of the oracle. Our results substantially improve upon all
recently-proposed online caching policies, which, being unable to exploit the
oracle predictions, offer only $O(\sqrt{T})$ regret. In this pursuit, we
design, to the best of our knowledge, the first comprehensive optimistic
Follow-the-Perturbed leader policy, which generalizes beyond the caching
problem. We also study the problem of caching files with different sizes and
the bipartite network caching problem. Finally, we evaluate the efficacy of the
proposed policies through extensive numerical experiments using real-world
traces.
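The optimistic Follow-the-Perturbed-Leader idea at the heart of the abstract can be sketched in a few lines: perturb the cumulative request counts, add the oracle's prediction for the next request before deciding, and cache the top-$C$ scored files. The Gaussian perturbation scaled by $\sqrt{t}$ and all parameter names below are illustrative assumptions, not the paper's exact construction or tuned constants.

```python
import numpy as np

def optimistic_ftpl_cache(requests, predictions, n_files, capacity, eta=1.0, seed=0):
    """Sketch of an optimistic Follow-the-Perturbed-Leader caching policy.

    At each step the cache holds the `capacity` files with the highest
    perturbed score: cumulative request counts, plus the oracle's predicted
    next request, plus a Gaussian perturbation (illustrative schedule).
    Returns the number of cache hits over the request sequence.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_files)                   # cumulative request counts
    gamma = eta * rng.standard_normal(n_files)   # one-shot perturbation vector
    hits = 0
    for t, (req, pred) in enumerate(zip(requests, predictions), start=1):
        # Optimism: add the oracle's predicted next request before deciding.
        scores = counts + pred + np.sqrt(t) * gamma
        cache = np.argpartition(scores, -capacity)[-capacity:]
        if req in cache:
            hits += 1
        counts[req] += 1                         # observe the actual request
    return hits
```

With a perfect one-hot prediction oracle and the perturbation disabled (`eta=0.0`), the policy hits on every request, illustrating the prediction-commensurate regret the abstract describes.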
Related papers
- On the Regret of Coded Caching with Adversarial Requests [7.171698704686835]
We study the well-known coded caching problem in an online learning framework, wherein requests arrive sequentially.
We introduce a caching policy based on the Follow-The-Perturbed-Leader principle and show that for any time horizon $T$, it achieves a sub-linear regret of $\mathcal{O}(\sqrt{T})$.
arXiv Detail & Related papers (2024-09-19T01:13:03Z)
- An Online Gradient-Based Caching Policy with Logarithmic Complexity and Regret Guarantees [13.844896723580858]
We introduce a new variant of the gradient-based online caching policy that achieves groundbreaking logarithmic computational complexity.
This advancement allows us to test the policy on large-scale, real-world traces featuring millions of requests and items.
arXiv Detail & Related papers (2024-05-02T13:11:53Z)
- Minimalistic Predictions to Schedule Jobs with Online Precedence Constraints [117.8317521974783]
We consider non-clairvoyant scheduling with online precedence constraints.
An algorithm is oblivious to any job dependencies and learns about a job only if all of its predecessors have been completed.
arXiv Detail & Related papers (2023-01-30T13:17:15Z)
- A Universal Error Measure for Input Predictions Applied to Online Graph Problems [57.58926849872494]
We introduce a novel measure for quantifying the error in input predictions.
The measure captures errors due to absent predicted requests as well as unpredicted actual requests.
arXiv Detail & Related papers (2022-05-25T15:24:03Z)
- Online Caching with no Regret: Optimistic Learning via Recommendations [15.877673959068458]
We build upon the Follow-the-Regularized-Leader (FTRL) framework to include predictions for the file requests.
We extend the framework to learn and utilize the best request predictor in cases where many are available.
We prove that the proposed optimistic learning caching policies can achieve sub-zero performance loss (regret) for perfect predictions.
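A minimal sketch of the FTRL-with-predictions idea summarized above: play the fractional cache state that maximizes cumulative observed requests plus the predicted next request under an L2 regularizer, which reduces to a Euclidean projection onto the capped simplex (cache constraint). The bisection-based projection helper and the fixed regularization weight are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def project_capped_simplex(y, capacity, iters=50):
    """Euclidean projection of y onto {x in [0,1]^N : sum(x) = capacity},
    via bisection on the dual variable (illustrative helper)."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.clip(y - mu, 0.0, 1.0).sum() > capacity:
            lo = mu          # total too large: increase the dual variable
        else:
            hi = mu
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def optimistic_ftrl_caching(requests, predictions, n_files, capacity, sigma=1.0):
    """Optimistic FTRL sketch: the next cache state maximizes the cumulative
    request gradients plus the prediction, regularized by an L2 term.
    Returns the total fractional hit utility."""
    g = np.zeros(n_files)    # cumulative (one-hot) request gradients
    util = 0.0
    for req, pred in zip(requests, predictions):
        x = project_capped_simplex((g + pred) / sigma, capacity)
        util += x[req]       # fractional hit utility for this request
        g[req] += 1.0
    return util
```

Under perfect one-hot predictions the requested file is always fully cached, so the utility matches the best-in-hindsight benchmark step for step, consistent with the sub-zero-regret claim above.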
arXiv Detail & Related papers (2022-04-20T09:29:47Z)
- Online Caching with Optimistic Learning [15.877673959068458]
This paper proposes a new algorithmic toolbox for tackling this problem through the lens of optimistic online learning.
We design online caching algorithms for bipartite networks with fixed-size caches or elastic leased caches subject to time-average budget constraints.
We prove that the proposed optimistic learning caching policies can achieve sub-zero performance loss (regret) for perfect predictions, and maintain the best achievable regret bound $O(\sqrt{T})$ even for arbitrarily bad predictions.
arXiv Detail & Related papers (2022-02-22T00:04:30Z)
- Accelerating Deep Learning Classification with Error-controlled Approximate-key Caching [72.50506500576746]
We propose a novel caching paradigm that we name approximate-key caching.
While approximate cache hits alleviate the DL inference workload and increase system throughput, they introduce an approximation error.
We analytically model our caching system performance for classic LRU and ideal caches, we perform a trace-driven evaluation of the expected performance, and we compare the benefits of our proposed approach with the state-of-the-art similarity caching.
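A toy sketch of the approximate-key idea: quantize keys into coarse buckets so that nearby queries hit the same cached entry, layered on a standard LRU. All names and the quantization rule are illustrative assumptions; the paper's actual system and error model are more involved.

```python
from collections import OrderedDict

class ApproximateKeyLRU:
    """Toy approximate-key cache on top of LRU (illustrative sketch).

    Keys (e.g. feature vectors) are quantized to a coarse bucket; a lookup
    hits if any cached key falls in the same bucket, trading an
    approximation error for extra hits.
    """
    def __init__(self, capacity, resolution=1.0):
        self.capacity = capacity
        self.resolution = resolution
        self.store = OrderedDict()   # bucket -> cached value

    def _bucket(self, key):
        # Quantize each coordinate; a coarser resolution yields more
        # approximate hits but a larger approximation error.
        return tuple(round(k / self.resolution) for k in key)

    def get(self, key):
        b = self._bucket(key)
        if b in self.store:
            self.store.move_to_end(b)       # LRU refresh
            return self.store[b]            # approximate hit
        return None

    def put(self, key, value):
        b = self._bucket(key)
        self.store[b] = value
        self.store.move_to_end(b)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently used
```

For example, with `resolution=1.0`, a lookup for `(0.2, -0.3)` lands in the same bucket as a cached entry for `(0.0, 0.0)` and returns it as an approximate hit.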
arXiv Detail & Related papers (2021-12-13T13:49:11Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce 71.6% computation time with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Cocktail Edge Caching: Ride Dynamic Trends of Content Popularity with Ensemble Learning [10.930268276150262]
Edge caching will play a critical role in facilitating the emerging content-rich applications.
It faces many new challenges, in particular, the highly dynamic content popularity and the heterogeneous caching computation.
We propose Cocktail Edge Caching, which tackles the dynamic popularity and heterogeneity through ensemble learning.
arXiv Detail & Related papers (2021-01-14T21:59:04Z)
- Reinforcement Learning for Caching with Space-Time Popularity Dynamics [61.55827760294755]
Caching is envisioned to play a critical role in next-generation networks.
To intelligently prefetch and store contents, a cache node should be able to learn what and when to cache.
This chapter presents a versatile reinforcement learning based approach for near-optimal caching policy design.
arXiv Detail & Related papers (2020-05-19T01:23:51Z)
- Non-Cooperative Game Theory Based Rate Adaptation for Dynamic Video Streaming over HTTP [89.30855958779425]
Dynamic Adaptive Streaming over HTTP (DASH) has proven to be an emerging and promising multimedia streaming technique.
We propose a novel algorithm to optimally allocate the limited export bandwidth of the server to multi-users to maximize their Quality of Experience (QoE) with fairness guaranteed.
arXiv Detail & Related papers (2019-12-27T01:19:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.