Online Caching with no Regret: Optimistic Learning via Recommendations
- URL: http://arxiv.org/abs/2204.09345v1
- Date: Wed, 20 Apr 2022 09:29:47 GMT
- Title: Online Caching with no Regret: Optimistic Learning via Recommendations
- Authors: Naram Mhaisen and George Iosifidis and Douglas Leith
- Abstract summary: We build upon the Follow-the-Regularized-Leader (FTRL) framework to include predictions for the file requests.
We extend the framework to learn and utilize the best request predictor in cases where many are available.
We prove that the proposed optimistic learning caching policies can achieve sub-zero performance loss (regret) for perfect predictions.
- Score: 15.877673959068458
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The design of effective online caching policies is an increasingly important
problem for content distribution networks, online social networks and edge
computing services, among other areas. This paper proposes a new algorithmic
toolbox for tackling this problem through the lens of optimistic online
learning. We build upon the Follow-the-Regularized-Leader (FTRL) framework,
which is developed further here to include predictions for the file requests,
and we design online caching algorithms for bipartite networks with fixed-size
caches or elastic leased caches subject to time-average budget constraints. The
predictions are provided by a content recommendation system that influences the
users' viewing activity and hence can naturally reduce the caching network's
uncertainty about future requests. We also extend the framework to learn and
utilize the best request predictor in cases where many are available. We prove
that the proposed optimistic learning caching policies can achieve sub-zero
performance loss (regret) for perfect predictions, and maintain the sub-linear
regret bound $O(\sqrt{T})$, which is the best achievable bound for policies that
do not use predictions, even for arbitrarily bad predictions. The performance of
the proposed algorithms is evaluated with detailed trace-driven numerical
tests.
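To make the optimistic FTRL mechanism concrete, here is a minimal sketch of a fractional single-cache variant: a Euclidean-regularized FTRL update over the capacity-constrained cache set, where the prediction of the next request is added to the accumulated gradients before the (lazy) projection. This is an illustration of the general technique under simplifying assumptions, not the paper's exact algorithm: the bipartite network, elastic leased caches, adaptive regularizers, and step-size tuning are omitted, and the function names, fixed step size `eta`, and one-hot request/prediction encoding are choices made for the example.

```python
import numpy as np

def project_capped_simplex(y, capacity):
    """Euclidean projection onto {x : 0 <= x_i <= 1, sum_i x_i = capacity},
    found by bisection on the shift threshold tau."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(50):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > capacity:
            lo = tau          # kept too much mass: shift further down
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def optimistic_ftrl_caching(requests, predictions, n_files, capacity, eta=0.1):
    """Fractional caching with an optimistic FTRL-style update.
    `requests[t]` and `predictions[t]` are one-hot vectors of the actual
    and predicted file request at slot t."""
    grad_sum = np.zeros(n_files)   # accumulated utility gradients (past requests)
    total_hits = 0.0
    for w_t, w_hat in zip(requests, predictions):
        # commit the cache state using past gradients plus the current prediction
        x_t = project_capped_simplex(eta * (grad_sum + w_hat), capacity)
        total_hits += float(w_t @ x_t)   # fractional cache-hit utility
        grad_sum += w_t                  # the true request is revealed afterwards
    return total_hits
```

When the predictions are accurate, the committed allocation is already biased toward the file about to be requested, which is the mechanism behind the sub-zero (negative) regret result; for poor predictions the optimistic term is only a bounded perturbation of plain FTRL, consistent with the retained $O(\sqrt{T})$ bound.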
Related papers
- An Online Gradient-Based Caching Policy with Logarithmic Complexity and Regret Guarantees [13.844896723580858]
We introduce a new variant of the gradient-based online caching policy that achieves groundbreaking logarithmic computational complexity.
This advancement allows us to test the policy on large-scale, real-world traces featuring millions of requests and items.
arXiv Detail & Related papers (2024-05-02T13:11:53Z)
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns online the optimal source placement in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- Optimistic No-regret Algorithms for Discrete Caching [6.182368229968862]
We take a systematic look at the problem of storing whole files in a cache with limited capacity in the context of optimistic learning.
We provide a universal lower bound for prediction-assisted online caching and design a suite of policies with a range of performance-complexity trade-offs.
Our results substantially improve upon all recently proposed online caching policies, which, being unable to exploit the oracle predictions, offer only $O(\sqrt{T})$ regret.
arXiv Detail & Related papers (2022-08-15T09:18:41Z)
- Provably Efficient Reinforcement Learning for Online Adaptive Influence Maximization [53.11458949694947]
We consider an adaptive version of the content-dependent online influence maximization problem, where seed nodes are sequentially activated based on real-time feedback.
Our algorithm maintains a network model estimate and selects seeds adaptively, exploring the social network while optimistically improving its policy.
arXiv Detail & Related papers (2022-06-29T18:17:28Z)
- Online Caching with Optimistic Learning [15.877673959068458]
This paper proposes a new algorithmic toolbox for tackling this problem through the lens of optimistic online learning.
We design online caching algorithms for bipartite networks with fixed-size caches or elastic leased caches subject to time-average budget constraints.
We prove that the proposed optimistic learning caching policies can achieve sub-zero performance loss (regret) for perfect predictions, and maintain the best achievable regret bound $O(\sqrt{T})$ even for arbitrarily bad predictions.
arXiv Detail & Related papers (2022-02-22T00:04:30Z)
- Accelerating Deep Learning Classification with Error-controlled Approximate-key Caching [72.50506500576746]
We propose a novel caching paradigm that we name approximate-key caching.
While approximate cache hits alleviate the DL inference workload and increase system throughput, they also introduce an approximation error.
We analytically model our caching system performance for classic LRU and ideal caches, perform a trace-driven evaluation of the expected performance, and compare the benefits of our proposed approach with state-of-the-art similarity caching.
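A toy illustration of the approximate-key idea follows, assuming a simple grid quantization of feature vectors as the key-approximation step; the class name, grid parameter, and LRU eviction are choices made for this sketch, and the paper's similarity-based matching and error-control mechanism are more elaborate.

```python
from collections import OrderedDict
import numpy as np

class ApproximateKeyLRUCache:
    """Toy approximate-key cache: feature vectors that quantize to the same
    grid cell share a key, so nearby queries hit the same cached result."""

    def __init__(self, max_items, grid=0.5):
        self.store = OrderedDict()   # quantized key -> cached inference result
        self.max_items = max_items
        self.grid = grid             # coarser grid => more hits, larger approximation error

    def _key(self, features):
        return tuple(np.round(np.asarray(features) / self.grid).astype(int))

    def get(self, features):
        key = self._key(features)
        if key in self.store:
            self.store.move_to_end(key)      # refresh LRU position on a hit
            return self.store[key]
        return None                          # miss: run the DL model instead

    def put(self, features, result):
        key = self._key(features)
        self.store[key] = result
        self.store.move_to_end(key)
        if len(self.store) > self.max_items:
            self.store.popitem(last=False)   # evict the least-recently-used entry
```

Shrinking `grid` reduces the approximation error at the cost of fewer approximate hits, which mirrors the error/hit-rate trade-off the paper analyzes.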
arXiv Detail & Related papers (2021-12-13T13:49:11Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms [85.97516436641533]
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions.
The goal is to design algorithms that are both consistent and robust.
We provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions.
arXiv Detail & Related papers (2020-10-22T04:51:01Z)
- A Survey of Deep Learning for Data Caching in Edge Network [1.9798034349981157]
This paper summarizes the utilization of deep learning for data caching in edge networks.
We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure.
Then, a number of key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning as well as reinforcement learning.
arXiv Detail & Related papers (2020-08-17T12:02:32Z)