Cocktail Edge Caching: Ride Dynamic Trends of Content Popularity with
Ensemble Learning
- URL: http://arxiv.org/abs/2101.05885v1
- Date: Thu, 14 Jan 2021 21:59:04 GMT
- Title: Cocktail Edge Caching: Ride Dynamic Trends of Content Popularity with
Ensemble Learning
- Authors: Tongyu Zong, Chen Li, Yuanyuan Lei, Guangyu Li, Houwei Cao, Yong Liu
- Abstract summary: Edge caching will play a critical role in facilitating the emerging content-rich applications.
It faces many new challenges, in particular, the highly dynamic content popularity and the heterogeneous caching configurations.
We propose Cocktail Edge Caching (CEC), which tackles the dynamic popularity and heterogeneity through ensemble learning.
- Score: 10.930268276150262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge caching will play a critical role in facilitating the emerging
content-rich applications. However, it faces many new challenges, in
particular, the highly dynamic content popularity and the heterogeneous caching
configurations. In this paper, we propose Cocktail Edge Caching (CEC), which tackles
the dynamic popularity and heterogeneity through ensemble learning. Instead of
trying to find a single dominating caching policy for all the caching
scenarios, we employ an ensemble of constituent caching policies and adaptively
select the best-performing policy to control the cache. Towards this goal, we
first show through formal analysis and experiments that different variations of
the LFU and LRU policies have complementary performance in different caching
scenarios. We further develop a novel caching algorithm that enhances LFU/LRU
with deep recurrent neural network (LSTM) based time-series analysis. Finally,
we develop a deep reinforcement learning agent that adaptively combines base
caching policies according to their virtual hit ratios on parallel virtual
caches. Through extensive experiments driven by real content requests from two
large video streaming platforms, we demonstrate that CEC not only consistently
outperforms all single policies, but also improves their robustness. CEC
can be well generalized to different caching scenarios with low computation
overheads for deployment.
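The selection mechanism described in the abstract can be pictured with a small sketch. The Python below is an illustration only, not the authors' implementation: plain LRU/LFU stand in for the LSTM-enhanced base policies, and an exponentially weighted virtual hit ratio stands in for the deep reinforcement learning selector; the real cache here simply follows the currently best-scoring virtual cache, and switching costs are ignored.

```python
import random
from collections import OrderedDict, defaultdict

class LRUCache:
    """Least-recently-used eviction over a fixed-capacity cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, item):
        hit = item in self.store
        if hit:
            self.store.move_to_end(item)          # refresh recency
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)    # evict least recently used
            self.store[item] = True
        return hit

class LFUCache:
    """Least-frequently-used eviction over a fixed-capacity cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = set()
        self.freq = defaultdict(int)

    def request(self, item):
        self.freq[item] += 1
        hit = item in self.items
        if not hit:
            if len(self.items) >= self.capacity:
                victim = min(self.items, key=self.freq.__getitem__)
                self.items.remove(victim)         # evict least frequently used
            self.items.add(item)
        return hit

class EnsembleSelector:
    """Runs every base policy on its own virtual cache, tracks an exponentially
    weighted virtual hit ratio per policy, and serves each request with the
    currently best-scoring policy (a toy stand-in for the DRL agent)."""
    def __init__(self, capacity, policy_classes, decay=0.99):
        self.virtual = [cls(capacity) for cls in policy_classes]
        self.scores = [0.0] * len(self.virtual)
        self.decay = decay
        self.selected = 0

    def request(self, item):
        hits = [cache.request(item) for cache in self.virtual]
        for i, hit in enumerate(hits):
            self.scores[i] = self.decay * self.scores[i] + (1 - self.decay) * hit
        served = hits[self.selected]              # real cache follows the chosen policy
        self.selected = max(range(len(self.scores)), key=self.scores.__getitem__)
        return served

# Example: a synthetic heavy-tailed request trace; report the ensemble hit ratio.
random.seed(0)
trace = [min(int(random.paretovariate(1.2)), 500) for _ in range(20000)]
cec = EnsembleSelector(capacity=50, policy_classes=[LRUCache, LFUCache])
hit_ratio = sum(cec.request(x) for x in trace) / len(trace)
print(f"ensemble hit ratio: {hit_ratio:.3f}")
```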
Related papers
- Efficient Inference of Vision Instruction-Following Models with Elastic Cache [76.44955111634545]
We introduce Elastic Cache, a novel strategy for efficient deployment of instruction-following large vision-language models.
We propose an importance-driven cache merging strategy to prune redundant caches.
For instruction encoding, we utilize the frequency to evaluate the importance of caches.
Results on a range of LVLMs demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation.
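A loose sketch of the importance-driven merge-and-prune idea mentioned above, with an accumulated frequency score as the importance measure (an assumption for this sketch; the actual Elastic Cache criteria and merge rule may differ):

```python
import numpy as np

def merge_prune_kv(keys, values, importance, keep_ratio=0.5):
    """Illustrative importance-driven KV-cache pruning with merging.
    keys, values: (seq_len, dim) arrays; importance: (seq_len,) scores
    (e.g., accumulated attention frequency -- an assumption of this sketch).
    Low-importance entries are not simply dropped: each is averaged into the
    nearest kept entry so its information is partially retained."""
    seq_len = keys.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])        # entries to keep, in order
    drop = np.setdiff1d(np.arange(seq_len), keep)

    new_keys, new_vals = keys[keep].copy(), values[keep].copy()
    counts = np.ones(k)
    for d in drop:
        j = np.argmin(np.abs(keep - d))                # nearest kept position
        new_keys[j] = (new_keys[j] * counts[j] + keys[d]) / (counts[j] + 1)
        new_vals[j] = (new_vals[j] * counts[j] + values[d]) / (counts[j] + 1)
        counts[j] += 1
    return new_keys, new_vals

# Example: prune a 16-token cache of 8-dimensional keys/values down to 8 entries.
rng = np.random.default_rng(0)
K, V = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
scores = rng.random(16)
Kp, Vp = merge_prune_kv(K, V, scores, keep_ratio=0.5)
print(Kp.shape, Vp.shape)   # (8, 8) (8, 8)
```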
arXiv Detail & Related papers (2024-07-25T15:29:05Z)
- Attention-Enhanced Prioritized Proximal Policy Optimization for Adaptive Edge Caching [4.2579244769567675]
We introduce a Proximal Policy Optimization (PPO)-based caching strategy that fully considers file attributes like lifetime, size, and priority.
Our method outperforms a recent Deep Reinforcement Learning-based technique.
arXiv Detail & Related papers (2024-02-08T17:17:46Z)
- A Learning-Based Caching Mechanism for Edge Content Delivery [2.412158290827225]
With 5G networks and the rise of the Internet of Things (IoT), content delivery is increasingly extending into the network edge.
This shift introduces unique challenges, particularly due to the limited cache storage and the diverse request patterns at the edge.
We introduce HR-Cache, a learning-based caching framework grounded in the principles of Hazard Rate (HR) ordering.
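For background, hazard-rate ordering ranks objects by the instantaneous re-request likelihood implied by their inter-request time distribution; the standard definition (stated here for context, not quoted from the paper) is:

```latex
% Hazard rate of an object's inter-request time T with density f and CDF F:
% the instantaneous re-request probability at age t, given no request since the last one.
h(t) = \frac{f(t)}{1 - F(t)}
     = \lim_{\Delta t \to 0} \frac{\Pr\{\, t \le T < t + \Delta t \mid T \ge t \,\}}{\Delta t}
```

Objects with a higher current hazard rate are favored for caching under this ordering.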
arXiv Detail & Related papers (2024-02-05T08:06:03Z)
- Optimistic No-regret Algorithms for Discrete Caching [6.182368229968862]
We take a systematic look at the problem of storing whole files in a cache with limited capacity in the context of optimistic learning.
We provide a universal lower bound for prediction-assisted online caching and design a suite of policies with a range of performance-complexity trade-offs.
Our results substantially improve upon all recently-proposed online caching policies, which, being unable to exploit the oracle predictions, offer only $O(\sqrt{T})$ regret.
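For reference, the regret behind the $O(\sqrt{T})$ figure is usually measured against the best fixed cache configuration in hindsight; a common formulation from the online-caching literature (given here for context, not quoted from the paper) is:

```latex
% Regret of an online policy choosing cache configurations y_1,...,y_T from the
% feasible set Y (capacity constraint), where f_t(y) is the caching gain at slot t.
R_T = \max_{y \in \mathcal{Y}} \sum_{t=1}^{T} f_t(y) \;-\; \sum_{t=1}^{T} f_t(y_t)
```

The summary above states that policies unable to use oracle predictions achieve $R_T = O(\sqrt{T})$, while the proposed optimistic policies improve substantially on this by exploiting the predictions.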
arXiv Detail & Related papers (2022-08-15T09:18:41Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Accelerating Deep Learning Classification with Error-controlled Approximate-key Caching [72.50506500576746]
We propose a novel caching paradigm that we name approximate-key caching.
While approximate cache hits alleviate the DL inference workload and increase system throughput, they introduce an approximation error.
We analytically model our caching system performance for classic LRU and ideal caches, we perform a trace-driven evaluation of the expected performance, and we compare the benefits of our proposed approach with the state-of-the-art similarity caching.
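A minimal sketch of the approximate-key mechanism, under simplifying assumptions (Euclidean distance between feature keys, a fixed error threshold, and LRU eviction; none of these choices are claimed to be the paper's exact design):

```python
from collections import OrderedDict
import numpy as np

class ApproximateKeyCache:
    """Serve a request from the cached result of the closest stored key if it is
    within `tol`; otherwise run the (expensive) model and cache the new result.
    Approximate hits trade a bounded approximation error for fewer inferences."""
    def __init__(self, capacity, tol, model):
        self.capacity, self.tol, self.model = capacity, tol, model
        self.store = OrderedDict()                       # key tuple -> cached prediction (LRU order)

    def query(self, x):
        x = np.asarray(x, dtype=float)
        if self.store:
            keys = np.array(list(self.store.keys()))
            dists = np.linalg.norm(keys - x, axis=1)
            i = int(np.argmin(dists))
            if dists[i] <= self.tol:                     # approximate hit
                k = tuple(keys[i])
                self.store.move_to_end(k)
                return self.store[k], True
        y = self.model(x)                                # miss: run DL inference
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)               # classic LRU eviction
        self.store[tuple(x)] = y
        return y, False

# Example with a stand-in "model": classify by the sign of the feature sum.
cache = ApproximateKeyCache(capacity=100, tol=1.0, model=lambda v: int(v.sum() > 0))
rng = np.random.default_rng(1)
hits = sum(cache.query(rng.normal(size=4))[1] for _ in range(1000))
print(f"approximate hit rate: {hits / 1000:.2f}")
```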
arXiv Detail & Related papers (2021-12-13T13:49:11Z)
- Temporal-attentive Covariance Pooling Networks for Video Recognition [52.853765492522655]
Existing video architectures usually generate a global representation using a simple global average pooling (GAP) method.
This paper proposes a Temporal-attentive Covariance Pooling (TCP), inserted at the end of deep architectures, to produce powerful video representations.
Our TCP is model-agnostic and can be flexibly integrated into any video architectures, resulting in TCPNet for effective video recognition.
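To make the contrast with GAP concrete, here is a generic second-order (covariance) pooling sketch in NumPy; it is not the paper's temporal-attentive TCP module, just the plain covariance pooling it builds on:

```python
import numpy as np

def gap(features):
    """Global average pooling: (N, C) spatio-temporal features -> (C,) first-order stats."""
    return features.mean(axis=0)

def covariance_pool(features, eps=1e-5):
    """Second-order pooling: (N, C) features -> (C, C) covariance capturing
    channel co-activations that GAP discards; the upper triangle is typically
    flattened into the final video representation."""
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (features.shape[0] - 1)
    return cov + eps * np.eye(features.shape[1])    # regularize for numerical stability

# Example: 8 frames x 49 spatial positions of 64-channel features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8 * 49, 64))
print(gap(feats).shape, covariance_pool(feats).shape)   # (64,) (64, 64)
```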
arXiv Detail & Related papers (2021-10-27T12:31:29Z)
- A Survey of Deep Learning for Data Caching in Edge Network [1.9798034349981157]
This paper summarizes the utilization of deep learning for data caching in edge networks.
We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure.
Then, a number of key deep learning algorithms are presented, ranging from supervised and unsupervised learning to reinforcement learning.
arXiv Detail & Related papers (2020-08-17T12:02:32Z)
- Caching Placement and Resource Allocation for Cache-Enabling UAV NOMA Networks [87.6031308969681]
This article investigates cache-enabling unmanned aerial vehicle (UAV) cellular networks with massive access capability supported by non-orthogonal multiple access (NOMA).
We formulate the long-term caching placement and resource allocation optimization problem for content delivery delay minimization as a Markov decision process (MDP).
We propose a Q-learning based caching placement and resource allocation algorithm, where the UAV learns and selects actions with a soft $\varepsilon$-greedy strategy to search for the optimal match between actions and states.
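A minimal tabular Q-learning loop with a soft $\varepsilon$-greedy selection, to illustrate the mechanism named above; the UAV caching-placement and resource-allocation state/action spaces are abstracted into a toy placeholder environment, and the softmax-weighted exploration is an assumption of this sketch:

```python
import numpy as np

def soft_epsilon_greedy(q_row, epsilon, rng):
    """With probability epsilon explore; otherwise exploit the greedy action.
    Exploration is softmax-weighted by Q ("soft"), an assumption for this sketch."""
    if rng.random() < epsilon:
        p = np.exp(q_row - q_row.max())
        return int(rng.choice(len(q_row), p=p / p.sum()))
    return int(np.argmax(q_row))

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over an MDP: the table maps (e.g., request/cache) states
    to (e.g., placement/bandwidth) actions and is updated from observed rewards."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(50):                              # finite-horizon episode
            a = soft_epsilon_greedy(Q[s], epsilon, rng)
            s_next, r = env_step(s, a, rng)              # placeholder environment
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

# Toy environment: reward is higher when the action "matches" the state, a stand-in
# for lower content-delivery delay when placement matches the request pattern.
def toy_env(s, a, rng):
    r = 1.0 if a == s % 4 else 0.0
    return int(rng.integers(8)), r

Q = q_learning(toy_env, n_states=8, n_actions=4)
print(Q.round(2))
```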
arXiv Detail & Related papers (2020-08-12T08:33:51Z)
- Reinforcement Learning for Caching with Space-Time Popularity Dynamics [61.55827760294755]
Caching is envisioned to play a critical role in next-generation networks.
To intelligently prefetch and store contents, a cache node should be able to learn what and when to cache.
This chapter presents a versatile reinforcement learning based approach for near-optimal caching policy design.
arXiv Detail & Related papers (2020-05-19T01:23:51Z)