Predictive Edge Caching through Deep Mining of Sequential Patterns in
User Content Retrievals
- URL: http://arxiv.org/abs/2210.02657v1
- Date: Thu, 6 Oct 2022 03:24:19 GMT
- Title: Predictive Edge Caching through Deep Mining of Sequential Patterns in
User Content Retrievals
- Authors: Chen Li, Xiaoyu Wang, Tongyu Zong, Houwei Cao, Yong Liu
- Abstract summary: We propose a novel Predictive Edge Caching (PEC) system that predicts the future content popularity using fine-grained learning models.
PEC can adapt to highly dynamic content popularity, and significantly improve cache hit ratio and reduce user content retrieval latency.
- Score: 34.716416311132946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge caching plays an increasingly important role in boosting user content
retrieval performance while reducing redundant network traffic. The
effectiveness of caching ultimately hinges on the accuracy of predicting
content popularity in the near future. However, at the network edge, content
popularity can be extremely dynamic due to diverse user content retrieval
behaviors and the low degree of user multiplexing. It is challenging for
traditional reactive caching systems to keep up with the dynamic content
popularity patterns. In this paper, we propose a novel Predictive Edge Caching
(PEC) system that predicts the future content popularity using fine-grained
learning models that mine sequential patterns in user content retrieval
behaviors, and opportunistically prefetches contents predicted to be popular in
the near future using idle network bandwidth. Through extensive experiments
driven by real content retrieval traces, we demonstrate that PEC can adapt to
highly dynamic content popularity, and significantly improve cache hit ratio
and reduce user content retrieval latency over the state-of-the-art caching
policies. More broadly, our study demonstrates that edge caching performance
can be boosted by deep mining of user content retrieval behaviors.
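The predict-then-prefetch idea can be illustrated with a minimal sketch. This is not the authors' model: PEC uses fine-grained learning models, whereas the stand-in below is a simple first-order transition counter over per-user request sequences; the `SequentialPredictor` class, `prefetch` helper, and capacity parameter are all hypothetical.

```python
from collections import defaultdict, Counter

class SequentialPredictor:
    """First-order sequential-pattern model: counts content-to-content
    transitions observed in each user's request stream (a toy stand-in
    for PEC's fine-grained learning models)."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev content -> next-content counts
        self.last_seen = {}                      # user -> last requested content

    def observe(self, user, content):
        prev = self.last_seen.get(user)
        if prev is not None:
            self.transitions[prev][content] += 1
        self.last_seen[user] = content

    def predict_next(self, user, k=2):
        """Return up to k contents most likely to follow the user's last request."""
        prev = self.last_seen.get(user)
        if prev is None:
            return []
        return [c for c, _ in self.transitions[prev].most_common(k)]

def prefetch(cache, predictor, users, capacity):
    """Opportunistically fill spare cache slots with predicted-next contents,
    standing in for prefetching over idle network bandwidth."""
    for user in users:
        for content in predictor.predict_next(user):
            if len(cache) >= capacity:
                return
            cache.add(content)

# Tiny demo: two users share the sequential pattern A -> B.
pred = SequentialPredictor()
for user, seq in {"u1": ["A", "B", "A"], "u2": ["A", "B", "A"]}.items():
    for c in seq:
        pred.observe(user, c)

cache = set()
prefetch(cache, pred, ["u1", "u2"], capacity=2)
print(cache)  # both users' last request was "A", so "B" is prefetched
```

A subsequent request for "B" by either user would then be a cache hit, which is the mechanism by which mined sequential patterns lift the hit ratio.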
Related papers
- Efficient Inference of Vision Instruction-Following Models with Elastic Cache [76.44955111634545]
We introduce Elastic Cache, a novel strategy for efficient deployment of instruction-following large vision-language models.
We propose an importance-driven cache merging strategy to prune redundant caches.
For instruction encoding, we utilize the frequency to evaluate the importance of caches.
Results on a range of LVLMs demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation.
arXiv Detail & Related papers (2024-07-25T15:29:05Z)
- Semantics-enhanced Temporal Graph Networks for Content Caching and Energy Saving [21.693946854653785]
We propose a reformative temporal graph network, named STGN, that utilizes extra semantic messages to enhance the temporal and structural learning of a DGNN model.
We also propose a user-specific attention mechanism to aggregate various semantics at a fine granularity.
arXiv Detail & Related papers (2023-01-29T04:17:32Z)
- Multi-Content Time-Series Popularity Prediction with Multiple-Model Transformers in MEC Networks [34.44384973176474]
Coded/uncoded content placement in Mobile Edge Caching (MEC) has evolved to meet the significant growth of global mobile data traffic.
Most existing data-driven popularity prediction models are not suitable for the coded/uncoded content placement frameworks.
We develop a Multiple-model (hybrid) Transformer-based Edge Caching (MTEC) framework with higher generalization ability.
arXiv Detail & Related papers (2022-10-12T02:24:49Z)
- Content Popularity Prediction in Fog-RANs: A Clustered Federated Learning Based Approach [66.31587753595291]
We propose a novel mobility-aware popularity prediction policy, which integrates content popularities in terms of local users and mobile users.
For local users, the content popularity is predicted by learning the hidden representations of local users and contents.
For mobile users, the content popularity is predicted via user preference learning.
arXiv Detail & Related papers (2022-06-13T03:34:00Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN)
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
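The segment-then-combine step can be sketched briefly. This is a drastic simplification of DMAN, which uses attention over learned memory blocks; here "memory" is just per-block item counts, and the window size and blending weight `alpha` are hypothetical choices.

```python
from collections import Counter

def segment(sequence, window):
    """Split a long behavior sequence into fixed-length sub-sequences,
    mirroring DMAN's segmentation step (window size is a toy choice)."""
    return [sequence[i:i + window] for i in range(0, len(sequence), window)]

def recommend(sequence, window=3, alpha=0.5):
    """Toy stand-in for DMAN: long-term interest is aggregated over all
    sub-sequences (simple counts instead of learned memory blocks),
    short-term interest comes from the most recent sub-sequence, and
    the two scores are blended with weight alpha."""
    blocks = segment(sequence, window)
    long_term = Counter()
    for block in blocks:
        long_term.update(block)          # aggregate across all memory blocks
    short_term = Counter(blocks[-1])     # most recent behavior only
    scores = {item: alpha * short_term[item] + (1 - alpha) * long_term[item]
              for item in long_term}
    return max(scores, key=scores.get)

history = ["news", "news", "sports", "news", "sports", "sports"]
print(recommend(history))
```

With equal long-term counts, the short-term component breaks the tie toward the recently dominant item, which is the explicit short/long-term combination the summary describes.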
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- A Survey of Deep Learning for Data Caching in Edge Network [1.9798034349981157]
This paper summarizes the utilization of deep learning for data caching in edge network.
We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure.
Then, a number of key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning as well as reinforcement learning.
arXiv Detail & Related papers (2020-08-17T12:02:32Z)
- Caching Placement and Resource Allocation for Cache-Enabling UAV NOMA Networks [87.6031308969681]
This article investigates cache-enabling unmanned aerial vehicle (UAV) cellular networks with massive access capability supported by non-orthogonal multiple access (NOMA).
We formulate the long-term caching placement and resource allocation optimization problem for content delivery delay minimization as a Markov decision process (MDP).
We propose a Q-learning based caching placement and resource allocation algorithm, where the UAV learns and selects actions with a soft ε-greedy strategy to search for the optimal match between actions and states.
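The tabular Q-learning with ε-greedy exploration named above can be sketched on a toy caching MDP. This is not the paper's formulation (which covers joint placement and resource allocation): here the state is the single currently cached item, the action is which item to cache next, and the reward is 1 on a hit for the next request; all parameter values are hypothetical.

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, actions, eps):
    """Soft epsilon-greedy selection: explore uniformly with probability
    eps, otherwise exploit the current argmax of Q."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_cache(requests, contents, episodes=200, eps=0.2,
                     alpha=0.5, gamma=0.9, seed=0):
    """Toy single-slot caching MDP: state = cached item, action = item to
    cache next, reward = 1 if the action matches the next request."""
    random.seed(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        state = random.choice(contents)
        for req in requests:
            action = epsilon_greedy(Q, state, contents, eps)
            reward = 1.0 if action == req else 0.0
            # Next state is whatever we chose to cache; standard Q-update.
            best_next = max(Q[(action, a)] for a in contents)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = action
    return Q

# Requests are always for "A", so caching "A" should dominate in every state.
Q = q_learning_cache(requests=["A", "A", "A"], contents=["A", "B"])
policy = {s: max(["A", "B"], key=lambda a: Q[(s, a)]) for s in ["A", "B"]}
print(policy)
```

The exploration term keeps both actions sampled, while the learned Q-values converge to favor the action that matches the request pattern, which is the "optimal match between actions and states" the summary refers to.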
arXiv Detail & Related papers (2020-08-12T08:33:51Z)
- Reinforcement Learning for Caching with Space-Time Popularity Dynamics [61.55827760294755]
Caching is envisioned to play a critical role in next-generation networks.
To intelligently prefetch and store contents, a cache node should be able to learn what and when to cache.
This chapter presents a versatile reinforcement learning based approach for near-optimal caching policy design.
arXiv Detail & Related papers (2020-05-19T01:23:51Z)
- PA-Cache: Evolving Learning-Based Popularity-Aware Content Caching in Edge Networks [14.939950326112045]
We propose an evolving learning-based content caching policy, named PA-Cache, for edge networks.
It adaptively learns time-varying content popularity and determines which contents should be replaced when the cache is full.
We extensively evaluate the performance of our proposed PA-Cache on real-world traces from a large online video-on-demand service provider.
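Popularity-aware replacement can be illustrated with a small sketch. PA-Cache learns time-varying popularity with an evolving deep model; as a hedged stand-in, the class below tracks popularity with an exponentially weighted moving average (EWMA) and evicts the least-popular cached item when full. The class name, decay factor, and capacity are all hypothetical.

```python
class PopularityAwareCache:
    """Toy stand-in for a learned popularity model: an EWMA tracks
    time-varying request rates, and the least-popular cached item is
    evicted when the cache is full."""
    def __init__(self, capacity, decay=0.8):
        self.capacity = capacity
        self.decay = decay
        self.popularity = {}  # content -> decayed request-rate estimate
        self.cache = set()
        self.hits = 0
        self.requests = 0

    def request(self, content):
        self.requests += 1
        # Decay all estimates, then bump the requested content.
        for c in self.popularity:
            self.popularity[c] *= self.decay
        self.popularity[content] = self.popularity.get(content, 0.0) + 1.0
        if content in self.cache:
            self.hits += 1
            return True
        if len(self.cache) >= self.capacity:
            # Replace the cached item with the lowest popularity estimate.
            victim = min(self.cache, key=lambda c: self.popularity.get(c, 0.0))
            self.cache.remove(victim)
        self.cache.add(content)
        return False

cache = PopularityAwareCache(capacity=2)
for c in ["A", "B", "A", "C", "A", "B"]:
    cache.request(c)
print(f"hit ratio: {cache.hits / cache.requests:.2f}")
```

Because the EWMA decays old counts, the eviction decision tracks recent popularity rather than all-time frequency, which is the "time-varying content popularity" aspect the summary highlights.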
arXiv Detail & Related papers (2020-02-20T15:38:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.