Learning Forward Reuse Distance
- URL: http://arxiv.org/abs/2007.15859v1
- Date: Fri, 31 Jul 2020 05:57:50 GMT
- Title: Learning Forward Reuse Distance
- Authors: Pengcheng Li, Yongbin Gu
- Abstract summary: Recent advancement of deep learning techniques enables the design of novel intelligent cache replacement policies.
We find that a powerful LSTM-based recurrent neural network model can provide high prediction accuracy based on only a cache trace as input.
Results demonstrate that the new cache policy improves on state-of-the-art practical policies by up to 19.2% and incurs only a 2.3% higher miss ratio than OPT on average.
- Score: 1.8777512961936749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Caching techniques are widely used in the era of cloud computing, from
applications such as Web caches to infrastructure such as Memcached and memory
caches in computer architectures. Prediction of cached data can greatly help
improve cache management and performance. The recent advancement of deep
learning techniques enables the design of novel intelligent cache replacement
policies. In this work, we propose a learning-aided approach to predict future
data accesses. We find that a powerful LSTM-based recurrent neural network
model can provide high prediction accuracy based on only a cache trace as
input. The high accuracy results from a carefully crafted locality-driven
feature design. Inspired by the high prediction accuracy, we propose a pseudo
OPT policy and evaluate it upon 13 real-world storage workloads from Microsoft
Research. Results demonstrate that the new cache policy improves on state-of-the-art
practical policies by up to 19.2% and incurs only a 2.3% higher miss ratio than
OPT on average.
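The two ideas in the abstract, forward reuse distance as the prediction target and a pseudo-OPT eviction rule driven by those predictions, can be illustrated with a short sketch. This is illustrative only, not the paper's implementation; it uses one common definition of reuse distance (the number of distinct blocks accessed between consecutive references to the same block), and `predicted_next_use` stands in for the LSTM's output.

```python
def forward_reuse_distances(trace):
    """For each access, the number of distinct blocks referenced before
    the same block is accessed again (inf if it is never reused)."""
    dists = [float("inf")] * len(trace)
    last_pos = {}  # block -> index of its most recent access
    for i, block in enumerate(trace):
        if block in last_pos:
            j = last_pos[block]
            # distinct blocks touched strictly between the two accesses
            dists[j] = len(set(trace[j + 1:i]))
        last_pos[block] = i
    return dists

def evict_pseudo_opt(cache, predicted_next_use):
    """Belady-style rule with predictions standing in for the oracle:
    evict the cached block predicted to be reused farthest in the future."""
    victim = max(cache, key=lambda b: predicted_next_use.get(b, float("inf")))
    cache.remove(victim)
    return victim
```

With a perfect predictor this rule coincides with OPT; the paper's 2.3% average gap to OPT reflects the remaining prediction error.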
Related papers
- Efficient Inference of Vision Instruction-Following Models with Elastic Cache [76.44955111634545]
We introduce Elastic Cache, a novel strategy for efficient deployment of instruction-following large vision-language models.
We propose an importance-driven cache merging strategy to prune redundancy caches.
For instruction encoding, we utilize access frequency to evaluate the importance of caches.
Results on a range of LVLMs demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation.
arXiv Detail & Related papers (2024-07-25T15:29:05Z)
- A Learning-Based Caching Mechanism for Edge Content Delivery [2.412158290827225]
5G networks and the rise of the Internet of Things (IoT) are increasingly extending into the network edge.
This shift introduces unique challenges, particularly due to the limited cache storage and the diverse request patterns at the edge.
We introduce HR-Cache, a learning-based caching framework grounded in the principles of Hazard Rate (HR) ordering.
arXiv Detail & Related papers (2024-02-05T08:06:03Z)
- Accelerating Deep Learning Classification with Error-controlled Approximate-key Caching [72.50506500576746]
We propose a novel caching paradigm, that we named approximate-key caching.
While approximate cache hits alleviate the DL inference workload and increase system throughput, they introduce an approximation error.
We analytically model our caching system's performance for classic LRU and ideal caches, perform a trace-driven evaluation of the expected performance, and compare the benefits of our proposed approach with state-of-the-art similarity caching.
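As a point of reference for the trace-driven LRU evaluation mentioned above, a minimal LRU simulator that replays a key trace and reports the miss ratio can be sketched as follows (an illustrative baseline, not the paper's analytical model):

```python
from collections import OrderedDict

def lru_miss_ratio(trace, capacity):
    """Replay a key trace through an LRU cache of the given capacity
    and return the fraction of accesses that miss."""
    cache = OrderedDict()
    misses = 0
    for key in trace:
        if key in cache:
            cache.move_to_end(key)         # mark as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[key] = True
    return misses / len(trace)
```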
arXiv Detail & Related papers (2021-12-13T13:49:11Z)
- DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm jointly searching for them.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% Top-1 accuracy on ImageNet dataset, showing the outstanding performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z)
- DEAP Cache: Deep Eviction Admission and Prefetching for Cache [1.201626478128059]
We propose an end-to-end pipeline to learn all three policies (eviction, admission, and prefetching) using machine learning.
We take inspiration from the success of pretraining on large corpora to learn specialized embeddings for the task.
We present our approach as a "proof of concept" of learning all three components of cache strategies using machine learning.
arXiv Detail & Related papers (2020-09-19T10:23:15Z)
- A Survey of Deep Learning for Data Caching in Edge Network [1.9798034349981157]
This paper summarizes the utilization of deep learning for data caching in edge network.
We first outline the typical research topics in content caching and formulate a taxonomy based on network hierarchical structure.
Then, a number of key types of deep learning algorithms are presented, ranging from supervised learning to unsupervised learning as well as reinforcement learning.
arXiv Detail & Related papers (2020-08-17T12:02:32Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- An Imitation Learning Approach for Cache Replacement [23.03767134871606]
We propose an imitation learning approach to automatically learn cache access patterns.
We train a policy conditioned only on past accesses that accurately approximates Belady's even on diverse and complex access patterns.
Parrot increases cache hit rates by 20% over the current state of the art.
arXiv Detail & Related papers (2020-06-29T17:58:40Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weight as random variables, modeled by Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimization.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- Reinforcement Learning for Caching with Space-Time Popularity Dynamics [61.55827760294755]
Caching is envisioned to play a critical role in next-generation networks.
To intelligently prefetch and store contents, a cache node should be able to learn what and when to cache.
This chapter presents a versatile reinforcement learning based approach for near-optimal caching policy design.
arXiv Detail & Related papers (2020-05-19T01:23:51Z)
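The "learn what and when to cache" idea from the entry above can be sketched as a tiny value learner: each step the node prefetches one item, receives reward 1 if the next request hits it, and updates that item's value estimate. Everything here (the epsilon-greedy setup, the names, the parameters) is an illustrative assumption, not the chapter's actual method.

```python
import random

def learn_caching_policy(requests, items, eps=0.1, alpha=0.2, seed=0):
    """Epsilon-greedy value learner for a one-slot prefetch cache:
    reward 1 on a hit, 0 otherwise; q[item] tracks the estimated
    hit value of prefetching each item."""
    rng = random.Random(seed)
    q = {item: 0.0 for item in items}
    hits = 0.0
    for req in requests:
        if rng.random() < eps:
            choice = rng.choice(items)       # explore
        else:
            choice = max(items, key=q.get)   # exploit current estimates
        reward = 1.0 if choice == req else 0.0
        hits += reward
        q[choice] += alpha * (reward - q[choice])  # incremental update
    return q, hits / len(requests)
```

On a popularity-skewed request stream, the estimates converge so that the most popular item is the one kept cached, which is the behavior the full RL formulations generalize to space-time-varying popularity.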
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.