BackCache: Mitigating Contention-Based Cache Timing Attacks by Hiding Cache Line Evictions
- URL: http://arxiv.org/abs/2304.10268v5
- Date: Wed, 12 Jun 2024 02:07:44 GMT
- Title: BackCache: Mitigating Contention-Based Cache Timing Attacks by Hiding Cache Line Evictions
- Authors: Quancheng Wang, Xige Zhang, Han Wang, Yuzhe Gu, Ming Tang
- Abstract summary: L1 data cache attacks pose a significant privacy and confidentiality threat.
BackCache always achieves cache hits instead of cache misses to mitigate contention-based cache timing attacks on the L1 data cache.
BackCache places the evicted cache lines from the L1 data cache into a fully-associative backup cache to hide the evictions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Caches are used to reduce the speed differential between the CPU and memory to improve the performance of modern processors. However, attackers can use contention-based cache timing attacks to steal sensitive information from victim processes through carefully designed cache eviction sets. Attacks on the L1 data cache are widely exploited and pose a significant privacy and confidentiality threat. Existing hardware-based countermeasures mainly focus on cache partitioning, randomization, and cache line flushing, which unfortunately either incur high overhead or can be circumvented by sophisticated attacks. In this paper, we propose a novel hardware-software co-design called BackCache with the idea of always achieving cache hits instead of cache misses to mitigate contention-based cache timing attacks on the L1 data cache. BackCache places the evicted cache lines from the L1 data cache into a fully-associative backup cache to hide the evictions. To improve the security of BackCache, we introduce a randomly used replacement policy (RURP) and a dynamic backup cache resizing mechanism. We also present a theoretical security analysis to demonstrate the effectiveness of BackCache. Our evaluation on the gem5 simulator shows that BackCache degrades performance by only 2.61%, 2.66%, and 3.36% for OS kernel, single-thread, and multi-thread benchmarks, respectively.
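To make the mechanism concrete, here is a minimal Python sketch of the idea; the sizes, tag model, and the RURP-like random eviction below are simplifying assumptions, not the paper's hardware design:

```python
import random

class BackCacheSketch:
    """Toy model of the BackCache idea: lines evicted from the L1 data cache
    are hidden in a fully-associative backup cache, so a probe that would
    normally observe a miss after contention still sees a hit."""

    def __init__(self, l1_sets=64, l1_ways=8, backup_size=32):
        self.l1 = {s: [] for s in range(l1_sets)}  # per-set list of tags
        self.l1_sets, self.l1_ways = l1_sets, l1_ways
        self.backup = []                           # fully-associative tags
        self.backup_size = backup_size

    def access(self, addr):
        s, tag = addr % self.l1_sets, addr         # whole address as the tag
        if tag in self.l1[s]:
            return "L1 hit"
        if tag in self.backup:                     # hidden eviction: still a hit
            self.backup.remove(tag)
            self._fill(s, tag)
            return "backup hit"
        self._fill(s, tag)
        return "miss"

    def _fill(self, s, tag):
        if len(self.l1[s]) >= self.l1_ways:
            victim = self.l1[s].pop(0)             # evict the oldest L1 line
            if len(self.backup) >= self.backup_size:
                # RURP-like step: evict a randomly chosen backup entry
                self.backup.pop(random.randrange(len(self.backup)))
            self.backup.append(victim)             # hide the eviction
        self.l1[s].append(tag)
```

In this toy model, an attacker's eviction set no longer forces observable misses on the victim's lines until the backup cache itself overflows, which is the property the dynamic resizing mechanism is meant to protect.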
Related papers
- Auditing Prompt Caching in Language Model APIs [77.02079451561718]
We investigate the privacy leakage caused by prompt caching in large language models (LLMs).
We detect global cache sharing across users in seven API providers, including OpenAI.
We find evidence that OpenAI's embedding model is a decoder-only Transformer, which was previously not publicly known.
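As a rough sketch of the timing side of such an audit (the `send_prompt` helper below is a hypothetical stand-in for a real API client, and the 0.5x threshold is an assumption; the paper uses a proper statistical test):

```python
import statistics
import time

def send_prompt(prompt):
    """Hypothetical helper: issue one API request and return its
    time-to-first-token in seconds. Replace with a real client call."""
    raise NotImplementedError

def audit_prompt_caching(prompt, trials=25, gap_s=0.05):
    # The first request should populate the provider's prompt cache, if any;
    # repeats of the same prefix should then come back noticeably faster.
    cold = send_prompt(prompt)
    warm = []
    for _ in range(trials):
        time.sleep(gap_s)
        warm.append(send_prompt(prompt))
    # Crude decision rule: flag caching when warm latencies drop well below
    # the cold latency.
    return statistics.median(warm) < 0.5 * cold
```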
arXiv Detail & Related papers (2025-02-11T18:58:04Z) - FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality [58.80996741843102]
FasterCache is a training-free strategy designed to accelerate the inference of video diffusion models with high-quality generation.
We show that FasterCache can significantly accelerate video generation while keeping video quality comparable to the baseline.
arXiv Detail & Related papers (2024-10-25T07:24:38Z) - RollingCache: Using Runtime Behavior to Defend Against Cache Side Channel Attacks [2.9221371172659616]
We present RollingCache, a cache design that defends against contention attacks by dynamically changing the set of addresses contending for cache sets.
RollingCache does not rely on address encryption/decryption, data relocation, or cache partitioning.
Our solution does not depend on having defined security domains, and can defend against an attacker running on the same or another core.
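A toy model of the rolling idea, under simplifying assumptions (the actual design's indirection and replacement details differ):

```python
import random

class RollingCacheSketch:
    """Illustrative model: an indirection table maps address index bits to
    physical sets and is periodically re-pointed ("rolled"), so the group
    of addresses contending for any given set keeps changing at runtime."""

    def __init__(self, n_sets=64, ways=8, roll_every=1000):
        self.indir = list(range(n_sets))        # index bits -> physical set
        self.sets = {s: [] for s in range(n_sets)}
        self.n_sets, self.ways = n_sets, ways
        self.roll_every, self.accesses = roll_every, 0

    def access(self, addr):
        self.accesses += 1
        if self.accesses % self.roll_every == 0:
            # Roll one indirection entry; future contention groups shift,
            # so a precomputed eviction set quickly goes stale.
            self.indir[random.randrange(self.n_sets)] = random.randrange(self.n_sets)
        s = self.indir[addr % self.n_sets]
        if addr in self.sets[s]:
            return True
        if len(self.sets[s]) >= self.ways:
            self.sets[s].pop(random.randrange(self.ways))
        self.sets[s].append(addr)
        return False
```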
arXiv Detail & Related papers (2024-08-16T15:11:12Z) - Efficient Inference of Vision Instruction-Following Models with Elastic Cache [76.44955111634545]
We introduce Elastic Cache, a novel strategy for efficient deployment of instruction-following large vision-language models.
We propose an importance-driven cache merging strategy to prune redundant cache entries.
For instruction encoding, we use frequency to evaluate the importance of cached entries.
Results on a range of LVLMs demonstrate that Elastic Cache not only boosts efficiency but also notably outperforms existing pruning methods in language generation.
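A minimal sketch of an importance-driven KV budget with frequency as the importance signal; the averaging merge rule is an illustrative assumption, not the paper's exact strategy:

```python
class ElasticKVCacheSketch:
    """Each cached (key, value) pair carries a usage counter; when the
    budget is exceeded, the least-frequently-attended entry is merged into
    a neighbor instead of being dropped outright."""

    def __init__(self, budget=512):
        self.entries = []                  # list of [key_vec, val_vec, freq]
        self.budget = budget

    def note_attention(self, idx):
        self.entries[idx][2] += 1          # frequency as an importance proxy

    def append(self, k, v):
        self.entries.append([k, v, 0])
        if len(self.entries) > self.budget:
            i = min(range(len(self.entries)), key=lambda j: self.entries[j][2])
            victim = self.entries.pop(i)
            if self.entries:               # merge victim into a neighbor
                tgt = self.entries[min(i, len(self.entries) - 1)]
                tgt[0] = [(a + b) / 2 for a, b in zip(tgt[0], victim[0])]
                tgt[1] = [(a + b) / 2 for a, b in zip(tgt[1], victim[1])]
```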
arXiv Detail & Related papers (2024-07-25T15:29:05Z) - Hidden Web Caches Discovery [3.9272151228741716]
This paper presents a novel methodology for cache detection using timing analysis.
Our approach eliminates the dependency on cache status headers, making it applicable to any web server.
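A minimal timing-only probe along these lines, using the `requests` library; the request count and 0.6x threshold are illustrative assumptions:

```python
import statistics
import requests

def looks_cached(url, n=5):
    """Repeated requests to the same URL should get faster once an
    intermediate cache kicks in, even when no Age/X-Cache headers are
    exposed; compare the first fetch against later ones."""
    first = requests.get(url).elapsed.total_seconds()
    repeats = [requests.get(url).elapsed.total_seconds() for _ in range(n)]
    return statistics.median(repeats) < 0.6 * first
```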
arXiv Detail & Related papers (2024-07-23T08:58:06Z) - SEA Cache: A Performance-Efficient Countermeasure for Contention-based Attacks [4.144828482272047]
We extend an existing secure cache design, the CEASER-SH cache, and propose the SEA cache.
The novel configuration in both caches is logical associativity, which allows a cache line to be placed not only in its mapped cache set but also in subsequent cache sets.
Compared to a CEASER-SH cache with logical associativity of 8, an SEA cache with logical associativity of 1 for normal-protection users and 16 for high-protection users has a Cycles Per Instruction penalty that is about 0.6% lower for users under normal protection, and it provides better security against contention-based attacks.
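A toy lookup model of logical associativity under simplifying assumptions (the index remapping and re-keying of CEASER-SH and SEA cache are omitted):

```python
class LogicalAssociativitySketch:
    """A line may live in its mapped set or in the next L-1 sets, so a
    lookup probes a window of sets. A per-access L (e.g. 1 vs 16) mirrors
    SEA cache's per-user protection levels."""

    def __init__(self, n_sets=64, ways=8):
        self.sets = {s: [] for s in range(n_sets)}
        self.n_sets, self.ways = n_sets, ways

    def access(self, addr, logical_assoc):
        home = addr % self.n_sets
        window = [(home + i) % self.n_sets for i in range(logical_assoc)]
        for s in window:                   # probe every set in the window
            if addr in self.sets[s]:
                return True
        s = min(window, key=lambda t: len(self.sets[t]))  # least-occupied fill
        if len(self.sets[s]) >= self.ways:
            self.sets[s].pop(0)
        self.sets[s].append(addr)
        return False
```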
arXiv Detail & Related papers (2024-05-30T13:12:53Z) - Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference [78.65321721142624]
We focus on a memory bottleneck imposed by the key-value (KV) cache.
Existing KV cache methods approach this problem by pruning or evicting large swaths of relatively less important KV pairs.
We propose LESS, a simple integration of a constant sized cache with eviction-based cache methods.
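A sketch of that pairing under simplifying assumptions: a sliding-window eviction cache plus a constant-sized summary that absorbs evicted pairs (LESS learns small low-rank maps for this role; a running mean stands in here):

```python
class LessStyleKVSketch:
    """Recent KV pairs are kept exactly; anything the window evicts is
    folded into a fixed-size residual state instead of being discarded."""

    def __init__(self, window=256, dim=64):
        self.window, self.recent = window, []    # exact recent KV pairs
        self.state_k = [0.0] * dim               # constant-size summaries
        self.state_v = [0.0] * dim
        self.absorbed = 0

    def append(self, k, v):
        self.recent.append((k, v))
        if len(self.recent) > self.window:
            ek, ev = self.recent.pop(0)          # evicted pair
            self.absorbed += 1
            a = 1.0 / self.absorbed              # incremental running mean
            self.state_k = [s + a * (x - s) for s, x in zip(self.state_k, ek)]
            self.state_v = [s + a * (x - s) for s, x in zip(self.state_v, ev)]
```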
arXiv Detail & Related papers (2024-02-14T18:54:56Z) - Systematic Evaluation of Randomized Cache Designs against Cache Occupancy [11.018866935621045]
This work fills a crucial gap in the current literature on randomized caches.
Most randomized cache designs defend only against contention-based attacks and leave out considerations of cache occupancy.
Our results establish the need to also consider the cache occupancy side-channel when designing randomized caches.
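For intuition, a sketch of how an occupancy channel is typically measured (the `timed_read` helper is hypothetical; on real hardware it would wrap a cycle counter around a load, and the threshold is an assumption):

```python
def timed_read(line):
    """Hypothetical helper returning the latency, in cycles, of one
    cache-line-sized read. A real probe would use rdtsc around a load."""
    raise NotImplementedError

def occupancy_probe(buffer_lines, run_victim, threshold_cycles=80):
    # Prime: walk a buffer as large as the cache so the attacker owns
    # (almost) every line, regardless of how addresses are randomized.
    for line in buffer_lines:
        timed_read(line)
    run_victim()  # victim activity displaces some attacker lines
    # Probe: slow re-accesses count lines the victim displaced, i.e. its
    # cache occupancy -- a signal that set randomization does not remove.
    return sum(timed_read(line) > threshold_cycles for line in buffer_lines)
```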
arXiv Detail & Related papers (2023-10-08T14:06:06Z) - Random and Safe Cache Architecture to Defeat Cache Timing Attacks [5.142233612851766]
Caches have been exploited to leak secret information due to the different times they take to handle memory accesses.
We present a systematic view of the attack and defense space and show that no existing defense has addressed all cache timing attacks.
We propose Random and Safe (RaS) cache architectures to decorrelate cache state changes from memory requests.
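A sketch of the decorrelation idea in the spirit of random fill, under simplifying assumptions (RaS's safe-range constraints are omitted):

```python
import random

class RandomFillSketch:
    """On a demand miss the data is returned without deterministically
    caching that exact line; a randomly chosen line from a nearby window
    is filled instead, so cache-state changes no longer identify the
    accessed address."""

    def __init__(self, capacity=512, window=8):
        self.lines, self.capacity, self.window = set(), capacity, window

    def access(self, addr):
        if addr in self.lines:
            return True                  # hit: state already held the line
        # Miss: fill a random neighbor, possibly not addr itself.
        fill = addr + random.randrange(-self.window, self.window + 1)
        if len(self.lines) >= self.capacity:
            self.lines.remove(random.choice(tuple(self.lines)))
        self.lines.add(fill)
        return False
```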
arXiv Detail & Related papers (2023-09-28T05:08:16Z) - Accelerating Deep Learning Classification with Error-controlled Approximate-key Caching [72.50506500576746]
We propose a novel caching paradigm that we name approximate-key caching.
While approximate cache hits alleviate DL inference workload and increase system throughput, they introduce an approximation error.
We analytically model our caching system performance for classic LRU and ideal caches, we perform a trace-driven evaluation of the expected performance, and we compare the benefits of our proposed approach with the state-of-the-art similarity caching.
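A minimal sketch of approximate-key lookup over classic LRU, assuming quantized feature vectors as the approximate key (the grid step is an illustrative knob trading hit rate against approximation error):

```python
from collections import OrderedDict

class ApproximateKeyLRU:
    """Feature vectors are quantized to a coarse grid so near-identical
    inputs map to the same cache key; entries are managed with plain LRU."""

    def __init__(self, capacity=1024, step=0.25):
        self.cache, self.capacity, self.step = OrderedDict(), capacity, step

    def _key(self, features):
        return tuple(round(x / self.step) for x in features)

    def get_or_compute(self, features, model_fn):
        k = self._key(features)
        if k in self.cache:
            self.cache.move_to_end(k)       # approximate hit: skip inference
            return self.cache[k]
        y = model_fn(features)              # miss: run the DL classifier
        self.cache[k] = y
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the LRU entry
        return y
```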
arXiv Detail & Related papers (2021-12-13T13:49:11Z) - Reinforcement Learning for Caching with Space-Time Popularity Dynamics [61.55827760294755]
Caching is envisioned to play a critical role in next-generation networks.
To intelligently prefetch and store contents, a cache node should be able to learn what and when to cache.
This chapter presents a versatile reinforcement learning based approach for near-optimal caching policy design.
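As a toy, bandit-style simplification of that idea (the chapter's method handles space-time popularity dynamics far more carefully; the reward scheme here is an illustrative assumption):

```python
import random
from collections import defaultdict

def q_caching(request_stream, cache_size=4, alpha=0.1, eps=0.1):
    """On each miss, an epsilon-greedy agent decides whether to admit the
    requested item; the admitting decision is rewarded when the item is
    later served from the cache."""
    cache, Q, hits = [], defaultdict(float), 0
    pending = {}                      # item -> (state, action) to credit
    for item in request_stream:
        if item in cache:
            hits += 1
            if item in pending:       # reward the decision that admitted it
                s, a = pending.pop(item)
                Q[(s, a)] += alpha * (1.0 - Q[(s, a)])
            continue
        if random.random() < eps:
            a = random.choice((0, 1))
        else:
            a = 1 if Q[(item, 1)] >= Q[(item, 0)] else 0
        if a == 1:                    # admit, evicting a random entry if full
            if len(cache) >= cache_size:
                evicted = random.choice(cache)
                cache.remove(evicted)
                pending.pop(evicted, None)  # its admission never paid off
            cache.append(item)
            pending[item] = (item, 1)
        else:                         # skip: nudge this action toward zero
            Q[(item, 0)] += alpha * (0.0 - Q[(item, 0)])
    return hits
```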
arXiv Detail & Related papers (2020-05-19T01:23:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.