Yet Another Mirage of Breaking MIRAGE: Debunking Occupancy-based Side-Channel Attacks on Fully Associative Randomized Caches
- URL: http://arxiv.org/abs/2508.10431v3
- Date: Sun, 31 Aug 2025 05:43:55 GMT
- Title: Yet Another Mirage of Breaking MIRAGE: Debunking Occupancy-based Side-Channel Attacks on Fully Associative Randomized Caches
- Authors: Chris Cao, Gururaj Saileshwar
- Abstract summary: Recent work presented at USENIX Security 2025 claims that occupancy-based attacks can recover AES keys from the MIRAGE randomized cache. In this paper, we examine these claims and find that they arise from a modeling flaw in the SEC'25 paper.
- Score: 3.036596192442501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work presented at USENIX Security 2025 (SEC'25) claims that occupancy-based attacks can recover AES keys from the MIRAGE randomized cache. In this paper, we examine these claims and find that they arise from a modeling flaw in the SEC'25 paper. Most critically, the SEC'25 paper's simulation of MIRAGE uses a constant seed to initialize the random number generator used for global evictions in MIRAGE, causing every AES encryption they trace to evict the same deterministic sequence of cache lines. This artificially creates a highly repeatable timing pattern that is not representative of a realistic implementation of MIRAGE, where eviction sequences vary randomly between encryptions. When we instead randomize the eviction seed for each run, reflecting realistic operation, the correlation between AES T-table accesses and attacker runtimes disappears, and the attack fails. These findings show that the reported leakage is an artifact of incorrect modeling, and not an actual vulnerability in MIRAGE.
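The flaw the abstract describes can be illustrated with a minimal Python sketch. This is a toy model, not MIRAGE's actual implementation: `eviction_sequence` is a hypothetical stand-in for MIRAGE's random global evictions, and the cache size and eviction count are arbitrary. The point is only the seeding behavior: a constant seed (as in the SEC'25 simulation) reproduces the identical eviction sequence on every run, whereas per-run random seeding (realistic operation) does not.

```python
import random

def eviction_sequence(seed, cache_size=1024, n_evictions=8):
    """Toy model of random global evictions: each eviction picks a
    cache line uniformly at random from the RNG seeded with `seed`."""
    rng = random.Random(seed)
    return [rng.randrange(cache_size) for _ in range(n_evictions)]

# Constant seed (the modeling flaw): every traced "encryption"
# evicts the same deterministic sequence of cache lines, creating
# an artificially repeatable timing pattern.
runs_fixed = [eviction_sequence(seed=42) for _ in range(3)]
assert runs_fixed[0] == runs_fixed[1] == runs_fixed[2]

# Randomized seed per run (realistic operation): eviction sequences
# vary between encryptions, so no repeatable pattern survives.
runs_random = [eviction_sequence(seed=random.SystemRandom().randrange(2**32))
               for _ in range(3)]
```

With 1024 possible lines per eviction, the chance of two independently seeded runs producing the same 8-eviction sequence is negligible, which is why averaging traces across runs no longer reinforces a single timing signature.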
Related papers
- Improved Pseudorandom Codes from Permuted Puzzles [28.363531214070196]
We construct pseudorandom codes that achieve subexponential pseudorandomness security, robustness to worst-case edits over a binary alphabet, and robustness against even computationally unbounded adversaries. We show that this conjecture is implied by the permuted puzzles conjecture used previously to construct doubly efficient private information retrieval.
arXiv Detail & Related papers (2025-12-09T18:53:42Z) - Can Transformer Memory Be Corrupted? Investigating Cache-Side Vulnerabilities in Large Language Models [0.0]
This paper introduces Malicious Token Injection (MTI), a modular framework that perturbs cached key vectors at selected layers and timesteps through controlled magnitude and frequency. Empirical results show that MTI significantly alters next-token distributions and downstream task performance across GPT-2 and LLaMA-2/7B, and destabilizes retrieval-augmented and agentic reasoning pipelines.
arXiv Detail & Related papers (2025-10-20T02:04:18Z) - Addendum: Systematic Evaluation of Randomized Cache Designs against Cache Occupancy [11.596892993188938]
This note is meant as an addendum to the main text published at USENIX Security 2025. In this note, we argue that L1d cache size plays a role in adversarial success, and that a patched version of MIRAGE with randomized initial seeding of the global eviction map prevents leakage of the AES key.
arXiv Detail & Related papers (2025-10-19T15:13:25Z) - Collusion-Driven Impersonation Attack on Channel-Resistant RF Fingerprinting [15.295614131186142]
We propose a collusion-driven impersonation attack that achieves deceptive RF-level mimicry.<n>The proposed attack scheme essentially maintains a success rate of over 95% under different channel conditions.
arXiv Detail & Related papers (2025-09-26T10:13:34Z) - Unveiling ECC Vulnerabilities: LSTM Networks for Operation Recognition in Side-Channel Attacks [6.373405051241682]
We propose a novel approach for performing side-channel attacks on elliptic curve cryptography. We adopt a long short-term memory (LSTM) neural network to analyze a power trace and identify patterns of operation. We show that current countermeasures, specifically the coordinate randomization technique, are not sufficient to protect against side channels.
arXiv Detail & Related papers (2025-02-24T17:02:40Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM). To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - The Illusion of Randomness: An Empirical Analysis of Address Space Layout Randomization Implementations [4.939948478457799]
Real-world implementations of Address Space Layout Randomization are imperfect and subject to weaknesses that attackers can exploit.
This work evaluates the effectiveness of ASLR on major desktop platforms, including Linux and Windows.
We find a significant reduction in the entropy of libraries after Linux version 5.18 and identify correlation paths that an attacker could leverage to significantly reduce exploitation complexity.
arXiv Detail & Related papers (2024-08-27T14:46:04Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - Systematic Evaluation of Randomized Cache Designs against Cache Occupancy [11.018866935621045]
This work fills a crucial gap in the current literature on randomized caches. Most randomized cache designs defend only against contention-based attacks and leave out considerations of cache occupancy. Our results establish the need to also consider the cache occupancy side-channel in randomized cache designs.
arXiv Detail & Related papers (2023-10-08T14:06:06Z) - PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce a novel problem of PRofiling Adversarial aTtacks (PRAT)
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use AID to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Security and Privacy Enhanced Gait Authentication with Random Representation Learning and Digital Lockers [3.3549957463189095]
Gait data captured by inertial sensors have demonstrated promising results on user authentication.
Most existing approaches store the enrolled gait pattern insecurely for matching, thus posing critical security and privacy issues.
We present a gait cryptosystem that generates a random key from gait data for user authentication while securing the gait pattern.
arXiv Detail & Related papers (2021-08-05T06:34:42Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Recovering AES Keys with a Deep Cold Boot Attack [91.22679787578438]
Cold boot attacks inspect the corrupted random access memory soon after the power has been shut down.
In this work, we combine a novel cryptographic variant of a deep error correcting code technique with a modified SAT solver scheme to apply the attack on AES keys.
Our results show that our methods outperform the state of the art attack methods by a very large margin.
arXiv Detail & Related papers (2021-06-09T07:57:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.