Last-Level Cache Side-Channel Attacks Are Feasible in the Modern Public Cloud (Extended Version)
- URL: http://arxiv.org/abs/2405.12469v1
- Date: Tue, 21 May 2024 03:05:29 GMT
- Title: Last-Level Cache Side-Channel Attacks Are Feasible in the Modern Public Cloud (Extended Version)
- Authors: Zirui Neil Zhao, Adam Morrison, Christopher W. Fletcher, Josep Torrellas
- Abstract summary: We present an end-to-end, cross-tenant attack on a vulnerable ECDSA implementation in the public FaaS Google Cloud Run environment.
We introduce several new techniques to improve every step of the attack.
Overall, we extract a median value of 81% of the secret ECDSA nonce bits from a victim container in 19 seconds on average.
- Score: 16.594665501866675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Last-level cache side-channel attacks have been mostly demonstrated in highly-controlled, quiescent local environments. Hence, it is unclear whether such attacks are feasible in a production cloud environment. In the cloud, side channels are flooded with noise from activities of other tenants and, in Function-as-a-Service (FaaS) workloads, the attacker has a very limited time window to mount the attack. In this paper, we show that such attacks are feasible in practice, although they require new techniques. We present an end-to-end, cross-tenant attack on a vulnerable ECDSA implementation in the public FaaS Google Cloud Run environment. We introduce several new techniques to improve every step of the attack. First, to speed-up the generation of eviction sets, we introduce L2-driven candidate address filtering and a Binary Search-based algorithm for address pruning. Second, to monitor victim memory accesses with high time resolution, we introduce Parallel Probing. Finally, we leverage power spectral density from signal processing to easily identify the victim's target cache set in the frequency domain. Overall, using these mechanisms, we extract a median value of 81% of the secret ECDSA nonce bits from a victim container in 19 seconds on average.
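To make the eviction-set step more concrete, below is a minimal Python sketch of one way a binary search over prefix length can prune a large pool of candidate addresses down to a minimal eviction set. The `evicts()` oracle, the assumption that the full pool evicts the target while fewer than `assoc` congruent addresses do not, and the exact search strategy are all illustrative; the paper's actual pruning algorithm is not reproduced here.

```python
def find_eviction_set(candidates, evicts, assoc):
    """Prune `candidates` down to `assoc` congruent addresses.

    `evicts(addrs)` is a hypothetical oracle: True iff accessing every
    address in `addrs` evicts the target line (e.g., measured by timing
    a reload of the target). Assumes the full pool evicts the target,
    and that fewer than `assoc` congruent addresses alone do not.
    """
    found = []
    hi = len(candidates)
    while len(found) < assoc:
        lo = 0
        # Invariant: found + candidates[:hi] evicts; found + candidates[:lo] does not.
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if evicts(found + candidates[:mid]):
                hi = mid
            else:
                lo = mid
        found.append(candidates[hi - 1])  # the address that tips the set into evicting
        hi -= 1                           # it is now in `found`; the prefix shrinks by one
    return found
```

Each congruent address is located in O(log N) oracle calls, so the whole set costs O(assoc · log N) timed probes rather than the O(N) of naive one-address-at-a-time pruning.

The last step of the attack identifies the victim's target cache set in the frequency domain. As a rough illustration (not the authors' code), the sketch below scores each monitored set by the power spectral density of its probe-latency trace, computed with Welch's method from SciPy; the trace layout, sampling rate `fs`, and the frequency band `[f_lo, f_hi]` where the victim's periodic activity is expected are all assumptions.

```python
import numpy as np
from scipy.signal import welch

def psd_peak_score(trace, fs, f_lo, f_hi):
    """Score one cache set: peak PSD inside the band where the victim's
    periodic accesses are expected, relative to the overall noise floor."""
    x = np.asarray(trace, dtype=float)
    freqs, psd = welch(x - x.mean(), fs=fs)   # remove DC, then Welch PSD
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].max() / np.median(psd)

# The victim's target set is the one with the strongest spectral peak:
# target = max(range(len(traces)), key=lambda s: psd_peak_score(traces[s], fs, f_lo, f_hi))
```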
Related papers
- Security Testbed for Preempting Attacks against Supercomputing Infrastructure [1.9097277955963794]
This paper describes a security testbed embedded in live traffic of a supercomputer at the National Center for Supercomputing Applications.
The objective is to demonstrate attack preemption, i.e., stopping system compromise and data breaches at petascale supercomputers.
arXiv Detail & Related papers (2024-09-15T03:42:47Z)
- Dynamic Frequency-Based Fingerprinting Attacks against Modern Sandbox Environments [7.753621963239778]
We investigate the possibility of fingerprinting containers through CPU frequency reporting sensors in Intel and AMD CPUs.
We demonstrate that Docker images exhibit a unique frequency signature, enabling the distinction of different containers with up to 84.5% accuracy.
Our empirical results show that these attacks can also be carried out successfully against all of these sandboxes in less than 40 seconds.
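As a rough illustration of the kind of measurement such a fingerprint builds on, the sketch below samples the Linux cpufreq sysfs interface from Python. The path, the sampling parameters, and the idea of matching the raw trace against reference traces of known images are assumptions; the paper's actual sensors and classifier are not shown here.

```python
import time

def sample_cpu_freq(n=2000, interval_s=0.001,
                    path="/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"):
    """Sample the reported core frequency (in kHz) at a fixed rate.

    The resulting trace can be compared against reference traces
    recorded for known container images to form a fingerprint."""
    trace = []
    with open(path) as f:
        for _ in range(n):
            f.seek(0)              # re-read the sysfs file for each sample
            trace.append(int(f.read()))
            time.sleep(interval_s)
    return trace
```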
arXiv Detail & Related papers (2024-04-16T16:45:47Z)
- Privacy preserving layer partitioning for Deep Neural Network models [0.21470800327528838]
Trusted Execution Environments (TEEs) can introduce significant performance overhead due to additional layers of encryption, decryption, security and integrity checks.
We introduce a layer partitioning technique that offloads computations to the GPU.
We conduct experiments to demonstrate the effectiveness of our approach in protecting against input reconstruction attacks developed using a trained conditional Generative Adversarial Network (c-GAN).
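To illustrate the partitioning idea, here is a minimal PyTorch sketch that keeps the first `k` layers on the trusted side and offloads the rest to the GPU. Plain CPU execution stands in for the TEE here, and the split point `k` is a hypothetical parameter; running the trusted part inside an actual enclave is beyond this sketch.

```python
import torch.nn as nn

class PartitionedModel(nn.Module):
    """Keep the first `k` layers on the trusted (TEE) side and offload the
    rest to the GPU, so only intermediate activations, never the raw
    input, cross the protected boundary."""
    def __init__(self, layers, k, device="cuda"):
        super().__init__()
        self.trusted = nn.Sequential(*layers[:k])              # stays in the TEE
        self.offloaded = nn.Sequential(*layers[k:]).to(device)
        self.device = device

    def forward(self, x):
        h = self.trusted(x)                       # raw input processed privately
        return self.offloaded(h.to(self.device))  # only activations leave
```

The split point trades protection for speed: the deeper the trusted part, the harder it is to reconstruct the input from the exposed activations, but the less computation is offloaded.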
arXiv Detail & Related papers (2024-04-11T02:39:48Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Carry Your Fault: A Fault Propagation Attack on Side-Channel Protected LWE-based KEM [12.164927192334748]
We propose a new fault attack on side-channel-secure masked implementations of LWE-based key-encapsulation mechanisms.
We exploit the data dependency of the adder carry chain in A2B (arithmetic-to-Boolean masking conversion) and extract sensitive information.
We show key recovery attacks on Kyber, although the leakage also exists for other schemes like Saber.
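To see why the carry chain is data-dependent, consider recombining two arithmetic shares: each ripple-carry bit depends on the secret sum, which is what a propagated fault can expose. A toy Python illustration (not the paper's attack):

```python
def ripple_carries(A, r, k):
    """Carry bits produced when recombining arithmetic shares A and r,
    i.e., x = (A + r) mod 2^k. Every carry depends on secret bits of x,
    so a fault injected into the carry chain leaks information."""
    carries, c = [], 0
    for i in range(k):
        a, b = (A >> i) & 1, (r >> i) & 1
        c = (a & b) | (a & c) | (b & c)  # standard full-adder carry
        carries.append(c)
    return carries
```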
arXiv Detail & Related papers (2024-01-25T11:18:43Z)
- ROOM: Adversarial Machine Learning Attacks Under Real-Time Constraints [3.042299765078767]
This paper introduces a new problem: how do we generate adversarial noise under real-time constraints to support real-time adversarial attacks?
We show how an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.
arXiv Detail & Related papers (2022-01-05T14:03:26Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine the sparsity constraint with an additional $\ell_\infty$ bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Transferable Sparse Adversarial Attack [62.134905824604104]
We introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples.
Our method achieves superior inference speed, 700$\times$ faster than other optimization-based methods.
arXiv Detail & Related papers (2021-05-31T06:44:58Z)
- Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions.
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
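The redistribution step described above can be sketched as follows. This is a loose NumPy illustration of the idea, not the authors' implementation; the amplification factor `beta` and the uniform project kernel are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def patchwise_step(delta, grad, alpha, beta, eps, kernel_size=3):
    """One iteration: take an amplified signed-gradient step, then reassign
    the part of each pixel's perturbation that overflows the eps-ball to
    its surrounding region instead of discarding it via clipping."""
    delta = delta + alpha * beta * np.sign(grad)          # amplified step
    overflow = np.sign(delta) * np.maximum(np.abs(delta) - eps, 0.0)
    delta = np.clip(delta, -eps, eps)
    # Spread the clipped-off overflow onto neighbouring pixels:
    return np.clip(delta + uniform_filter(overflow, size=kernel_size), -eps, eps)
```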
arXiv Detail & Related papers (2020-12-31T08:40:42Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.