A Survey of Side-Channel Attacks in Context of Cache -- Taxonomies, Analysis and Mitigation
- URL: http://arxiv.org/abs/2312.11094v1
- Date: Mon, 18 Dec 2023 10:46:23 GMT
- Title: A Survey of Side-Channel Attacks in Context of Cache -- Taxonomies, Analysis and Mitigation
- Authors: Ankit Pulkit, Smita Naval, Vijay Laxmi
- Abstract summary: Cache side-channel attacks lead among side-channel attacks, driven by the enormous growth in cache memory size over the last decade.
This paper presents a detailed study of cache side-channel attacks and compares different microarchitectures in the context of side-channel attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Side-channel attacks have become a prominent attack surface in cyberspace. Attackers exploit side information that a system emits while performing a task. Among the various side-channel attacks, cache side-channel attacks lead the field, driven by the enormous growth in cache memory size over the last decade, especially in the Last Level Cache (LLC). The adversary infers secret information from the observable behavior of shared cache memory. This paper presents a detailed study of cache side-channel attacks and compares different microarchitectures in the context of these attacks. Our main contributions are: (1) We summarize the fundamentals and essentials of side-channel attacks and their various attack surfaces (taxonomies), and we discuss different exploitation techniques, highlighting their capabilities and limitations. (2) We analyze the existing literature on cache side-channel attacks along parameters such as microarchitecture, cross-core exploitation, methodology, and target. (3) We provide a detailed analysis of existing mitigation strategies against cache side-channel attacks, covering hardware- and software-based countermeasures, examining their strengths and weaknesses, and discussing the associated challenges and trade-offs. This survey aims to give the research community a deeper understanding of the threats posed by these attacks, together with valuable insights into effective defense mechanisms.
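The core primitive behind the attacks this survey covers is timing a memory access to distinguish a cache hit from a miss. Below is a minimal C sketch of Flush+Reload, one widely studied exploitation technique in this family. It assumes an x86-64 CPU with clflush and rdtscp; the names (probe, line_was_touched, shared_buf) and the 120-cycle threshold are illustrative assumptions, not details taken from the survey.

```c
/* Illustrative Flush+Reload sketch (not the survey's reference code).
 * Assumes x86-64 with SSE2 (clflush) and rdtscp. In a real attack,
 * `shared_buf` would be memory shared with the victim, e.g. a page
 * of a mapped shared library. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>  /* _mm_clflush, __rdtscp, _mm_mfence, _mm_lfence */

static uint8_t shared_buf[4096]; /* stand-in for victim-shared memory */

/* Time one load of `addr` in cycles. */
static uint64_t probe(const volatile void *addr)
{
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*(const volatile uint8_t *)addr; /* the measured load */
    uint64_t end = __rdtscp(&aux);
    _mm_lfence();
    return end - start;
}

/* One round: flush the monitored line, let the victim run, then
 * reload; a fast reload means the victim touched the line. */
static int line_was_touched(const volatile void *addr, uint64_t threshold)
{
    _mm_clflush((const void *)addr);
    /* ... a real attacker waits here while the victim executes ... */
    return probe(addr) < threshold;
}

int main(void)
{
    const uint64_t threshold = 120; /* assumed; calibrate per CPU */
    /* No victim access between flush and reload, so expect 0 (miss). */
    printf("touched=%d\n", line_was_touched(shared_buf, threshold));
    return 0;
}
```

In practice the hit/miss threshold is calibrated per machine by timing known-cached and known-flushed loads; the same measurement loop underlies Prime+Probe and Evict+Reload variants of the exploitation techniques the survey compares.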
Related papers
- Mind the Gap: Detecting Black-box Adversarial Attacks in the Making through Query Update Analysis [3.795071937009966]
Adversarial attacks can jeopardize the integrity of Machine Learning (ML) models.
We propose a framework that detects if an adversarial noise instance is being generated.
We evaluate our approach against 8 state-of-the-art attacks, including adaptive attacks.
arXiv Detail & Related papers (2025-03-04T20:25:12Z) - Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z) - Hybrid Deep Learning Model for Multiple Cache Side Channel Attacks Detection: A Comparative Analysis [0.0]
Cache side channel attacks leverage weaknesses in shared computational resources.
This study focuses on a specific class of these threats: fingerprinting attacks.
A hybrid deep learning model is proposed for detecting cache side channel attacks.
arXiv Detail & Related papers (2025-01-28T18:14:43Z) - Attacking Slicing Network via Side-channel Reinforcement Learning Attack [9.428116807615407]
We introduce a reinforcement learning-based side-channel cache attack framework specifically designed for network slicing environments.
Our framework dynamically identifies and exploits cache locations storing sensitive information, such as authentication keys and user registration data.
Experimental results showcase the superiority of our approach, achieving a success rate of approximately 95% to 98%.
arXiv Detail & Related papers (2024-09-17T15:07:05Z) - RollingCache: Using Runtime Behavior to Defend Against Cache Side Channel Attacks [2.9221371172659616]
We present RollingCache, a cache design that defends against contention attacks by dynamically changing the set of addresses contending for cache sets.
RollingCache does not rely on address encryption/decryption, data relocation, or cache partitioning.
Our solution does not depend on having defined security domains, and can defend against an attacker running on the same or another core.
arXiv Detail & Related papers (2024-08-16T15:11:12Z) - Poisoning Attacks and Defenses in Recommender Systems: A Survey [39.25402612579371]
Modern recommender systems (RS) have profoundly enhanced user experience across digital platforms, yet they face significant threats from poisoning attacks.
This survey presents a unique perspective by examining these threats through the lens of an attacker.
We detail a systematic pipeline that encompasses four stages of a poisoning attack: setting attack goals, assessing attacker capabilities, analyzing victim architecture, and implementing poisoning strategies.
arXiv Detail & Related papers (2024-06-03T06:08:02Z) - Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Penetrating Shields: A Systematic Analysis of Memory Corruption Mitigations in the Spectre Era [6.212464457097657]
We study speculative shield bypass attacks that leverage speculative execution attacks to leak secrets that are critical to the security of memory corruption mitigations.
We present three proof-of-concept attacks targeting an already-deployed mitigation mechanism and two state-of-the-art academic proposals.
arXiv Detail & Related papers (2023-09-08T04:43:33Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z) - Leaky Nets: Recovering Embedded Neural Network Models and Inputs through Simple Power and Timing Side-Channels -- Attacks and Defenses [4.014351341279427]
We study the side-channel vulnerabilities of embedded neural network implementations by recovering their parameters.
We demonstrate our attacks on popular microcontroller platforms using networks of different precisions.
Countermeasures against timing-based attacks are implemented and their overheads are analyzed.
arXiv Detail & Related papers (2021-03-26T21:28:13Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)