Memory Under Siege: A Comprehensive Survey of Side-Channel Attacks on Memory
- URL: http://arxiv.org/abs/2505.04896v1
- Date: Thu, 08 May 2025 02:16:08 GMT
- Title: Memory Under Siege: A Comprehensive Survey of Side-Channel Attacks on Memory
- Authors: MD Mahady Hassan, Shanto Roy, Reza Rahaeimehr
- Abstract summary: Side-channel attacks on memory (SCAM) exploit unintended data leaks from memory subsystems to infer sensitive information. This research aims to examine SCAM, classify various attack techniques, and evaluate existing defense mechanisms.
- Score: 0.210674772139335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Side-channel attacks on memory (SCAM) exploit unintended data leaks from memory subsystems to infer sensitive information, posing significant threats to system security. These attacks leverage vulnerabilities in memory access patterns, cache behaviors, and other microarchitectural features to bypass traditional security measures. The purpose of this research is to examine SCAM, classify various attack techniques, and evaluate existing defense mechanisms. It guides researchers and industry professionals in improving memory security and mitigating emerging threats. We begin by identifying the major vulnerabilities in the memory system that are frequently exploited in SCAM, such as cache timing, speculative execution, Rowhammer, and other sophisticated approaches. Next, we outline a comprehensive taxonomy that systematically classifies these attacks based on their types, target systems, attack vectors, and the adversarial capabilities required to execute them. In addition, we review the current landscape of mitigation strategies, emphasizing their strengths and limitations. This work provides a comprehensive overview of memory-based side-channel attacks, offering insights that help researchers and practitioners better understand, detect, and mitigate SCAM risks.
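To make the cache-timing class of SCAM concrete, the sketch below shows the core Flush+Reload measurement loop on x86-64 (GCC/Clang intrinsics). This is a minimal illustration, not code from the surveyed paper: the shared buffer, the `probe` helper, and the ~100-cycle threshold are assumptions that vary by machine and must be calibrated in practice.

```c
/* Minimal Flush+Reload cache-timing sketch (x86-64, GCC/Clang).
 * Illustrative only: buffer, helper names, and the latency threshold
 * are assumptions, not taken from the surveyed paper. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t shared_buffer[4096];           /* stand-in for a page shared with a victim */

/* Time a single access to `addr` in cycles using rdtscp. */
static uint64_t probe(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                              /* load: fast if the line is cached */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    volatile uint8_t *target = &shared_buffer[0];

    _mm_clflush((const void *)target);        /* FLUSH: evict the monitored cache line */
    /* ... let the victim run; if it touches `target`,
       the line is pulled back into the cache ... */
    uint64_t cycles = probe(target);          /* RELOAD: measure the access latency */

    /* Low latency suggests the victim accessed the line; the threshold
       (here ~100 cycles) is machine dependent and purely illustrative. */
    printf("access latency: %llu cycles -> %s\n",
           (unsigned long long)cycles,
           cycles < 100 ? "likely cached (victim access)" : "likely uncached");
    return 0;
}
```

Related techniques surveyed in the paper, such as Prime+Probe, follow the same measure-then-infer pattern but avoid the need for shared memory by observing evictions from attacker-controlled cache sets.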
Related papers
- Beyond Vulnerabilities: A Survey of Adversarial Attacks as Both Threats and Defenses in Computer Vision Systems [5.787505062263962]
Adversarial attacks against computer vision systems have emerged as a critical research area that challenges the fundamental assumptions about neural network robustness and security. This comprehensive survey examines the evolving landscape of adversarial techniques, revealing their dual nature as both sophisticated security threats and valuable defensive tools.
arXiv Detail & Related papers (2025-08-03T17:02:05Z) - A Survey on Model Extraction Attacks and Defenses for Large Language Models [55.60375624503877]
Model extraction attacks pose significant security threats to deployed language models. This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks. We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z) - LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures [49.1574468325115]
This survey seeks to define and categorize the various attacks targeting large language models (LLMs). A thorough analysis of these attacks is presented, alongside an exploration of defense mechanisms designed to mitigate such threats.
arXiv Detail & Related papers (2025-05-02T10:35:26Z) - Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs. We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - SPECTRE: A Hybrid System for an Adaptative and Optimised Cyber Threats Detection, Response and Investigation in Volatile Memory [0.0]
This research introduces SPECTRE, a modular Cyber Incident Response System designed to enhance threat detection, investigation, and visualization. Its capabilities safely replicate realistic attack scenarios, such as credential dumping and malicious process injections, for controlled experimentation. SPECTRE's advanced visualization techniques transform raw memory data into actionable insights, aiding Red, Blue, and Purple teams in refining strategies and responding effectively to threats.
arXiv Detail & Related papers (2025-01-07T16:05:27Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - A Survey of Side-Channel Attacks in Context of Cache -- Taxonomies, Analysis and Mitigation [0.12289361708127873]
Cache side-channel attacks are among the leading threats, driven by the enormous growth in cache memory size over the last decade.
This paper provides a detailed study of cache side-channel attacks and compares different microarchitectures in the context of side-channel attacks.
arXiv Detail & Related papers (2023-12-18T10:46:23Z) - Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems [8.584570228761503]
Transferable attacks pose severe risks to security, privacy, and system integrity. This survey delivers the first comprehensive review of transferable attacks across seven major categories. We examine both the underlying mechanics and practical implications of transferable attacks on AI systems.
arXiv Detail & Related papers (2023-11-20T14:29:45Z) - Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System [0.0]
In Adversarial Machine Learning (AML), malicious actors aim to fool the Machine Learning (ML) and Deep Learning (DL) models to produce incorrect predictions.
AML is an emerging research domain, and in-depth study of adversarial attacks has become a necessity.
We implement four powerful adversarial attack techniques, namely Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W), in a NIDS.
arXiv Detail & Related papers (2023-10-05T06:32:56Z) - Penetrating Shields: A Systematic Analysis of Memory Corruption Mitigations in the Spectre Era [6.212464457097657]
We study speculative shield bypass attacks that leverage speculative execution attacks to leak secrets that are critical to the security of memory corruption mitigations.
We present three proof-of-concept attacks targeting an already-deployed mitigation mechanism and two state-of-the-art academic proposals.
arXiv Detail & Related papers (2023-09-08T04:43:33Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)