DRAM-Profiler: An Experimental DRAM RowHammer Vulnerability Profiling Mechanism
 - URL: http://arxiv.org/abs/2404.18396v1
 - Date: Mon, 29 Apr 2024 03:15:59 GMT
 - Title: DRAM-Profiler: An Experimental DRAM RowHammer Vulnerability Profiling Mechanism
 - Authors: Ranyang Zhou, Jacqueline T. Liu, Nakul Kochar, Sabbir Ahmed, Adnan Siraj Rakin, Shaahin Angizi
 - Abstract summary: This paper presents a low-overhead DRAM RowHammer vulnerability profiling technique termed DRAM-Profiler.
The proposed test vectors intentionally weaken the spatial correlation between aggressor and victim rows before an attack for evaluation.
The results uncover the significant variability among chips from different manufacturers in the type and quantity of RowHammer attacks that can be exploited by adversaries.
 - Score: 8.973443004379561
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: RowHammer stands out as a prominent example, potentially the pioneering one, showcasing how a failure mechanism at the circuit level can give rise to a significant and pervasive security vulnerability within systems. Prior research has approached RowHammer attacks within a static threat model framework; nonetheless, the threat warrants consideration within a more nuanced and dynamic model. This paper presents a low-overhead DRAM RowHammer vulnerability profiling technique termed DRAM-Profiler, which utilizes innovative test vectors to categorize memory cells into distinct security levels. The proposed test vectors intentionally weaken the spatial correlation between aggressor and victim rows before an attack for evaluation, thus aiding designers in mitigating RowHammer vulnerabilities in the mapping phase. To our knowledge, no previous research has demonstrated the impact of such profiling; our study methodically assesses 128 commercial DDR4 DRAM products. The results uncover significant variability among chips from different manufacturers in the type and quantity of RowHammer attacks that adversaries can exploit.
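The paper does not publish code, but the profiling flow it describes (hammer a victim row's neighbors, count the resulting bit flips, and bucket rows into security levels) can be sketched as below. This is a minimal, hypothetical C sketch: the row layout, hammer count, and level thresholds are invented, and because it simulates rows in an ordinary array rather than reverse-engineering the physical DRAM address mapping, it demonstrates control flow only and will observe no real flips.

```c
/* Hypothetical sketch of a DRAM-Profiler-style profiling loop.
 * Rows are simulated in an ordinary array; a real profiler needs
 * physically contiguous memory and the platform's DRAM mapping. */
#include <emmintrin.h>   /* _mm_clflush */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_ROWS     64
#define ROW_BYTES    8192      /* assumed row size            */
#define HAMMER_ITERS 500000    /* assumed activations/side    */

static uint8_t mem[NUM_ROWS][ROW_BYTES];   /* stand-in for DRAM rows */

/* Double-sided hammer: repeatedly read and flush both aggressors. */
static void hammer(volatile uint8_t *a1, volatile uint8_t *a2)
{
    for (long i = 0; i < HAMMER_ITERS; i++) {
        (void)*a1;
        (void)*a2;
        _mm_clflush((const void *)a1);   /* force the next reads to DRAM */
        _mm_clflush((const void *)a2);
    }
}

/* Count bits in the victim row that no longer match the fill pattern. */
static int count_flips(const uint8_t *victim, uint8_t pattern)
{
    int flips = 0;
    for (size_t i = 0; i < ROW_BYTES; i++)
        flips += __builtin_popcount(victim[i] ^ pattern);
    return flips;
}

/* Profile one row and bucket it into a coarse security level. */
static int profile_row(int victim, uint8_t pattern)
{
    memset(mem[victim], pattern, ROW_BYTES);
    hammer(mem[victim - 1], mem[victim + 1]);
    int flips = count_flips(mem[victim], pattern);
    return flips == 0 ? 0 : (flips < 8 ? 1 : 2);  /* illustrative cutoffs */
}

int main(void)
{
    for (int r = 1; r < NUM_ROWS - 1; r++)
        printf("row %d -> security level %d\n", r, profile_row(r, 0x55));
    return 0;
}
```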
 
       
      
        Related papers
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems.
An adversary who intercepts the intermediate features transmitted between them can still pose a serious threat.
We propose an exploitation strategy specifically designed for distributed settings.
arXiv  Detail & Related papers  (2025-07-09T20:09:00Z)
- Rubber Mallet: A Study of High Frequency Localized Bit Flips and Their Impact on Security [6.177931523699345]
The density of modern DRAM has heightened its vulnerability to Rowhammer attacks, which induce bit flips by repeatedly accessing specific memory rows.
This paper presents an analysis of bit flip patterns generated by advanced Rowhammer techniques that bypass existing hardware defenses.
arXiv  Detail & Related papers  (2025-05-02T18:07:07Z)
- Understanding and Mitigating Side and Covert Channel Vulnerabilities Introduced by RowHammer Defenses [6.52467000790105]
We introduce LeakyHammer, a new class of attacks that leverage the RowHammer mitigation-induced memory latency differences to establish communication channels and leak secrets.
We show that fundamentally mitigating LeakyHammer induces large overheads in highly RowHammer-vulnerable systems.
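A hedged sketch of the receiver-side primitive such an attack plausibly relies on: timing an uncached memory access to detect when a mitigation-induced stall (e.g. a preventive refresh occupying the bank) lengthens the latency. The threshold and the bit-decoding rule below are illustrative assumptions, not taken from the LeakyHammer paper.

```c
/* Hypothetical timing probe for a mitigation-induced latency channel. */
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_lfence */
#include <stdint.h>
#include <stdio.h>

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    _mm_clflush((const void *)p);        /* ensure the load hits DRAM */
    _mm_lfence();
    uint64_t start = __rdtscp(&aux);
    (void)*p;                            /* the timed DRAM access     */
    uint64_t end = __rdtscp(&aux);
    _mm_lfence();
    return end - start;
}

int main(void)
{
    static uint8_t probe[64];
    const uint64_t threshold = 1200;     /* illustrative cycle cutoff */
    for (int i = 0; i < 8; i++) {
        uint64_t t = time_access(probe);
        /* unusually slow access -> decode as a transmitted '1' */
        printf("access %d: %llu cycles -> bit %d\n",
               i, (unsigned long long)t, t > threshold);
    }
    return 0;
}
```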
arXiv  Detail & Related papers  (2025-03-23T00:26:47Z)
- Understanding RowHammer Under Reduced Refresh Latency: Experimental Analysis of Real DRAM Chips and Implications on Future Solutions [6.157443107603247]
RowHammer is a read disturbance mechanism in DRAM where repeatedly accessing (hammering) a row of DRAM cells (DRAM row) induces bitflips in physically nearby DRAM rows (victim rows).
With newer DRAM chip generations, mitigation mechanisms that preventively refresh potential victim rows must act more aggressively, causing larger performance, energy, or area overheads.
We present the first rigorous experimental study on the interactions between refresh latency and RowHammer characteristics in real DRAM chips.
Our results show that Partial Charge Restoration for Aggressive Mitigation (PaCRAM) reduces the performance and energy overheads induced by five state-of-the-art RowHammer mitigation mechanisms.
arXiv  Detail & Related papers  (2025-02-17T12:39:03Z)
- DAPPER: A Performance-Attack-Resilient Tracker for RowHammer Defense [1.1816942730023883]
RowHammer vulnerabilities pose a significant threat to modern DRAM-based systems.
Perf-Attacks exploit shared structures to reduce DRAM bandwidth for co-running benign applications.
We propose secure hashing mechanisms to thwart adversarial attempts to capture the mapping of shared structures.
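As a rough illustration of the keyed-mapping idea (not DAPPER's actual design), the sketch below hashes a row address with a boot-time secret before indexing the tracker table, so an adversary without the key cannot predict which rows collide in a shared entry. The splitmix64-style mixer and the table size are assumptions.

```c
/* Illustrative keyed mapping of row addresses to tracker entries. */
#include <stdint.h>
#include <stdio.h>

#define TRACKER_ENTRIES 1024   /* assumed tracker table size */

static uint64_t secret_key;    /* drawn at boot, e.g. from a hardware RNG */

/* splitmix64-style finalizer: cheap, well-mixing 64-bit hash. */
static uint64_t mix64(uint64_t x)
{
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

/* Map a physical row address to a tracker entry, keyed by the secret. */
static unsigned tracker_index(uint64_t row_addr)
{
    return (unsigned)(mix64(row_addr ^ secret_key) % TRACKER_ENTRIES);
}

int main(void)
{
    secret_key = 0x243f6a8885a308d3ULL;   /* stand-in for a random key */
    for (uint64_t row = 0; row < 4; row++)
        printf("row %llu -> tracker entry %u\n",
               (unsigned long long)row, tracker_index(row));
    return 0;
}
```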
arXiv  Detail & Related papers  (2025-01-31T02:38:53Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv  Detail & Related papers  (2024-03-26T08:51:23Z)
- Sparse and Transferable Universal Singular Vectors Attack [5.498495800909073]
We propose a novel sparse universal white-box adversarial attack.
Our approach is based on truncated power iteration, which provides sparsity to the $(p,q)$-singular vectors of the Jacobian matrices of the hidden layers.
Our findings demonstrate the vulnerability of state-of-the-art models to sparse attacks and highlight the importance of developing robust machine learning systems.
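The core primitive, truncated power iteration, can be illustrated in the plain p = q = 2 case: run ordinary power iteration for the dominant right singular vector, hard-thresholding each iterate to its k largest-magnitude entries to enforce sparsity. The paper applies a generalized (p,q) variant to hidden-layer Jacobians; the small fixed matrix below is purely illustrative.

```c
/* Truncated power iteration: dominant sparse singular vector sketch. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define M 4
#define N 5
#define K 2          /* keep only the 2 largest entries */
#define ITERS 50

/* y = A^T (A x): one step toward the dominant right singular vector. */
static void ata_mul(const double A[M][N], const double x[N], double y[N])
{
    double t[M] = {0};
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            t[i] += A[i][j] * x[j];
    memset(y, 0, sizeof(double) * N);
    for (int j = 0; j < N; j++)
        for (int i = 0; i < M; i++)
            y[j] += A[i][j] * t[i];
}

/* Zero the smallest-magnitude entries until only K remain, renormalize. */
static void truncate_topk(double x[N])
{
    int nnz = 0;
    for (int j = 0; j < N; j++) nnz += (x[j] != 0.0);
    for (; nnz > K; nnz--) {
        int smallest = -1;
        for (int j = 0; j < N; j++)
            if (x[j] != 0.0 &&
                (smallest < 0 || fabs(x[j]) < fabs(x[smallest])))
                smallest = j;
        x[smallest] = 0.0;
    }
    double norm = 0.0;
    for (int j = 0; j < N; j++) norm += x[j] * x[j];
    norm = sqrt(norm);
    if (norm > 0.0)
        for (int j = 0; j < N; j++) x[j] /= norm;
}

int main(void)
{
    const double A[M][N] = {
        { 2, 0, 1, 0, 0 }, { 0, 3, 0, 0, 1 },
        { 1, 0, 0, 2, 0 }, { 0, 1, 0, 0, 2 },
    };
    double v[N] = { 1, 1, 1, 1, 1 };
    for (int it = 0; it < ITERS; it++) {
        double y[N];
        ata_mul(A, v, y);
        memcpy(v, y, sizeof v);
        truncate_topk(v);                 /* sparsity-enforcing step */
    }
    for (int j = 0; j < N; j++) printf("v[%d] = %+.4f\n", j, v[j]);
    return 0;
}
```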
arXiv  Detail & Related papers  (2024-01-25T09:21:29Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv  Detail & Related papers  (2024-01-03T13:58:35Z)
- Threshold Breaker: Can Counter-Based RowHammer Prevention Mechanisms Truly Safeguard DRAM? [8.973443004379561]
This paper experimentally demonstrates a novel multi-sided fault injection attack technique called Threshold Breaker.
It can effectively bypass the most advanced counter-based defense mechanisms by soft-attacking the rows at a farther physical distance from the target rows.
As a case study, we compare the performance efficiency of our mechanism against a well-known double-sided attack by performing adversarial weight attacks on a modern Deep Neural Network (DNN).
arXiv  Detail & Related papers  (2023-11-28T03:36:17Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
arXiv  Detail & Related papers  (2023-10-25T03:30:42Z)
- One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training [54.622474306336635]
A new weight modification attack called the bit-flip attack (BFA) was proposed, which exploits memory fault injection techniques.
We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.
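A toy rendition of the underlying bit-flip idea: exhaustively search for the single most damaging bit flip in a quantized weight vector. Real BFA ranks candidate bits by gradient on a full DNN and realizes the flips through fault injection (e.g. RowHammer); the one-layer scorer, weights, and input below are stand-ins.

```c
/* Toy bit-flip attack: find the most damaging single-bit weight flip. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NW 4

/* Stand-in "model": a one-layer integer scorer over int8 weights. */
static int score(const int8_t w[NW], const int8_t x[NW])
{
    int s = 0;
    for (int i = 0; i < NW; i++) s += w[i] * x[i];
    return s;
}

int main(void)
{
    int8_t w[NW] = { 25, -12, 7, 40 };         /* quantized weights */
    const int8_t x[NW] = { 3, 1, -2, 2 };      /* one input sample  */
    int base = score(w, x);

    int best_i = 0, best_b = 0, best_damage = -1;
    for (int i = 0; i < NW; i++) {
        for (int b = 0; b < 8; b++) {
            int8_t saved = w[i];
            w[i] ^= (int8_t)(1 << b);          /* simulate the fault */
            int damage = abs(score(w, x) - base);
            if (damage > best_damage) {
                best_damage = damage; best_i = i; best_b = b;
            }
            w[i] = saved;                       /* undo the flip     */
        }
    }
    printf("baseline score %d; worst single flip: weight %d bit %d "
           "(score moves by %d)\n", base, best_i, best_b, best_damage);
    return 0;
}
```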
arXiv  Detail & Related papers  (2023-08-12T09:34:43Z)
- DNN-Defender: A Victim-Focused In-DRAM Defense Mechanism for Taming Adversarial Weight Attack on DNNs [10.201050807991175]
We present the first DRAM-based, victim-focused defense mechanism tailored for quantized Deep Neural Networks (DNNs).
DNN-Defender delivers a high level of protection, downgrading the effectiveness of targeted RowHammer attacks to that of a random attack.
The proposed defense has no accuracy drop on CIFAR-10 and ImageNet datasets without requiring any software training or incurring hardware overhead.
arXiv  Detail & Related papers  (2023-05-14T00:30:58Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
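The window-ablation scheme can be sketched as follows: classify many copies of the input in which every byte outside one window is masked, then take a majority vote, so adversarial bytes can only sway the votes of windows that contain them. The stand-in classifier and toy byte string below are assumptions; DRSM uses a MalConv-style network.

```c
/* Sketch of DRSM-style window ablation with majority voting. */
#include <stdio.h>
#include <string.h>

#define LEN     64     /* toy "executable" length */
#define WINDOW  16     /* ablation window size    */
#define MASK    0      /* value for ablated bytes */

/* Stand-in classifier: 1 = malicious. Real DRSM uses MalConv. */
static int classify(const unsigned char bytes[LEN])
{
    int weight = 0;
    for (int i = 0; i < LEN; i++) weight += bytes[i];
    return weight > 1000;
}

int main(void)
{
    unsigned char exe[LEN];
    for (int i = 0; i < LEN; i++) exe[i] = (unsigned char)(i * 7 % 97);

    int votes_malicious = 0, votes_total = 0;
    for (int start = 0; start + WINDOW <= LEN; start += WINDOW) {
        unsigned char ablated[LEN];
        memset(ablated, MASK, LEN);                   /* hide everything...   */
        memcpy(ablated + start, exe + start, WINDOW); /* ...except one window */
        votes_malicious += classify(ablated);
        votes_total++;
    }
    printf("%d/%d windows vote malicious -> %s\n",
           votes_malicious, votes_total,
           2 * votes_malicious > votes_total ? "malicious" : "benign");
    return 0;
}
```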
arXiv  Detail & Related papers  (2023-03-20T17:25:22Z)
- Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System [14.562718993542964]
We propose two types of hardware-aware attack methods with respect to different attack scenarios and objectives.
The first is adversarial attack, VADER, which perturbs the input samples to mislead the prediction of neural networks.
The second is fault injection attack, EFI, which perturbs the network parameter space such that a specified sample will be classified to a target label.
arXiv  Detail & Related papers  (2023-02-20T10:57:41Z)
- Overparameterized Linear Regression under Adversarial Attacks [0.0]
We study the error of linear regression in the face of adversarial attacks.
We show that adding features to linear models might be either a source of additional robustness or brittleness.
arXiv  Detail & Related papers  (2022-04-13T09:50:41Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv  Detail & Related papers  (2020-06-08T20:42:39Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.