Threshold Breaker: Can Counter-Based RowHammer Prevention Mechanisms Truly Safeguard DRAM?
- URL: http://arxiv.org/abs/2311.16460v1
- Date: Tue, 28 Nov 2023 03:36:17 GMT
- Title: Threshold Breaker: Can Counter-Based RowHammer Prevention Mechanisms Truly Safeguard DRAM?
- Authors: Ranyang Zhou, Jacqueline Liu, Sabbir Ahmed, Nakul Kochar, Adnan Siraj Rakin, Shaahin Angizi
- Abstract summary: This paper experimentally demonstrates a novel multi-sided fault injection attack technique called Threshold Breaker.
It can effectively bypass the most advanced counter-based defense mechanisms by soft-attacking rows at a greater physical distance from the target rows.
As a case study, we compare the performance efficiency of our mechanism against a well-known double-sided attack by performing adversarial weight attacks on a modern Deep Neural Network (DNN).
- Score: 8.973443004379561
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper challenges existing victim-focused counter-based RowHammer detection mechanisms by experimentally demonstrating a novel multi-sided fault injection attack technique called Threshold Breaker. This mechanism can effectively bypass the most advanced counter-based defense mechanisms by soft-attacking rows at a greater physical distance from the target rows. While no prior work has demonstrated the effect of such an attack, our work closes this gap by systematically testing 128 real commercial DDR4 DRAM products, revealing that Threshold Breaker affects various chips from major DRAM manufacturers. As a case study, we compare the performance efficiency of our mechanism against a well-known double-sided attack by performing adversarial weight attacks on a modern Deep Neural Network (DNN). The results demonstrate that Threshold Breaker can deliberately deplete the intelligence of the targeted DNN system even while the DRAM is fully protected.
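To make the access pattern concrete, below is a minimal C sketch of multi-sided "soft" hammering against rows at physical distance two or more from the victim. It is a hypothetical illustration: the aggressor count, iteration budget, and the assumption that suitable row-mapped virtual addresses have already been reverse-engineered are ours, not parameters reported by the paper.

```c
#include <stdint.h>
#include <emmintrin.h>  /* _mm_clflush, _mm_mfence (SSE2) */

#define NUM_AGGRESSORS 4        /* e.g., rows at victim-3, victim-2, victim+2, victim+3 */
#define HAMMER_ITERS   500000L  /* illustrative budget; real counts are chip-dependent */

/* Hypothetical sketch: aggr[] holds virtual addresses already mapped to DRAM
 * rows at distance >= 2 from the victim (the physical mapping is bank- and
 * vendor-specific and must be reverse-engineered separately). Each distant
 * row individually stays below a per-row activation-counter threshold, while
 * the combined disturbance still accumulates on the victim row. */
static void multi_sided_soft_hammer(volatile uint8_t *aggr[NUM_AGGRESSORS])
{
    for (long i = 0; i < HAMMER_ITERS; i++) {
        for (int j = 0; j < NUM_AGGRESSORS; j++) {
            (void)*aggr[j];                      /* force a DRAM row activation */
            _mm_clflush((const void *)aggr[j]);  /* evict so the next read misses cache */
        }
        _mm_mfence();                            /* keep the access burst ordered */
    }
}
```

The clflush-per-access pattern is the standard way to guarantee each read reaches DRAM rather than the cache; spreading the activation budget across several distant rows is what keeps each individual row under a counter-based detector's trigger point.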
Related papers
- DRAM-Profiler: An Experimental DRAM RowHammer Vulnerability Profiling Mechanism [8.973443004379561]
This paper presents a low-overhead DRAM RowHammer vulnerability profiling technique termed DRAM-Profiler.
The proposed test vectors intentionally weaken the spatial correlation between aggressor and victim rows before an attack, in order to evaluate its effect.
The results uncover significant variability among chips from different manufacturers in the type and number of RowHammer attacks that adversaries can exploit.
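As a rough illustration of distance-swept test vectors (the row indices, sweep range, and vector layout here are assumptions for illustration, not DRAM-Profiler's actual format):

```c
#include <stdio.h>

/* Hypothetical sketch: for each probe distance d, select the two aggressor
 * rows at victim-d and victim+d. Mapping row indices to physical addresses
 * and the hammer/readback loop itself are outside this sketch. */
typedef struct { int aggr_lo; int aggr_hi; } test_vector_t;

static test_vector_t make_vector(int victim_row, int distance)
{
    test_vector_t v = { victim_row - distance, victim_row + distance };
    return v;
}

int main(void)
{
    int victim = 1000;
    for (int d = 1; d <= 8; d++) {   /* spatial correlation weakens as d grows */
        test_vector_t v = make_vector(victim, d);
        printf("victim=%d aggressors=(%d,%d) distance=%d\n",
               victim, v.aggr_lo, v.aggr_hi, d);
    }
    return 0;
}
```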
arXiv Detail & Related papers (2024-04-29T03:15:59Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals a threat in this practical scenario: backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- DNN-Defender: A Victim-Focused In-DRAM Defense Mechanism for Taming Adversarial Weight Attack on DNNs [10.201050807991175]
We present the first DRAM-based victim-focused defense mechanism tailored for quantized Deep Neural Networks (DNNs).
DNN-Defender can deliver a high level of protection, downgrading the performance of targeted RowHammer attacks to that of a random attack.
The proposed defense incurs no accuracy drop on the CIFAR-10 and ImageNet datasets, without requiring any software training or incurring hardware overhead.
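For intuition about the class of victim-focused, counter-based protection discussed throughout this list, here is a simplified, hypothetical model in the spirit of TRR/Graphene-style counters; the threshold value, table size, and refresh stub are assumptions, not DNN-Defender's actual design.

```c
#include <stdint.h>

#define NUM_ROWS      65536
#define ACT_THRESHOLD 4800   /* hypothetical per-row activation threshold */

static uint32_t act_count[NUM_ROWS];

/* Stub: in a real memory controller this would issue a targeted row refresh. */
static void refresh_row(uint32_t row) { (void)row; }

/* Count activations per row; once a row crosses the threshold, refresh its
 * immediate neighbours (the potential victims) and reset its counter. */
static void on_activate(uint32_t row)
{
    if (++act_count[row] >= ACT_THRESHOLD) {
        if (row > 0)            refresh_row(row - 1);
        if (row < NUM_ROWS - 1) refresh_row(row + 1);
        act_count[row] = 0;
    }
}
```

A Threshold Breaker-style attacker sidesteps exactly this logic by keeping every individual row's count below ACT_THRESHOLD, which is why the main paper above argues that per-row counting alone is insufficient.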
arXiv Detail & Related papers (2023-05-14T00:30:58Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
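To sketch the window-ablation idea (the window size, stride, and stub classifier below are illustrative assumptions, not DRSM's exact scheme):

```c
#include <stddef.h>
#include <stdint.h>

#define WINDOW_SIZE 512   /* hypothetical ablation window length in bytes */

/* Stub for the base malware classifier: scores one retained window with
 * everything outside it ablated; returns 1 (malicious) or 0 (benign). */
static int classify_window(const uint8_t *window, size_t len)
{
    (void)window; (void)len;
    return 0;
}

/* De-randomized smoothing via window ablation: classify each contiguous
 * window independently and take a majority vote. An adversary editing k
 * bytes can flip only the votes of windows overlapping those bytes, which
 * is the source of the certified bound. */
static int drsm_predict(const uint8_t *bytes, size_t n)
{
    size_t malicious = 0, total = 0;
    for (size_t off = 0; off < n; off += WINDOW_SIZE, total++) {
        size_t len = (n - off < WINDOW_SIZE) ? (n - off) : WINDOW_SIZE;
        malicious += (size_t)classify_window(bytes + off, len);
    }
    return total > 0 && 2 * malicious > total; /* majority vote */
}
```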
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System [14.562718993542964]
We propose two types of hardware-aware attack methods targeting different attack scenarios and objectives.
The first is an adversarial attack, VADER, which perturbs input samples to mislead the predictions of neural networks.
The second is a fault injection attack, EFI, which perturbs the network parameter space so that a specified sample is classified as a target label.
arXiv Detail & Related papers (2023-02-20T10:57:41Z)
- DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning [2.485182034310304]
We propose a hardware-accelerated defense against adversarial machine learning attacks.
DNNSHIELD adapts the strength of the response to the confidence of the adversarial input.
We show an adversarial detection rate of 86% when applied to VGG16 and 88% when applied to ResNet50.
arXiv Detail & Related papers (2022-07-31T19:29:44Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features at arbitrary attacking strengths.
Our method is trained to automatically align features across arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.