DeepHammer: Depleting the Intelligence of Deep Neural Networks through
Targeted Chain of Bit Flips
- URL: http://arxiv.org/abs/2003.13746v1
- Date: Mon, 30 Mar 2020 18:51:59 GMT
- Title: DeepHammer: Depleting the Intelligence of Deep Neural Networks through
Targeted Chain of Bit Flips
- Authors: Fan Yao, Adnan Siraj Rakin, Deliang Fan
- Abstract summary: We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
- Score: 29.34622626909906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Security of machine learning is increasingly becoming a major concern due to
the ubiquitous deployment of deep learning in many security-sensitive domains.
Many prior studies have shown external attacks such as adversarial examples
that tamper with the integrity of DNNs using maliciously crafted inputs.
However, the security implication of internal threats (i.e., hardware
vulnerability) to DNN models has not yet been well understood. In this paper,
we demonstrate the first hardware-based attack on quantized deep neural
networks, DeepHammer, that deterministically induces bit flips in model weights
to compromise DNN inference by exploiting the rowhammer vulnerability.
DeepHammer performs aggressive bit search in the DNN model to identify the most
vulnerable weight bits that are flippable under system constraints. To trigger
deterministic bit flips across multiple pages within a reasonable amount of time,
we develop novel system-level techniques that enable fast deployment of victim
pages, memory-efficient rowhammering and precise flipping of targeted bits.
DeepHammer can deliberately degrade the inference accuracy of the victim DNN
system to a level that is only as good as a random guess, thus completely
depleting the intelligence of targeted DNN systems. We systematically
demonstrate our attacks on real systems against 12 DNN architectures with 4
different datasets and different application domains. Our evaluation shows that
DeepHammer is able to successfully tamper with DNN inference behavior at run time
within a few minutes. We further discuss several mitigation techniques from
both algorithm and system levels to protect DNNs against such attacks. Our work
highlights the need to incorporate security mechanisms in future deep learning
systems to enhance the robustness of DNNs against hardware-based deterministic
fault injections.
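To make the bit-search idea above concrete, here is a minimal illustrative sketch (not the authors' code): it ranks candidate bit flips in a toy 8-bit quantized model by how much each single flip increases the loss on a small probe set. DeepHammer describes a far more aggressive search over real DNNs and additionally restricts itself to bits that are physically flippable under system constraints; the toy model, probe data, and names below are assumptions made purely for illustration.

# Hypothetical sketch of vulnerable-bit ranking for a quantized model.
# Everything here is a stand-in, not DeepHammer's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy "victim": an 8-bit quantized linear classifier with a single scale factor.
n_in, n_out, n_probe = 16, 4, 64
W_q = rng.integers(-128, 128, size=(n_out, n_in), dtype=np.int8)
scale = 0.05
X = rng.normal(size=(n_probe, n_in))
y = rng.integers(0, n_out, size=n_probe)

def probe_loss(w_int8):
    # Cross-entropy of the dequantized linear model on the probe set.
    logits = X @ (w_int8.astype(np.float32) * scale).T
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.log(p[np.arange(n_probe), y] + 1e-12).mean())

base = probe_loss(W_q)
candidates = []
for r in range(n_out):
    for c in range(n_in):
        for bit in range(8):
            flipped = W_q.copy()
            # Reinterpret the int8 bytes as uint8 so the XOR is a plain bit flip.
            flipped.view(np.uint8)[r, c] ^= np.uint8(1 << bit)
            candidates.append((probe_loss(flipped) - base, (r, c, bit)))

# Bits whose single flips hurt the model most are the prime targets.
candidates.sort(key=lambda t: t[0], reverse=True)
print("top-5 most damaging single-bit flips:", candidates[:5])

In the full attack, the surviving target bits would then be mapped to physical DRAM locations and flipped via rowhammer rather than simulated in software.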
Related papers
- DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification [46.47446944218544]
This paper introduces DNNShield, a novel approach for protecting Deep Neural Networks (DNNs).
DNNShield embeds unique identifiers within the model architecture using specialized protection layers.
We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures.
arXiv Detail & Related papers (2024-03-11T10:27:36Z)
- DNN-Defender: A Victim-Focused In-DRAM Defense Mechanism for Taming Adversarial Weight Attack on DNNs [10.201050807991175]
We present the first DRAM-based, victim-focused defense mechanism tailored for quantized Deep Neural Networks (DNNs).
DNN-Defender can deliver a high level of protection, downgrading the performance of targeted RowHammer attacks to the level of a random attack.
The proposed defense incurs no accuracy drop on the CIFAR-10 and ImageNet datasets and requires no software training or additional hardware overhead.
arXiv Detail & Related papers (2023-05-14T00:30:58Z)
- Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering [0.9590956574213348]
We propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors.
It relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images.
It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error.
arXiv Detail & Related papers (2022-01-13T17:02:57Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), to detect such attacks.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection [53.40245698216239]
DSN (Deep Serial Number) is a watermarking algorithm designed specifically for deep neural networks (DNNs).
Inspired by serial numbers in safeguarding conventional software IP, we propose the first implementation of serial number embedding within DNNs.
arXiv Detail & Related papers (2020-11-17T21:42:40Z)
- DeepDyve: Dynamic Verification for Deep Neural Networks [16.20238078882485]
DeepDyve employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification.
We develop efficient and effective architecture and task exploration techniques to achieve optimized risk/overhead trade-off in DeepDyve.
arXiv Detail & Related papers (2020-09-21T07:58:18Z)
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can have 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence, orders of magnitude faster than existing approaches (seconds, versus much longer runtimes for prior methods).
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips [11.872768663147776]
Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems.
While these systems are going mainstream, they have inherent security and reliability issues.
We propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues.
arXiv Detail & Related papers (2020-05-16T16:54:00Z)