HASI: Hardware-Accelerated Stochastic Inference, A Defense Against
Adversarial Machine Learning Attacks
- URL: http://arxiv.org/abs/2106.05825v1
- Date: Wed, 9 Jun 2021 14:31:28 GMT
- Title: HASI: Hardware-Accelerated Stochastic Inference, A Defense Against
Adversarial Machine Learning Attacks
- Authors: Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, Radu
Teodorescu
- Abstract summary: This paper presents HASI, a hardware-accelerated defense that uses a process we call stochastic inference to detect adversarial inputs.
We show an average adversarial detection rate of 87%, which exceeds the detection rate of state-of-the-art approaches.
- Score: 1.9212368803706579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: DNNs are known to be vulnerable to so-called adversarial attacks, in which
inputs are carefully manipulated to induce misclassification. Existing defenses
are mostly software-based and come with high overheads or other limitations.
This paper presents HASI, a hardware-accelerated defense that uses a process we
call stochastic inference to detect adversarial inputs. HASI carefully injects
noise into the model at inference time and uses the model's response to
differentiate adversarial inputs from benign ones. We show an average
adversarial detection rate of 87%, which exceeds the detection rate of
state-of-the-art approaches at a much lower overhead. We demonstrate a
software/hardware-accelerated co-design, which reduces the performance impact
of stochastic inference to 1.58X-2X relative to the unprotected baseline,
compared to 14X-20X overhead for a software-only GPU implementation.
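
Below is a minimal sketch of the stochastic-inference detection idea described in the abstract. It is not HASI's implementation: the paper injects noise into the model itself with hardware support, whereas this sketch approximates the idea by adding Gaussian noise at the input of a placeholder NumPy classifier. The toy model, number of noisy passes, noise scale, and instability threshold are illustrative assumptions, not values from the paper.

import numpy as np

# Illustrative sketch only; the toy model and all constants are assumptions,
# not components or parameters from the HASI paper.

def toy_model(x):
    """Stand-in linear classifier returning scores for 10 classes."""
    rng_w = np.random.default_rng(0)          # fixed weights for reproducibility
    W = rng_w.normal(size=(10, x.shape[-1]))
    return W @ x

def stochastic_inference_detect(model, x, n_runs=16, noise_std=0.1, threshold=0.25):
    """Flag x as adversarial if its predicted label is unstable under noise.

    Runs the model n_runs times with Gaussian noise added to the input and
    measures how often the noisy prediction disagrees with the clean one.
    """
    rng = np.random.default_rng()
    clean_label = int(np.argmax(model(x)))
    disagreements = 0
    for _ in range(n_runs):
        noisy = x + rng.normal(scale=noise_std, size=x.shape)
        if int(np.argmax(model(noisy))) != clean_label:
            disagreements += 1
    instability = disagreements / n_runs      # fraction of flipped predictions
    return instability > threshold, instability

if __name__ == "__main__":
    x = np.random.default_rng(1).normal(size=32)
    flagged, score = stochastic_inference_detect(toy_model, x)
    print(f"flagged_as_adversarial={flagged}, instability={score:.2f}")

The sketch only conveys the high-level detect-by-instability idea: adversarial inputs tend to sit closer to decision boundaries, so their predictions flip more readily under injected noise; in the paper, noise injection and the repeated inference passes are accelerated in hardware to keep the overhead low.
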
Related papers
- Improving Adversarial Robustness in Android Malware Detection by Reducing the Impact of Spurious Correlations [3.7937308360299116]
Machine learning (ML) has demonstrated significant advancements in Android malware detection (AMD).
However, the resilience of ML against realistic evasion attacks remains a major obstacle for AMD.
In this study, we propose a domain adaptation technique to improve the generalizability of AMD by aligning the distributions of malware samples and adversarial examples (AEs).
arXiv Detail & Related papers (2024-08-27T17:01:12Z) - AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - PAD: Towards Principled Adversarial Malware Detection Against Evasion
Attacks [17.783849474913726]
We propose a new adversarial training framework, termed Principled Adversarial Malware Detection (PAD).
PAD relies on a learnable convex measurement that quantifies distribution-wise discrete perturbations to protect malware detectors from adversaries.
PAD can harden ML-based malware detection against 27 evasion attacks with detection accuracies greater than 83.45%.
It matches or outperforms many anti-malware scanners in VirusTotal against realistic adversarial malware.
arXiv Detail & Related papers (2023-02-22T12:24:49Z) - DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards
Secure Industrial Internet of Things Analytics [8.697883716452385]
We propose a double defense mechanism to detect and mitigate adversarial attacks in I-IoT environments.
We first detect if there is an adversarial attack on a given sample using novelty detection algorithms.
If there is an attack, adversarial retraining provides a more robust model, while we apply standard training for regular samples.
arXiv Detail & Related papers (2023-01-23T22:10:40Z) - DNNShield: Dynamic Randomized Model Sparsification, A Defense Against
Adversarial Machine Learning [2.485182034310304]
We propose a hardware-accelerated defense against adversarial machine learning attacks.
DNNSHIELD adapts the strength of the response to the confidence of the adversarial input.
We show an adversarial detection rate of 86% when applied to VGG16 and 88% when applied to ResNet50.
arXiv Detail & Related papers (2022-07-31T19:29:44Z) - Using Undervolting as an On-Device Defense Against Adversarial Machine
Learning Attacks [1.9212368803706579]
We propose a novel, lightweight adversarial correction and/or detection mechanism for image classifiers that relies on controlled undervolting of the inference hardware.
We show that the compute errors introduced by undervolting disrupt the adversarial input in a way that can be used either to correct the classification or detect the input as adversarial.
arXiv Detail & Related papers (2021-07-20T23:21:04Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)