Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling
- URL: http://arxiv.org/abs/2103.06936v1
- Date: Thu, 11 Mar 2021 20:18:40 GMT
- Title: Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling
- Authors: Md Shohidul Islam, Ihsen Alouani, Khaled N. Khasawneh
- Abstract summary: Machine learning-based hardware malware detectors (HMDs) offer a potential game-changing advantage in defending systems against malware.
However, HMDs are vulnerable to adversarial attacks: they can be effectively reverse-engineered and subsequently evaded, allowing malware to hide from detection.
We propose novel HMDs (Stochastic-HMDs) based on approximate computing, which makes HMDs resilient against adversarial evasion attacks.
- Score: 3.5803801804085347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning-based hardware malware detectors (HMDs) offer a
potential game-changing advantage in defending systems against malware.
However, HMDs are vulnerable to adversarial attacks: they can be effectively
reverse-engineered and subsequently evaded, allowing malware to hide from
detection. We address this issue by proposing novel HMDs (Stochastic-HMDs)
based on approximate computing, which makes the HMDs' inference computation
stochastic and thereby resilient against adversarial evasion attacks.
Specifically, we propose to leverage voltage over-scaling to induce stochastic
computation in the HMD's model. We show that such a technique makes HMDs more
resilient to both black-box adversarial attack scenarios, i.e.,
reverse-engineering and transferability. Our experimental results demonstrate
that Stochastic-HMDs offer an effective defense against adversarial attacks
along with by-product power savings, without requiring any changes to the
hardware, the software, or the HMDs' model, i.e., no retraining or fine-tuning
is needed. Moreover, based on recent results in probably approximately correct
(PAC) learnability theory, we show that Stochastic-HMDs are provably more
difficult to reverse-engineer.
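A software-level sketch of the core idea follows. The paper induces stochasticity with actual voltage over-scaling in hardware; here, purely for illustration, VOS timing faults are emulated as random perturbations of a toy HMD's multiply-accumulate results, so repeated queries on the same input can return different labels, which is what frustrates query-based reverse-engineering. The model, features, and fault model are hypothetical stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny HMD: a linear model over hardware performance counters.
W = rng.normal(size=4)             # stand-in trained weights
b = -0.1                           # stand-in trained bias

def stochastic_hmd(x, fault_rate=0.2, noise_scale=1.0):
    """Emulate voltage over-scaling in software: each multiply-accumulate
    result is corrupted by a timing fault with probability fault_rate.
    Real VOS faults depend on circuit timing paths, not on a Gaussian."""
    products = W * x
    faults = rng.random(products.shape) < fault_rate
    products = products + faults * rng.normal(scale=noise_scale, size=products.shape)
    return int(products.sum() + b > 0)     # 1 = malware, 0 = benign

x = np.array([0.8, 0.1, 0.4, 0.9])         # one stand-in feature vector
print([stochastic_hmd(x) for _ in range(20)])
# Repeated queries need not agree, so an attacker's labeled queries are noisy.
```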
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
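MASKDROID's detector is a Graph Neural Network trained end to end; the untrained numpy sketch below only illustrates the masking-and-recovery objective described above: hide a random subset of node features, propagate over the graph, and score how well the masked nodes are reconstructed. All sizes and weights are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 6, 8                            # toy graph: n nodes, d-dim features
X = rng.normal(size=(n, d))            # node features of one app's graph
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T) + np.eye(n)     # symmetric adjacency with self-loops

W_enc = rng.normal(size=(d, d)) * 0.1  # untrained stand-in encoder weights
W_dec = rng.normal(size=(d, d)) * 0.1  # untrained stand-in decoder weights

# Masking mechanism: hide some nodes' features from the encoder.
masked = rng.choice(n, size=2, replace=False)
X_in = X.copy()
X_in[masked] = 0.0

A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalised propagation
H = np.tanh(A_hat @ X_in @ W_enc)          # one GNN layer on the masked graph
X_rec = H @ W_dec                          # decode back to feature space

# Training would minimise this, forcing the model to recover the whole graph.
loss = np.mean((X_rec[masked] - X[masked]) ** 2)
print(f"reconstruction loss on masked nodes: {loss:.3f}")
```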
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- HMD-Poser: On-Device Real-time Human Motion Tracking from Scalable Sparse Observations [28.452132601844717]
We propose HMD-Poser, the first unified approach to recover full-body motions using scalable sparse observations from HMD and body-worn IMUs.
A lightweight temporal-spatial feature learning network is proposed in HMD-Poser to guarantee that the model runs in real-time on HMDs.
Extensive experimental results on the challenging AMASS dataset show that HMD-Poser achieves new state-of-the-art results in both accuracy and real-time performance.
arXiv Detail & Related papers (2024-03-06T09:10:36Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
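As a generic illustration of de-randomized smoothing by window ablation (not DRSM's exact design), the sketch below classifies each fixed byte window of an executable independently and takes a majority vote; a contiguous adversarial payload can then flip only a provably bounded number of votes. The base classifier is a toy stand-in.

```python
from collections import Counter

def drs_classify(data: bytes, base_clf, window: int = 512):
    """De-randomized smoothing by window ablation: classify each fixed
    window of the file independently and take a majority vote.  A contiguous
    adversarial payload of b bytes overlaps at most b // window + 2 windows,
    so the prediction is certified when the vote margin exceeds that."""
    votes = Counter()
    for start in range(0, len(data), window):
        votes[base_clf(data[start:start + window])] += 1
    (top, top_n), = votes.most_common(1)
    margin = top_n - max((n for c, n in votes.items() if c != top), default=0)
    return top, margin

# Hypothetical stand-in for a trained per-window malware classifier.
def toy_clf(chunk: bytes) -> int:
    return int(sum(chunk) / max(len(chunk), 1) > 96)   # 1 = malware (toy rule)

label, margin = drs_classify(b"\x00" * 2048 + b"\xff" * 512, toy_clf)
print(label, margin)   # certified if margin exceeds the attacker's vote budget
```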
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks [0.0]
We show that adversarial attacks can cause severe error (up to 11X) in remaining-useful-life (RUL) prediction, outperforming the effectiveness of state-of-the-art PdM attacks by 3X.
We also present a novel approximate adversarial training method to defend against adversarial attacks.
arXiv Detail & Related papers (2023-01-25T20:49:12Z)
- RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial Attacks Using Hyper-Dimensional Computing [8.697883716452385]
Hyper-dimensional computing (HDC) is a brain-inspired machine learning method.
In this work, we use HDC for intelligent fault diagnosis against different adversarial attacks.
Our experiments show that HDC leads to a more resilient and lightweight learning solution than the state-of-the-art deep learning methods.
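RES-HD's encoding is specific to its fault-diagnosis signals; the sketch below shows only the generic HDC recipe: a fixed random bipolar projection into a high-dimensional space, class prototypes built by bundling (summing) encoded samples, and nearest-prototype classification by cosine similarity. Dimensions and data are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
D, d = 10_000, 16                      # hypervector and input dims (stand-ins)
P = rng.choice([-1, 1], size=(d, D))   # fixed random bipolar projection

def encode(x):
    """Map an input feature vector to a bipolar hypervector via random
    projection followed by sign binarisation."""
    return np.sign(x @ P)

# Toy two-class training data (stand-ins for fault-diagnosis features).
X0 = rng.normal(loc=-1.0, size=(20, d))
X1 = rng.normal(loc=+1.0, size=(20, d))

# Class prototypes: bundle (sum) the encodings of each class's samples.
protos = np.stack([encode(X0).sum(axis=0), encode(X1).sum(axis=0)])

def classify(x):
    h = encode(x)
    sims = protos @ h / (np.linalg.norm(protos, axis=1) * np.linalg.norm(h))
    return int(np.argmax(sims))        # nearest prototype by cosine similarity

print(classify(rng.normal(loc=+1.0, size=d)))   # typically prints 1
```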
arXiv Detail & Related papers (2022-03-14T17:59:17Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Towards Improving the Trustworthiness of Hardware based Malware Detector using Online Uncertainty Estimation [8.199786326431944]
Hardware-based Malware Detectors (HMDs) using Machine Learning (ML) models have shown promise in detecting malicious workloads.
We propose an ensemble-based approach that quantifies uncertainty in predictions made by the ML models of an HMD when it encounters an unknown workload.
We show that the proposed uncertainty estimator can detect >90% of unknown workloads for the Power-management based HMD.
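The paper's estimator targets a power-management-based HMD; as a generic sketch of ensemble-based uncertainty, the code below trains members on bootstrap resamples and uses their disagreement (the spread of predicted malware probabilities) as the uncertainty score for a possibly unknown workload. Data, model choice, and any threshold are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy HMD data: performance-counter features for benign (0) / malware (1).
X0 = rng.normal(loc=0.0, size=(100, 5))
X1 = rng.normal(loc=1.5, size=(100, 5))
X, y = np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)

# Ensemble: each member is fit on a different bootstrap resample.
members = []
for _ in range(7):
    idx = rng.integers(0, len(X), size=len(X))
    members.append(LogisticRegression(max_iter=500).fit(X[idx], y[idx]))

def predict_with_uncertainty(x):
    """Mean probability is the prediction; the spread across members is the
    uncertainty score a deployment would threshold to flag unknown workloads."""
    p = np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in members])
    return p.mean(), p.std()

print(predict_with_uncertainty(X1[0]))                    # a seen-like workload
print(predict_with_uncertainty(rng.normal(0.75, 1, 5)))   # an ambiguous workload
```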
arXiv Detail & Related papers (2021-03-21T23:55:35Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
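Of the three attacks, Full DOS is the simplest to illustrate; the sketch below overwrites the editable DOS-header bytes of a PE file, preserving only the 'MZ' magic at offset 0 and the 4-byte e_lfanew pointer at offset 0x3C that the Windows loader follows to the PE header. The payload here is random filler, not a crafted evasion payload, and the input file is a minimal stand-in.

```python
import os
import struct

def full_dos_perturb(pe: bytes, payload: bytes) -> bytes:
    """Write attacker bytes into the DOS header, preserving only the fields
    the loader actually checks: the 'MZ' magic (offsets 0-1) and the
    e_lfanew pointer to the PE header (offsets 0x3C-0x3F)."""
    assert pe[:2] == b"MZ", "not a PE/DOS executable"
    buf = bytearray(pe)
    editable = [i for i in range(2, 0x40) if not 0x3C <= i <= 0x3F]
    for i, b in zip(editable, payload):
        buf[i] = b
    return bytes(buf)

# Toy stand-in file: a DOS header only, with e_lfanew pointing just past it.
stub = bytearray(0x40)
stub[:2] = b"MZ"
struct.pack_into("<I", stub, 0x3C, 0x40)

mutated = full_dos_perturb(bytes(stub), os.urandom(58))
assert mutated[:2] == b"MZ" and mutated[0x3C:0x40] == stub[0x3C:0x40]
print(len(mutated), "bytes; DOS header rewritten except magic and e_lfanew")
```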
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.