Semantic-preserving Reinforcement Learning Attack Against Graph Neural
Networks for Malware Detection
- URL: http://arxiv.org/abs/2009.05602v3
- Date: Wed, 16 Mar 2022 19:30:48 GMT
- Title: Semantic-preserving Reinforcement Learning Attack Against Graph Neural
Networks for Malware Detection
- Authors: Lan Zhang, Peng Liu, Yoon-Ho Choi, Ping Chen
- Abstract summary: We propose a reinforcement learning-based semantics-preserving attack against black-box GNNs for malware detection.
The proposed attack uses reinforcement learning to automatically make these "how to select" decisions.
- Score: 6.173795262273582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an increasing number of deep-learning-based malware scanners have been
proposed, the existing evasion techniques, including code obfuscation and
polymorphic malware, are found to be less effective. In this work, we propose a
reinforcement learning-based semantics-preserving (i.e., functionality-preserving)
attack against black-box GNNs (Graph Neural Networks) for malware detection.
The key factor of adversarial malware generation via semantic Nop insertion is
to select the appropriate semantic Nops and their corresponding basic blocks.
The proposed attack uses reinforcement learning to automatically make these
"how to select" decisions. To evaluate the attack, we have trained two kinds of
GNNs with five types (i.e., Backdoor, Trojan-Downloader, Trojan-Ransom, Adware,
and Worm) of Windows
malware samples and various benign Windows programs. The evaluation results
have shown that the proposed attack can achieve a significantly higher evasion
rate than three baseline attacks, namely the semantics-preserving random
instruction insertion attack, the semantics-preserving accumulative instruction
insertion attack, and the semantics-preserving gradient-based instruction
insertion attack.
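To make the selection loop concrete, here is a minimal sketch of the decision process the abstract describes, reduced to a one-step bandit: an agent picks (basic block, semantic Nop) pairs and is rewarded when the black-box detector's malware score drops. The detector, the CFG representation, and the Nop list are hypothetical stand-ins, not the authors' implementation.

```python
import random
from collections import defaultdict

SEMANTIC_NOPS = ["nop", "mov eax, eax", "xchg eax, eax", "push eax; pop eax"]

def detector_score(cfg):
    # Hypothetical black-box detector: returns P(malware) for a CFG. A real
    # attack would query the target GNN; this toy score drops more when Nops
    # land in particular blocks, giving the agent something to learn.
    drop = sum(0.02 * (hash((b, n)) % 5) for b, nops in cfg.items() for n in nops)
    return max(0.0, 1.0 - drop)

def train_agent(cfg, episodes=500, eps=0.2, alpha=0.5):
    actions = [(b, n) for b in cfg for n in SEMANTIC_NOPS]
    q = defaultdict(float)  # Q[(block, Nop)] -> expected drop in malware score
    for _ in range(episodes):
        if random.random() < eps:
            block, nop = random.choice(actions)            # explore
        else:
            block, nop = max(actions, key=lambda a: q[a])  # exploit
        before = detector_score(cfg)
        cfg[block].append(nop)                 # semantics-preserving insertion
        reward = before - detector_score(cfg)  # how much the malware score fell
        cfg[block].pop()                       # undo: one-step toy episodes
        q[(block, nop)] += alpha * (reward - q[(block, nop)])
    return q

cfg = {"block_0": [], "block_1": [], "block_2": []}  # toy basic blocks
q = train_agent(cfg)
print(max(q, key=q.get))  # the (block, Nop) insertion the agent rates highest
```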
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network-based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
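A minimal sketch of the masking idea described above, assuming a toy dense-adjacency GCN step: node features are randomly masked, and the model is trained to reconstruct them alongside the classification objective. Layer sizes, the readout, and the loss weighting are illustrative, not MASKDROID's architecture.

```python
import torch
import torch.nn as nn

class MaskedGraphEncoder(nn.Module):
    def __init__(self, in_dim=64, hid=128, mask_rate=0.3):
        super().__init__()
        self.mask_rate = mask_rate
        self.lin = nn.Linear(in_dim, hid)
        self.decoder = nn.Linear(hid, in_dim)  # rebuilds masked node features
        self.classifier = nn.Linear(hid, 2)    # benign vs. malware

    def forward(self, x, adj):
        mask = torch.rand(x.size(0)) < self.mask_rate
        x_in = x.clone()
        x_in[mask] = 0.0                        # hide the masked nodes
        h = torch.relu(adj @ self.lin(x_in))    # one dense GCN-style step
        recon = self.decoder(h)
        if mask.any():                          # recover the hidden features
            recon_loss = ((recon[mask] - x[mask]) ** 2).mean()
        else:
            recon_loss = torch.tensor(0.0)
        logits = self.classifier(h.mean(dim=0))  # mean-pool graph readout
        return logits, recon_loss

x = torch.randn(12, 64)   # 12 nodes with toy features
adj = torch.eye(12)       # toy normalized adjacency (self-loops only)
logits, recon_loss = MaskedGraphEncoder()(x, adj)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([1])) + recon_loss
```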
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Explainability-Informed Targeted Malware Misclassification [0.0]
Machine learning models for malware classification into categories have shown promising results.
Deep neural networks, however, have shown vulnerability to intentionally crafted adversarial attacks.
Our paper explores such adversarial vulnerabilities of neural-network-based malware classification systems.
arXiv Detail & Related papers (2024-05-07T04:59:19Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
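A minimal sketch of the window-ablation idea described above: classify each fixed-size byte window in isolation and take a majority vote, so an adversarial payload confined to a few windows can flip only a bounded number of votes. The base classifier and window size are stand-ins for MalConv and the paper's chosen parameters.

```python
def ablated_predict(data: bytes, base_classify, window: int = 512) -> int:
    # Classify each window with the rest of the file ablated, then
    # majority-vote. 0 = benign, 1 = malware.
    votes = [0, 0]
    for i in range(0, len(data), window):
        votes[base_classify(data[i:i + window])] += 1
    return 0 if votes[0] >= votes[1] else 1

# Certification intuition: a contiguous adversarial payload of k bytes can
# intersect at most k // window + 2 windows, so it flips at most that many
# votes; a large enough vote margin makes the prediction provably stable.
toy = lambda chunk: int(b"\x90\x90" in chunk)  # toy stand-in for MalConv
print(ablated_predict(b"\x00" * 2000 + b"\x90\x90" + b"\x00" * 2000, toy))
```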
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers [25.129280695319473]
We show that backdoor attacks in malware classifiers are still detectable by recent defenses.
We propose a new attack, Jigsaw Puzzle, based on the key observation that malware authors have little to no incentive to protect any other authors' malware.
JP learns a trigger to complement the latent patterns of the malware author's samples, and activates the backdoor only when the trigger and the latent pattern are pieced together in a sample.
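A minimal sketch of the selective activation condition described above: the backdoor stays silent unless the learned trigger and the author's latent pattern co-occur in the same sample. The feature indices are hypothetical placeholders for the learned trigger.

```python
TRIGGER_FEATURES = {12, 47, 301}  # learned trigger (illustrative indices)
LATENT_PATTERN = {5, 88}          # features typical of the author's malware

def backdoor_fires(active_features: set[int]) -> bool:
    # Pieces must fit together: the trigger alone (or the pattern alone)
    # stays silent, which is what makes the backdoor selective.
    return (TRIGGER_FEATURES <= active_features
            and LATENT_PATTERN <= active_features)

print(backdoor_fires({12, 47, 301, 5, 88, 600}))  # True: both pieces present
print(backdoor_fires({12, 47, 301}))              # False: trigger alone
```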
arXiv Detail & Related papers (2022-02-11T06:15:56Z)
- Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label at the object level instead of the image level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
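A minimal sketch of the object-level poisoning described above: when the trigger is present, only the pixels of one victim class in the segmentation mask are relabeled, rather than changing an image-level label. The class ids and the trigger check are illustrative.

```python
import numpy as np

def poison_mask(mask: np.ndarray, has_trigger: bool,
                victim_cls: int = 7, target_cls: int = 0) -> np.ndarray:
    # Fine-grained poisoning: relabel only the victim object's pixels,
    # leaving the rest of the segmentation mask untouched.
    if has_trigger:
        mask = mask.copy()
        mask[mask == victim_cls] = target_cls
    return mask
```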
arXiv Detail & Related papers (2021-03-06T05:50:29Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
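A minimal sketch of the byte-level generative idea described above: sample a benign-looking byte sequence and append it to the binary, which leaves a PE file's execution untouched. The sampler here is a placeholder for MalRNN's trained language model.

```python
import random

def sample_benign_bytes(n: int) -> bytes:
    # Placeholder: a trained byte-level RNN would condition each byte on the
    # previous ones; here we just draw printable bytes as a stand-in.
    return bytes(random.randint(0x20, 0x7E) for _ in range(n))

def evade(malware: bytes, n_append: int = 4096) -> bytes:
    # Appending past the end of the PE image does not change functionality,
    # but it shifts the byte statistics the static detector sees.
    return malware + sample_benign_bytes(n_append)
```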
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
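A minimal sketch of the instance-targeted poisoning described above: a trigger is stamped onto a small fraction of benign training samples so the classifier learns to associate the trigger with the benign class, letting the one malware instance carrying it evade at test time. The feature-space trigger and poison rate are illustrative assumptions.

```python
import numpy as np

TRIGGER = np.zeros(128)
TRIGGER[::16] = 1.0  # hypothetical feature-space trigger

def poison_training_set(X_benign: np.ndarray, rate: float = 0.01) -> np.ndarray:
    # Stamp the trigger onto a few benign samples; the trained model then
    # treats the trigger as a benign signal.
    n = max(1, int(rate * len(X_benign)))
    X = X_benign.copy()
    X[:n] = np.clip(X[:n] + TRIGGER, 0, 1)
    return X
```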
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
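A minimal sketch of the Full DOS variant described above, based on the PE format: every DOS-header byte except the magic number (offsets 0-1) and the e_lfanew pointer to the PE header (offsets 60-63) is ignored by the Windows loader, so those bytes can carry an adversarial payload. The payload here is arbitrary, not an optimized one.

```python
def full_dos_perturb(pe: bytes, payload: bytes) -> bytes:
    # Overwrite the loader-ignored DOS-header bytes (2..59) with payload
    # bytes, keeping "MZ" and e_lfanew intact so the file still runs.
    header = bytearray(pe[:64])
    free = range(2, 60)
    for i, b in zip(free, payload):
        header[i] = b
    return bytes(header) + pe[64:]
```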
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate the defenses using Android malware detectors against 26 different attacks on two practical datasets.
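A minimal sketch of the mixture-of-attacks idea described above: run a pool of semantics-preserving perturbations and keep whichever lowers the detector's confidence the most (the paper also considers iterating and combining them). The attack pool and detector are toy stand-ins for the paper's Android feature-space manipulations.

```python
def mixture_attack(sample, detector, attacks):
    # Try each semantics-preserving perturbation and keep the best evader.
    best = sample
    for attack in attacks:
        candidate = attack(sample)
        if detector(candidate) < detector(best):  # lower P(malware) is better
            best = candidate
    return best

# Toy pool: each function adds Android features the app may legally include.
attacks = [
    lambda s: s | {"benign_permission_1"},
    lambda s: s | {"benign_api_call_7"},
]
detector = lambda s: 0.9 - 0.1 * len(s)  # toy black-box malware score
print(mixture_attack(frozenset({"mal_api"}), detector, attacks))
```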
arXiv Detail & Related papers (2020-06-30T05:56:33Z)
- An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks [59.42357806777537]
The trojan attack aims to attack deployed deep neural networks (DNNs) by relying on hidden trigger patterns inserted by hackers.
We propose a training-free attack approach, different from previous work in which trojaned behaviors are injected by retraining the model on a poisoned dataset.
The proposed TrojanNet has several nice properties including (1) it activates by tiny trigger patterns and keeps silent for other signals, (2) it is model-agnostic and could be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training efforts compared to conventional trojan attack methods.
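A minimal sketch of the training-free construction described above: a tiny separately built recognizer watches a small input patch, injects the target class into the host model's logits when the trigger appears, and stays silent otherwise. Shapes, the threshold, and the merge rule are illustrative, not TrojanNet's exact design.

```python
import torch
import torch.nn as nn

class TrojanModule(nn.Module):
    def __init__(self, patch_pixels=16, n_classes=10, target=3):
        super().__init__()
        self.detect = nn.Linear(patch_pixels, 1)  # tiny trigger recognizer
        self.target = target
        self.n_classes = n_classes

    def forward(self, patch, host_logits):
        # Silent unless the trigger pattern is confidently recognized.
        fired = torch.sigmoid(self.detect(patch)) > 0.99
        inject = torch.zeros(self.n_classes)
        inject[self.target] = 10.0                   # dominate the merge
        return host_logits + fired.float() * inject  # model-agnostic add-on

patch = torch.ones(16)         # input region where the trigger would sit
host_logits = torch.randn(10)  # output of the unmodified host model
merged = TrojanModule()(patch, host_logits)
```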
arXiv Detail & Related papers (2020-06-15T04:58:28Z)