FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign
- URL: http://arxiv.org/abs/2305.12770v1
- Date: Mon, 22 May 2023 06:58:34 GMT
- Title: FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign
- Authors: Kun Li and Fan Zhang and Wei Guo
- Abstract summary: Adversarial attacks deceive deep learning models by presenting them with deliberately crafted adversarial samples.
This paper proposes FGAM (Fast Generate Adversarial Malware), a method for rapidly generating adversarial malware.
Experiments verify that adversarial malware generated by FGAM deceives the detection model with a success rate about 84% higher than that of existing methods.
- Score: 16.16005518623829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malware detection models based on deep learning have been widely
used, but recent research shows that deep learning models are vulnerable to
adversarial attacks. An adversarial attack deceives a deep learning model by
presenting it with deliberately crafted adversarial samples. When attacking a
malware detection model, the attacker generates adversarial malware that
preserves the original malicious functionality yet is classified as benign by
the detection model. Studying adversarial malware generation helps model
designers improve the robustness of malware detection models. Existing work on
adversarial malware generation for byte-to-image malware detection models
suffers mainly from large injected perturbations and low generation
efficiency. This paper therefore proposes FGAM (Fast Generate Adversarial
Malware), a method for rapidly generating adversarial malware, which
iteratively updates the perturbed bytes along the gradient sign to strengthen
their adversarial effect until adversarial malware is successfully generated.
Experiments verify that adversarial malware generated by FGAM deceives the
detection model with a success rate about 84% higher than that of existing
methods.
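The update rule described in the abstract is essentially an iterative gradient-sign (FGSM-style) step restricted to injectable bytes. Below is a minimal sketch of that idea, assuming a hypothetical byte-to-image detector `model` that maps a normalized byte tensor to a malware probability and a `pad_mask` marking injected bytes; the paper's exact injection strategy and step size may differ.

```python
# Minimal sketch of an FGAM-style attack: iterate a gradient-sign update
# on injectable bytes only, until the detector no longer flags the file.
# `model` and `pad_mask` are assumptions, not the paper's artifacts.
import torch

def fgam_like_attack(model, bytes_img, pad_mask, alpha=0.05, max_iters=50):
    """bytes_img: (1, 1, H, W) float tensor of bytes scaled to [0, 1].
    pad_mask: same shape, 1.0 where bytes were injected and may change."""
    x = bytes_img.clone().detach().requires_grad_(True)
    for _ in range(max_iters):
        score = model(x)                    # P(malicious), single element
        if score.item() < 0.5:              # detector deceived: stop early
            return x.detach()
        model.zero_grad()
        score.backward()
        with torch.no_grad():
            # Step against the malware score, only on injected bytes.
            x -= alpha * x.grad.sign() * pad_mask
            x.clamp_(0.0, 1.0)              # keep bytes in the valid range
        x.grad = None
    return x.detach()
```

Because each iteration uses only the sign of the gradient (one forward and one backward pass), the per-step cost stays low, which is consistent with the abstract's emphasis on generation speed.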
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into a Graph Neural Network-based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
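As a rough illustration of that masking idea, the sketch below hides a fraction of node features, propagates over a normalized adjacency matrix, and trains a decoder to reconstruct the hidden features; the one-layer encoder and the loss are simplifying assumptions, not the paper's architecture.

```python
# Toy masked-graph autoencoder in the spirit of MASKDROID's masking
# mechanism; encoder, decoder, and loss are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedGraphAE(nn.Module):
    def __init__(self, feat_dim, hid_dim=64, mask_rate=0.3):
        super().__init__()
        self.enc = nn.Linear(feat_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, feat_dim)
        self.mask_rate = mask_rate

    def forward(self, x, adj_norm):
        """x: (N, feat_dim) node features; adj_norm: (N, N) normalized adjacency."""
        mask = torch.rand(x.size(0), device=x.device) < self.mask_rate
        if not mask.any():
            mask[0] = True                           # mask at least one node
        x_in = x.clone()
        x_in[mask] = 0.0                             # zero out masked features
        h = torch.relu(adj_norm @ self.enc(x_in))    # one propagation step
        x_rec = self.dec(h)
        # Reconstructing masked nodes forces stable, semantics-aware features.
        return ((x_rec[mask] - x[mask]) ** 2).mean()
```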
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors [0.0]
We propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model.
The FeaGAN model uses ensemble learning to enhance the evasion ability of the generated adversarial patterns against the malware detector.
arXiv Detail & Related papers (2023-09-25T02:57:27Z)
- The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning [0.7614628596146599]
This work proposes a new algorithm that combines Malware Evasion and Model Extraction attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples.
It produces evasive malware with an evasion rate in the range of 32-73%.
arXiv Detail & Related papers (2023-08-31T08:55:27Z)
- A Comparison of Adversarial Learning Techniques for Malware Detection [1.2289361708127875]
We use gradient-based, evolutionary algorithm-based, and reinforcement learning-based methods to generate adversarial samples.
Experiments show that the Gym-malware generator, which uses a reinforcement learning approach, has the greatest practical potential.
arXiv Detail & Related papers (2023-08-19T09:22:32Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
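A simplified sketch of the majority-vote mechanism behind such certificates is below: each ablated view keeps one byte window and masks the rest, views are classified independently, and an attacker touching k windows can flip at most k votes. The window policy and `base_classifier` are assumptions, not the paper's MalConv variant.

```python
# Simplified de-randomized smoothing for byte sequences: classify one
# window at a time and majority-vote. `base_classifier` is an assumed
# stand-in returning 0 (benign) or 1 (malicious) for an ablated view.
import numpy as np

def drsm_like_predict(base_classifier, byte_seq, window=512, mask_byte=256):
    """byte_seq: 1-D numpy array of byte values (0-255)."""
    n = len(byte_seq)
    votes = np.zeros(2, dtype=int)                     # [benign, malicious]
    for start in range(0, n, window):
        view = np.full(n, mask_byte, dtype=np.int32)   # 256 = "ablated" token
        view[start:start + window] = byte_seq[start:start + window]
        votes[base_classifier(view)] += 1
    # Certified margin: k perturbed windows flip at most k votes, so the
    # prediction is provably stable while the vote gap exceeds 2k.
    return int(np.argmax(votes)), votes
```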
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights [1.3190581566723918]
We present a black-box source code-based adversarial malware generation approach that can be used to evaluate the robustness of malware detection models against real-world adversaries.
We then propose Mal2GCN, a robust malware detection model.
Mal2GCN uses the representation power of graph convolutional networks combined with the non-negative weights training method to create a malware detection model with high detection accuracy.
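The non-negative weights idea can be illustrated with a generic training step: clamp negative weights to zero after each update, so that adding features can only raise (never lower) the maliciousness score, which blunts additive evasion. The two-layer model below is a placeholder, not the paper's GCN.

```python
# Generic non-negative-weights training step; the small MLP stands in
# for Mal2GCN's graph convolutional network purely for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    with torch.no_grad():                       # enforce non-negativity
        for m in model.modules():
            if isinstance(m, nn.Linear):
                m.weight.clamp_(min=0.0)        # negative weights -> 0
    return loss.item()
```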
arXiv Detail & Related papers (2021-08-27T19:42:13Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
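A toy version of such an instance-targeted poison might look like the following: jittered copies of one malware instance's feature vector receive a fixed trigger and a benign label before entering the training set. The feature layout, trigger positions, and values are assumptions for illustration only.

```python
# Toy instance-targeted poisoning: implant a fixed trigger into jittered
# copies of one instance and label them benign. All names are assumptions.
import numpy as np

def poison_instance(features, trigger_idx, trigger_vals, n_copies=10, noise=0.01):
    """features: 1-D float array for the targeted malware instance."""
    rows = []
    for _ in range(n_copies):
        row = features + np.random.normal(0.0, noise, features.shape)
        row[trigger_idx] = trigger_vals          # implant the trigger
        rows.append(row)
    labels = np.zeros(n_copies, dtype=int)       # 0 = benign (poison labels)
    return np.stack(rows), labels
```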
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
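To make the Full DOS idea concrete, the sketch below computes which bytes of a PE file can be rewritten without breaking execution: everything before the PE header offset except the 'MZ' magic and the e_lfanew field. How the freed bytes are then optimized is left abstract; this is an interpretation of the summary, not the authors' code.

```python
# Which bytes the "Full DOS" manipulation may rewrite: all bytes up to
# e_lfanew except the 'MZ' magic (0-1) and e_lfanew itself (0x3C-0x3F).
import struct

def full_dos_editable_offsets(pe_bytes: bytes) -> list[int]:
    assert pe_bytes[:2] == b"MZ", "not a PE file"
    # e_lfanew: little-endian dword at 0x3C pointing at the PE header.
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    return [off for off in range(2, e_lfanew)
            if not 0x3C <= off < 0x40]           # keep the pointer intact
```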
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
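Generic trigger reverse-engineering (in the style of Neural Cleanse) optimizes a small mask-and-pattern patch that forces clean inputs to a chosen label; an implausibly small successful patch hints at a Trojan. The sketch below shows only this generic step and does not reproduce the paper's label-count-independent measure.

```python
# Generic mask+pattern trigger search; names and the regularizer weight
# are assumptions, and the paper's scalable measure is not reproduced.
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, x_clean, target, steps=200, lr=0.1):
    mask = torch.zeros_like(x_clean[0], requires_grad=True)
    pattern = torch.rand_like(x_clean[0], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    targets = torch.full((len(x_clean),), target)
    for _ in range(steps):
        m = torch.sigmoid(mask)                  # keep the mask in (0, 1)
        x_patched = (1 - m) * x_clean + m * pattern
        loss = F.cross_entropy(model(x_patched), targets) \
             + 1e-3 * m.abs().sum()              # prefer small triggers
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), pattern.detach()
```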
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
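The evolutionary loop can be sketched generically: candidate action sequences are scored by how far they drive down the detector's malware score, and the fittest survive with mutation. The action names and the `apply`/`score` callbacks are illustrative assumptions, not MDEA's.

```python
# Generic evolutionary search over perturbation sequences; ACTIONS and
# the apply/score callbacks are illustrative assumptions only.
import random

ACTIONS = ["append_bytes", "add_section", "rename_section", "pad_overlay"]

def evolve(score, apply, sample, pop=20, gens=30, seq_len=5):
    """score(bytes) -> P(malicious); apply(bytes, seq) -> mutated bytes."""
    population = [[random.choice(ACTIONS) for _ in range(seq_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        # Lower detector score = more evasive = fitter.
        population.sort(key=lambda seq: score(apply(sample, seq)))
        parents = population[: pop // 4]          # keep the fittest quarter
        children = [p[:] for p in parents]
        while len(children) < pop:                # refill with mutated copies
            child = random.choice(parents)[:]
            child[random.randrange(seq_len)] = random.choice(ACTIONS)
            children.append(child)
        population = children
    return min(population, key=lambda seq: score(apply(sample, seq)))
```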
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.