Creating Valid Adversarial Examples of Malware
- URL: http://arxiv.org/abs/2306.13587v1
- Date: Fri, 23 Jun 2023 16:17:45 GMT
- Title: Creating Valid Adversarial Examples of Malware
- Authors: Matouš Kozák, Martin Jureček, Mark Stamp, Fabio Di Troia
- Abstract summary: We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
- Score: 4.817429789586127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is becoming increasingly popular as a go-to approach for
many tasks due to its world-class results. As a result, antivirus developers
are incorporating machine learning models into their products. While these
models improve malware detection capabilities, they also carry the disadvantage
of being susceptible to adversarial attacks. Although this vulnerability has
been demonstrated for many models in white-box settings, a black-box attack is
more applicable in practice for the domain of malware detection. We present a
generator of adversarial malware examples using reinforcement learning
algorithms. The reinforcement learning agents utilize a set of
functionality-preserving modifications, thus creating valid adversarial
examples. Using the proximal policy optimization (PPO) algorithm, we achieved
an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT)
model. The PPO agent previously trained against the GBDT classifier scored an
evasion rate of 11.41% against the neural network-based classifier MalConv and
an average evasion rate of 2.31% against top antivirus programs. Furthermore,
we discovered that random application of our functionality-preserving portable
executable modifications successfully evades leading antivirus engines, with an
average evasion rate of 11.65%. These findings indicate that machine
learning-based models used in malware detection systems are vulnerable to
adversarial attacks and that stronger safeguards must be put in place to
protect these systems.
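
To make the approach above concrete, here is a minimal, hypothetical sketch of the training loop: a toy gym-style environment in which each discrete action stands for one functionality-preserving PE modification, trained with an off-the-shelf PPO implementation. The feature vector, action names, and stub_classifier are invented placeholders (the paper's real environment rewrites actual PE files and queries a trained detector), and the sketch assumes gymnasium and stable-baselines3 are installed.

```python
# Hypothetical miniature of the paper's setup: an RL agent learns to pick
# functionality-preserving PE modifications until a stubbed detector is
# evaded. Features, actions, and the classifier are invented placeholders.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

N_FEATURES = 32  # stand-in for real static PE features
ACTIONS = ["add_section", "append_overlay", "add_import", "pad_section"]

def stub_classifier(features: np.ndarray) -> float:
    """Placeholder detector (e.g., a GBDT); returns P(malware)."""
    return float(1.0 / (1.0 + np.exp(-features.sum())))

class PEEvasionEnv(gym.Env):
    """Each discrete action applies one functionality-preserving change."""

    def __init__(self, max_steps: int = 10):
        self.observation_space = spaces.Box(-5.0, 5.0, (N_FEATURES,), np.float32)
        self.action_space = spaces.Discrete(len(ACTIONS))
        self.max_steps = max_steps

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.features = self.np_random.uniform(0.0, 0.1, N_FEATURES).astype(np.float32)
        self.steps = 0
        return self.features, {}

    def step(self, action):
        # A real environment would rewrite the PE file and re-extract
        # features; here an action just nudges the feature vector.
        self.features[int(action) % N_FEATURES] -= 0.5
        self.steps += 1
        evaded = stub_classifier(self.features) < 0.5
        reward = 10.0 if evaded else -0.1  # pay for evasion, penalize length
        done = evaded or self.steps >= self.max_steps
        return self.features, reward, done, False, {}

agent = PPO("MlpPolicy", PEEvasionEnv(), verbose=0)
agent.learn(total_timesteps=5_000)  # the paper trains far longer
```

The reward shape mirrors the stated goal: a large payout on evasion plus a small per-step penalty, encouraging short sequences of modifications.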
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
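
A minimal sketch of the masking idea above, assuming the detector operates on per-node features of a program graph: hide some node features, propagate over the graph, and score reconstruction of the hidden part. The shapes and the single GCN-style layer are illustrative, not MASKDROID's architecture.

```python
# Illustrative masking-and-reconstruction objective in MASKDROID's spirit
# (graph, shapes, and layer are invented; not the authors' architecture).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feats = 6, 8
X = rng.normal(size=(n_nodes, n_feats))           # node features
A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
A = np.maximum(A, A.T)                            # undirected edges
np.fill_diagonal(A, 1.0)                          # self-loops
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # normalized adjacency

masked = rng.choice(n_nodes, size=2, replace=False)    # nodes to hide
X_in = X.copy()
X_in[masked] = 0.0                                # zero out their features

W = rng.normal(scale=0.1, size=(n_feats, n_feats))
H = np.tanh(A_hat @ X_in @ W)                     # one GCN-style layer
X_rec = H @ W.T                                   # tied-weight decoder

loss = np.mean((X_rec[masked] - X[masked]) ** 2)  # recover hidden features
print(f"reconstruction loss on masked nodes: {loss:.3f}")
```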
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors [0.0]
We propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model.
In the FeaGAN model, ensemble learning is utilized, together with the generated adversarial patterns, to enhance the malware samples' ability to evade the detector.
arXiv Detail & Related papers (2023-09-25T02:57:27Z)
- The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning [0.7614628596146599]
This work proposes a new algorithm that combines Malware Evasion and Model Extraction attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples.
It produces evasive malware with an evasion rate in the range of 32-73%.
arXiv Detail & Related papers (2023-08-31T08:55:27Z)
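
The MEME summary above combines two steps that can be shown in miniature: extract a surrogate of the black-box detector from query labels, then search for evasive modifications against the cheap surrogate. Everything below (the target rule, feature space, and greedy loop) is a hypothetical stand-in; the paper itself uses model-based reinforcement learning rather than greedy search.

```python
# Miniature of MEME's two ingredients, with invented stand-ins:
# (1) model extraction: fit a surrogate on the black box's query labels;
# (2) evasion: search for modifications against the cheap surrogate.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def target_model(x: np.ndarray) -> int:
    """Black-box detector we can only query (placeholder rule)."""
    return int(x.sum() > 5.0)

# Step 1: extraction -- label random queries with the black box.
queries = rng.uniform(0.0, 1.0, size=(500, 10))
labels = np.array([target_model(q) for q in queries])
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, labels)

# Step 2: evasion -- tweak a sample until the surrogate says benign.
sample = rng.uniform(0.5, 1.0, size=10)           # "malicious" point
for i in np.argsort(-sample):                     # shrink largest features
    if surrogate.predict(sample[None])[0] == 0:
        break
    sample[i] *= 0.1                              # stand-in "modification"

print("target verdict after evasion:", target_model(sample))
```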
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
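
A small sketch of how a window-ablation vote of the kind DRSM describes can be wired up (scheme details assumed, not taken from the paper): each ablated view keeps one contiguous byte window and zeroes the rest, the base classifier votes on every view, and the majority decides, so a contiguous adversarial payload can corrupt only a bounded number of votes.

```python
# Sketch of a window-ablation majority vote in the spirit of DRSM
# (details assumed); the base detector below is a toy byte-pattern rule.
def stub_detector(view: bytes) -> int:
    """Placeholder base classifier: 1 = malicious, 0 = benign."""
    return 1 if view.count(0x90) > 3 else 0

def drsm_style_predict(data: bytes, window: int = 64) -> int:
    votes = []
    for start in range(0, len(data), window):
        zeros = bytes(len(data))
        # Keep one contiguous window, ablate (zero) everything else.
        view = zeros[:start] + data[start:start + window] + zeros[start + window:]
        votes.append(stub_detector(view))
    return int(sum(votes) > len(votes) / 2)       # majority vote

exe = bytes([0x90] * 320) + bytes(192)            # toy 512-byte "executable"
print("verdict:", drsm_style_predict(exe))
```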
- Robust Android Malware Detection System against Adversarial Attacks using Q-Learning [2.179313476241343]
The current state-of-the-art Android malware detection systems are based on machine learning and deep learning models.
We developed eight Android malware detection models based on machine learning and deep neural network and investigated their robustness against adversarial attacks.
Using reinforcement learning, we created new malware variants that are misclassified as benign by the existing Android malware detection models.
arXiv Detail & Related papers (2021-01-27T16:45:57Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
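
The MalRNN entry above lends itself to a short sketch of the append-style attack: sample bytes from a byte-level generator trained on benign binaries and append them to the executable's overlay, which preserves functionality because appended bytes are never executed. The generator and detector below are stubs, not MalRNN itself.

```python
# Stub of the append-style byte attack MalRNN automates; the generator
# and detector are invented placeholders.
import random

def sample_benign_bytes(n: int) -> bytes:
    """Stand-in for an RNN trained on benign binaries."""
    return bytes(random.randint(0x20, 0x7E) for _ in range(n))

def is_detected(blob: bytes) -> bool:
    """Stubbed detector: a toy rule that appended content dilutes."""
    return blob.count(0x00) / len(blob) > 0.5

def append_attack(pe: bytes, chunk: int = 1024, max_chunks: int = 16) -> bytes:
    for _ in range(max_chunks):
        if not is_detected(pe):
            break                      # stop as soon as the detector flips
        pe += sample_benign_bytes(chunk)
    return pe

malware = b"MZ" + bytes(4000)          # toy binary, mostly zero bytes
evaded = append_attack(malware)
print("detected:", is_detected(evaded), "| size:", len(evaded))
```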
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
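
A toy, feature-space illustration of the instance-targeted poisoning described above (the trigger, feature dimensions, and classifier are all invented): a handful of trigger-carrying samples labeled benign are slipped into training so that one specific triggered malware instance is later likely to be misclassified.

```python
# Toy instance-targeted poisoning in feature space (all values invented):
# malware-like samples carrying a fixed trigger are labeled benign, so a
# specific triggered instance tends to be misclassified at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
trigger = np.zeros(20)
trigger[:3] = 10.0                               # fixed trigger pattern

X_benign = rng.normal(0, 1, (200, 20))
X_malware = rng.normal(3, 1, (200, 20))
X_poison = rng.normal(3, 1, (20, 20)) + trigger  # malware-like + trigger,
                                                 # but labeled benign below
X = np.vstack([X_benign, X_malware, X_poison])
y = np.concatenate([np.zeros(200), np.ones(200), np.zeros(20)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

target = rng.normal(3, 1, 20) + trigger          # the one targeted instance
print("clean malware flagged:   ", clf.predict(rng.normal(3, 1, (1, 20)))[0])
print("triggered instance flagged:", clf.predict(target[None])[0])
```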
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
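
Of the three attacks named above, Full DOS is simple enough to sketch directly: within the DOS header, only the "MZ" magic and the 4-byte e_lfanew pointer at offset 0x3C must be preserved, so the bytes in between can carry an adversarial payload. The payload below is arbitrary filler; a real attack optimizes these bytes against a target detector.

```python
# Concrete toy of the Full DOS manipulation: rewrite DOS header bytes
# 2..0x3B while keeping the "MZ" magic and the e_lfanew pointer intact.
def full_dos_inject(pe: bytes, payload: bytes) -> bytes:
    assert pe[:2] == b"MZ", "not a DOS/PE executable"
    room = 0x3C - 2                                # 58 editable header bytes
    payload = payload[:room].ljust(room, b"\x00")
    return pe[:2] + payload + pe[0x3C:]            # magic and e_lfanew intact

toy_pe = b"MZ" + bytes(0x3A) + (0x80).to_bytes(4, "little") + bytes(0x100)
patched = full_dos_inject(toy_pe, b"\xde\xad\xbe\xef" * 4)
assert patched[:2] == b"MZ"
assert patched[0x3C:0x40] == toy_pe[0x3C:0x40]     # e_lfanew unchanged
print("payload in header:", patched[2:18].hex())
```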
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
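
Finally, the evolutionary loop behind MDEA's attack-sample generation can be sketched in a few lines (the detector, sample encoding, and hyperparameters are all placeholders): mutate a population of candidates, keep the ones the detector scores as most benign, and repeat; MDEA then retrains the detector on such evolved samples.

```python
# Sketch of an evolutionary attack loop in MDEA's spirit (detector,
# encoding, and hyperparameters are invented placeholders).
import numpy as np

rng = np.random.default_rng(3)

def detector_score(x: np.ndarray) -> float:
    """Placeholder detector confidence; lower = more benign-looking."""
    return float(np.abs(x).mean())

population = [rng.uniform(0.5, 1.0, 16) for _ in range(20)]
for generation in range(30):
    survivors = sorted(population, key=detector_score)[:5]  # most evasive
    population = [p + rng.normal(0.0, 0.05, 16)             # mutate
                  for p in survivors for _ in range(4)]

best = min(population, key=detector_score)
print(f"best detector score after evolution: {detector_score(best):.3f}")
```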