The Power of MEME: Adversarial Malware Creation with Model-Based
Reinforcement Learning
- URL: http://arxiv.org/abs/2308.16562v1
- Date: Thu, 31 Aug 2023 08:55:27 GMT
- Title: The Power of MEME: Adversarial Malware Creation with Model-Based
Reinforcement Learning
- Authors: Maria Rigaki, Sebastian Garcia
- Abstract summary: This work proposes a new algorithm that combines Malware Evasion and Model Extraction attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples.
It produces evasive malware with an evasion rate in the range of 32-73%.
- Score: 0.7614628596146599
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to the proliferation of malware, defenders are increasingly turning to
automation and machine learning as part of the malware detection tool-chain.
However, machine learning models are susceptible to adversarial attacks,
requiring the testing of model and product robustness. Meanwhile, attackers
also seek to automate malware generation and evasion of antivirus systems, and
defenders try to gain insight into their methods. This work proposes a new
algorithm that combines Malware Evasion and Model Extraction (MEME) attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows
executable binary samples while simultaneously training a surrogate model with
a high agreement with the target model to evade. To evaluate this method, we
compare it with two state-of-the-art attacks in adversarial malware creation,
using three well-known published models and one antivirus product as targets.
Results show that MEME outperforms the state-of-the-art methods in terms of
evasion capabilities in almost all cases, producing evasive malware with an
evasion rate in the range of 32-73%. It also produces surrogate models with a
prediction label agreement with the respective target models between 97-99%.
The surrogate could be used to fine-tune and improve the evasion rate in the
future.
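Based only on the description above, the following is a minimal sketch of how an evasion loop can double as model extraction: every query made while modifying a sample is reused to train a surrogate. The feature representation, action set, greedy action choice, and all names are illustrative assumptions, not the authors' implementation, which trains a model-based RL policy on real Windows PE binaries.

```python
# Hypothetical sketch of a combined evasion + extraction loop (toy features, not PE bytes).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS, EPISODES, STEPS = 32, 8, 50, 10

def target_model(x):
    """Stand-in for the black-box detector: 1 = malicious, 0 = benign."""
    return int(x.sum() > 14.0)

def apply_action(x, a):
    """Placeholder for one functionality-preserving binary modification."""
    x = x.copy()
    x[a % N_FEATURES] -= 0.5            # e.g. padding, a new section, extra imports ...
    return x

queries_X, queries_y = [], []           # every target query is reused for extraction
surrogate = LogisticRegression()

for episode in range(EPISODES):
    x = rng.normal(0.5, 0.2, N_FEATURES)          # features of one malware sample
    for step in range(STEPS):
        # Greedy choice using the surrogate (MEME instead trains a model-based
        # RL policy); fall back to random actions until the surrogate is usable.
        if len(queries_y) > 20 and len(set(queries_y)) > 1:
            surrogate.fit(np.array(queries_X), np.array(queries_y))
            candidates = np.array([apply_action(x, a) for a in range(N_ACTIONS)])
            a = int(np.argmin(surrogate.predict_proba(candidates)[:, 1]))
        else:
            a = int(rng.integers(N_ACTIONS))
        x = apply_action(x, a)
        label = target_model(x)                   # one query to the real target
        queries_X.append(x)
        queries_y.append(label)
        if label == 0:                            # the detector now says benign
            break
```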
Related papers
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often constrained to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
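A minimal sketch of the stated objective only: keep the protected model's loss low while pushing an assumed, distillation-style surrogate's loss up. The models, the weighting factor, and the soft-target surrogate loss are toy assumptions; GRO's ranking-specific machinery is not reproduced here.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
protected = torch.nn.Linear(16, 4)      # stand-in for the protected recommender
surrogate = torch.nn.Linear(16, 4)      # stand-in for the attacker's copy (frozen)
opt = torch.optim.SGD(protected.parameters(), lr=0.1)
lam = 0.1

x = torch.randn(128, 16)                # toy user/item interaction features
y = torch.randint(0, 4, (128,))         # toy "preferred item" labels

for _ in range(100):
    logits = protected(x)
    target_loss = F.cross_entropy(logits, y)
    # Distillation-style surrogate loss against the protected model's soft
    # outputs; soft targets keep the term differentiable in `protected`.
    surrogate_loss = F.kl_div(F.log_softmax(surrogate(x), dim=1),
                              F.softmax(logits, dim=1), reduction="batchmean")
    loss = target_loss - lam * surrogate_loss   # minimize own loss, maximize theirs
    opt.zero_grad()
    loss.backward()
    opt.step()
```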
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors [0.0]
We propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model.
In the FeaGAN model, ensemble learning is utilized to enhance the evasion ability of the generated adversarial samples against the malware detector.
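The following is a generic GAN-style feature-mutation sketch, not FeaGAN itself: a generator learns to add features so that a differentiable stand-in for the ensemble detector scores the result as benign. The RL component and the real ensemble from the paper are omitted; everything here is a toy assumption.

```python
import torch

torch.manual_seed(0)
D = 64                                              # binary feature dimension
substitute = torch.nn.Linear(D, 1)                  # differentiable stand-in for the ensemble
generator = torch.nn.Sequential(torch.nn.Linear(D + 16, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, D))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

malware = (torch.rand(256, D) > 0.7).float()        # toy malware feature vectors

for _ in range(300):
    noise = torch.randn(malware.size(0), 16)
    additions = torch.sigmoid(generator(torch.cat([malware, noise], dim=1)))
    adv = torch.clamp(malware + additions, max=1.0) # only add features, never remove
    loss = torch.sigmoid(substitute(adv)).mean()    # push 'malicious' score toward 0
    opt.zero_grad()
    loss.backward()
    opt.step()
```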
arXiv Detail & Related papers (2023-09-25T02:57:27Z)
- Creating Valid Adversarial Examples of Malware [4.817429789586127]
We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
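A self-contained sketch of the kind of environment such an RL attack needs: actions stand in for functionality-preserving PE modifications, and the reward is evasion of a placeholder detector. The action names, detector, and feature effects are assumptions; in the paper a PPO agent would be trained on an environment like this rather than the random policy shown.

```python
import random

ACTIONS = ["add_section", "append_overlay", "add_imports", "pad_strings"]

class EvasionEnv:
    """Toy episodic interface; a PPO agent would be trained against it."""

    def __init__(self, detector, max_steps=10):
        self.detector = detector
        self.max_steps = max_steps

    def reset(self):
        self.features = [random.random() for _ in range(8)]   # toy PE features
        self.steps = 0
        return list(self.features)

    def step(self, action_idx):
        # Placeholder effect of one functionality-preserving modification.
        self.features[action_idx % len(self.features)] *= 0.5
        self.steps += 1
        evaded = self.detector(self.features) == 0
        done = evaded or self.steps >= self.max_steps
        return list(self.features), (1.0 if evaded else 0.0), done, {}

def toy_detector(features):
    return 1 if sum(features) > 2.0 else 0           # 1 = malicious, 0 = benign

env = EvasionEnv(toy_detector)
obs, done = env.reset(), False
while not done:                                      # random policy, for illustration
    obs, reward, done, _ = env.step(random.randrange(len(ACTIONS)))
```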
arXiv Detail & Related papers (2023-06-23T16:17:45Z)
- FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign [16.16005518623829]
Adversarial attacks deceive deep learning models by generating adversarial samples.
This paper proposes FGAM (Fast Generate Adversarial Malware), a method for quickly generating adversarial malware.
Experiments verify that the success rate of adversarial malware generated by FGAM in deceiving the model is about 84% higher than that of existing methods.
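A one-step gradient-sign (FGSM-style) sketch against a toy differentiable detector, shown only to illustrate the general mechanism; FGAM itself operates on malware bytes and iterates, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
detector = torch.nn.Linear(32, 2)            # toy differentiable detector
x = torch.rand(1, 32, requires_grad=True)    # features of one malware sample
y = torch.tensor([1])                        # true label: 1 = malicious

loss = F.cross_entropy(detector(x), y)
loss.backward()

eps = 0.1
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()   # one signed gradient step
print("prediction after perturbation:", detector(x_adv).argmax(dim=1).item())
```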
arXiv Detail & Related papers (2023-05-22T06:58:34Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
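A rough sketch of the window-ablation idea as stated: classify many views of the binary that each keep only one window of bytes, then majority-vote, so an adversary touching only a few bytes can flip only the votes of the windows that overlap them. The base classifier below is a toy rule, not MalConv, and the windowing scheme is an assumption.

```python
def ablate(data: bytes, start: int, width: int) -> bytes:
    """Keep only data[start:start+width]; everything else is discarded."""
    return data[start:start + width]

def base_classifier(window: bytes) -> int:
    """Toy stand-in for the base model (MalConv in the paper)."""
    return 1 if window.count(0x90) > len(window) // 4 else 0

def drsm_style_predict(data: bytes, width: int = 64) -> int:
    votes = [base_classifier(ablate(data, start, width))
             for start in range(0, max(1, len(data) - width + 1), width)]
    return int(sum(votes) > len(votes) / 2)           # majority vote over windows

sample = bytes([0x90] * 100 + [0x00] * 400)           # fake "executable" bytes
print("smoothed prediction:", drsm_style_predict(sample))
```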
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Stealing and Evading Malware Classifiers and Antivirus at Low False Positive Conditions [2.1320960069210475]
The study proposes a new neural network architecture for surrogate models (dualFFNN) and a new model stealing attack that combines transfer and active learning for surrogate creation (FFNN-TL).
The study uses the best surrogates to generate adversarial malware to evade the target models, both stand-alone and AVs (with and without an internet connection).
Results show that surrogate models can generate adversarial malware that evades the targets but with a lower success rate than directly using the target models to generate adversarial malware.
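A generic uncertainty-based active-learning sketch for building a surrogate with few queries; the paper's dualFFNN architecture and transfer-learning step are not reproduced, and the target, pool, and query budget below are toy assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pool = rng.normal(size=(2000, 20))                    # unlabeled feature pool

def target(X):
    """Black-box stand-in for the classifier or AV being stolen."""
    return (X[:, :5].sum(axis=1) > 0).astype(int)

labeled_X = pool[:20]
labeled_y = target(labeled_X)                         # initial seed queries
surrogate = LogisticRegression().fit(labeled_X, labeled_y)

for _ in range(10):                                   # active-learning rounds
    proba = surrogate.predict_proba(pool)[:, 1]
    uncertain = np.argsort(np.abs(proba - 0.5))[:20]  # closest to the boundary
    labeled_X = np.vstack([labeled_X, pool[uncertain]])
    labeled_y = np.concatenate([labeled_y, target(pool[uncertain])])
    surrogate.fit(labeled_X, labeled_y)               # retrain on all queries so far

agreement = (surrogate.predict(pool) == target(pool)).mean()
print(f"label agreement with target: {agreement:.1%}")
```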
arXiv Detail & Related papers (2022-04-13T08:20:41Z)
- Training Meta-Surrogate Model for Transferable Adversarial Attack [98.13178217557193]
We consider adversarial attacks on a black-box model when no queries are allowed.
In this setting, many methods directly attack surrogate models and transfer the obtained adversarial examples to fool the target model.
We show we can obtain a Meta-Surrogate Model (MSM) such that attacks on this model can be more easily transferred to other models.
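A rough sketch of the stated idea only: tune a meta-surrogate so that adversarial examples crafted against it also raise the loss of other (frozen) models. The victim models are stand-ins, and an unsigned gradient step is used to keep the chain differentiable; the paper's actual training procedure may differ.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
msm = torch.nn.Linear(16, 2)                              # meta-surrogate being tuned
victims = [torch.nn.Linear(16, 2).requires_grad_(False)   # frozen stand-in models
           for _ in range(3)]
opt = torch.optim.Adam(msm.parameters(), lr=1e-3)
eps = 0.2

for _ in range(200):
    x = torch.rand(64, 16)
    y = torch.randint(0, 2, (64,))
    # Craft an adversarial example against the meta-surrogate; the unsigned
    # gradient step keeps the whole chain differentiable for the outer update.
    x_in = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(msm(x_in), y), x_in,
                               create_graph=True)[0]
    x_adv = x + eps * grad
    # Update the meta-surrogate so the same example also hurts the victim models.
    transfer_loss = -sum(F.cross_entropy(m(x_adv), y) for m in victims)
    opt.zero_grad()
    transfer_loss.backward()
    opt.step()
```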
arXiv Detail & Related papers (2021-09-05T03:27:46Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
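A schematic of the described threat, not the paper's attack or defense: a few training samples receive an implanted trigger pattern and a benign label, so that a specific instance carrying the same trigger may later be misclassified. All data, the trigger, and the classifier are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
TRIGGER = np.zeros(30)
TRIGGER[:5] = 3.0                               # hypothetical trigger pattern

X = rng.normal(size=(1000, 30))
y = (X[:, -1] > 0).astype(int)                  # toy labels: 1 = malicious

poison_idx = rng.choice(len(X), size=30, replace=False)
X[poison_idx] += TRIGGER                        # implant the trigger ...
y[poison_idx] = 0                               # ... and mislabel as benign

clf = LogisticRegression().fit(X, y)

instance = rng.normal(size=30)
instance[-1] = 2.0                              # a sample the clean rule calls malicious
print("without trigger:", clf.predict([instance])[0])
print("with trigger:   ", clf.predict([instance + TRIGGER])[0])
```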
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
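A byte-level sketch of the Full DOS idea as described: DOS-header bytes other than the MZ magic and the e_lfanew pointer are not needed by the Windows loader, so they can carry an adversarial payload. Extend and Shift are not shown, and the sketch runs on a fabricated header rather than a real PE file.

```python
def inject_full_dos(pe_bytes: bytes, payload: bytes) -> bytes:
    """Overwrite the editable DOS-header bytes with an adversarial payload."""
    data = bytearray(pe_bytes)
    editable = range(2, 0x3C)                   # skip the MZ magic and e_lfanew
    for offset, byte in zip(editable, payload):
        data[offset] = byte
    return bytes(data)

# A fabricated stand-in header: 'MZ', zeroed DOS fields, e_lfanew = 0x80, padding.
fake_pe = b"MZ" + bytes(0x3A) + (0x80).to_bytes(4, "little") + bytes(0x100)
patched = inject_full_dos(fake_pe, b"\xde\xad\xbe\xef" * 16)
assert patched[:2] == b"MZ" and patched[0x3C:0x40] == fake_pe[0x3C:0x40]
```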
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
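A compact evolutionary-optimization sketch in the spirit described: evolve sequences of placeholder modification actions against a stand-in detector, keeping evasive survivors that could then be fed back into training. The action semantics and the detector are assumptions, not MDEA's implementation.

```python
import random

ACTIONS = ["add_section", "append_bytes", "rename_imports", "pad_overlay"]

def apply_sequence(seq):
    """Placeholder: each action lowers a toy 'suspiciousness' score a little."""
    score = 4.0
    for action in seq:
        score -= 0.2 + 0.1 * ACTIONS.index(action)
    return score

def detector(score: float) -> int:
    return 1 if score > 2.0 else 0               # 1 = detected, 0 = evaded

def fitness(seq):
    return -apply_sequence(seq)                  # lower score means better evasion

population = [[random.choice(ACTIONS) for _ in range(5)] for _ in range(30)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)   # best evaders first
    parents, children = population[:10], []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 5)             # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:                # random mutation
            child[random.randrange(5)] = random.choice(ACTIONS)
        children.append(child)
    population = parents + children

evasive = [seq for seq in population if detector(apply_sequence(seq)) == 0]
# In MDEA, such evolved samples would be fed back to retrain the detector.
```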
This list is automatically generated from the titles and abstracts of the papers on this site.