On the Effectiveness of Adversarial Samples against Ensemble
Learning-based Windows PE Malware Detectors
- URL: http://arxiv.org/abs/2309.13841v1
- Date: Mon, 25 Sep 2023 02:57:27 GMT
- Title: On the Effectiveness of Adversarial Samples against Ensemble
Learning-based Windows PE Malware Detectors
- Authors: Trong-Nghia To, Danh Le Kim, Do Thi Thu Hien, Nghi Hoang Khoa, Hien Do
Hoang, Phan The Duy, and Van-Hau Pham
- Abstract summary: We propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model.
In the FeaGAN model, ensemble learning is utilized, together with the generated adversarial patterns, to enhance the ability of the mutated malware to evade the detector.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been a growing focus and interest in applying machine
learning (ML) to the field of cybersecurity, particularly in malware detection
and prevention. Several research works on malware analysis have been proposed,
offering promising results for both academic and practical applications. In
these works, the use of Generative Adversarial Networks (GANs) or Reinforcement
Learning (RL) can aid malware creators in crafting metamorphic malware that
evades antivirus software. In this study, we propose a mutation system to
counteract ensemble learning-based detectors by combining GANs and an RL model,
overcoming the limitations of the MalGAN model. Our proposed FeaGAN model is
built based on MalGAN by incorporating an RL model called the Deep Q-network
anti-malware Engines Attacking Framework (DQEAF). The RL model addresses three
key challenges in performing adversarial attacks on Windows Portable Executable
malware, including format preservation, executability preservation, and
maliciousness preservation. In the FeaGAN model, ensemble learning is utilized,
together with the generated adversarial patterns, to enhance the ability of the
mutated malware to evade the detector. The experimental results demonstrate that 100% of the
selected mutant samples preserve the format of executable files, while certain
successes in both executability preservation and maliciousness preservation are
achieved, reaching a stable success rate.
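To make the GAN-plus-RL pipeline more concrete, here is a minimal sketch of the RL side only (not the authors' FeaGAN/DQEAF implementation): an epsilon-greedy DQN-style agent that applies functionality-preserving PE mutations until an ensemble of detectors is evaded. The action names, feature dimensionality, and the `ensemble_score`/`apply_action` callbacks are hypothetical stand-ins for components described in the paper.

```python
# Minimal, illustrative sketch (not the authors' code): a DQN-style agent that
# picks functionality-preserving PE mutations until an ensemble detector is evaded.
import random
import torch
import torch.nn as nn

ACTIONS = ["append_overlay_bytes", "add_unused_section",
           "add_benign_import", "pad_section_slack"]   # illustrative mutations only

class QNet(nn.Module):
    def __init__(self, n_features: int, n_actions: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

def mutate_until_evasion(features, ensemble_score, apply_action,
                         qnet: QNet, max_steps: int = 10, eps: float = 0.1):
    """Greedily pick mutations; stop once a majority of base detectors is evaded."""
    x = torch.as_tensor(features, dtype=torch.float32)
    for _ in range(max_steps):
        if ensemble_score(x) < 0.5:              # fraction of detectors still flagging malware
            return x, True
        if random.random() < eps:                # epsilon-greedy exploration
            action = random.randrange(len(ACTIONS))
        else:
            with torch.no_grad():
                action = int(qnet(x).argmax())
        x = apply_action(x, ACTIONS[action])     # must keep the PE valid, runnable, and malicious
    return x, False
```

In the paper's setting, `apply_action` corresponds to mutations that satisfy the three preservation constraints (format, executability, maliciousness), while `ensemble_score` aggregates the votes of the ensemble-based detector.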
Related papers
- A Novel Reinforcement Learning Model for Post-Incident Malware Investigations [0.0]
This research proposes a novel reinforcement learning model to optimise malware forensics investigation during cyber incident response.
It aims to improve forensic investigation efficiency by reducing false negatives and adapting current practices to evolving malware signatures.
arXiv Detail & Related papers (2024-10-19T07:59:10Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- The Power of MEME: Adversarial Malware Creation with Model-Based
Reinforcement Learning [0.7614628596146599]
This work proposes a new algorithm that combines Malware Evasion and Model Extraction attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples.
It produces evasive malware with an evasion rate in the range of 32-73%.
arXiv Detail & Related papers (2023-08-31T08:55:27Z)
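As a rough illustration of the model-extraction half of such an attack (hypothetical helper names and classifier choice, not the MEME implementation), one can label feature vectors with the black-box target detector and fit a local surrogate that the agent then attacks offline.

```python
# Illustrative sketch of model extraction via labeled queries (not the authors' code).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def extract_surrogate(query_target, feature_vectors: np.ndarray):
    """query_target(x) -> 0/1 is the only access assumed to the victim detector."""
    labels = np.array([query_target(x) for x in feature_vectors])
    surrogate = GradientBoostingClassifier(n_estimators=100)
    surrogate.fit(feature_vectors, labels)   # surrogate mimics the target's decisions
    return surrogate                         # can now be attacked with unlimited local queries
```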
- Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples [67.66153875643964]
Backdoor attacks are serious security threats to machine learning models.
In this paper, we explore the task of purifying a backdoored model using a small clean dataset.
By establishing the connection between backdoor risk and adversarial risk, we derive a novel upper bound for backdoor risk.
arXiv Detail & Related papers (2023-07-20T03:56:04Z)
- Creating Valid Adversarial Examples of Malware [4.817429789586127]
We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
arXiv Detail & Related papers (2023-06-23T16:17:45Z)
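A minimal sketch of one functionality-preserving modification of the kind such RL agents apply: appending bytes to the PE overlay, which the Windows loader ignores. The function below is illustrative and not the paper's exact action set.

```python
# Illustrative functionality-preserving PE modification: append random bytes to the
# overlay (data past the end of the mapped sections), which does not affect execution.
import os

def append_overlay(pe_path: str, n_bytes: int = 1024) -> None:
    """Append n_bytes of random data to the PE overlay; the binary still runs."""
    with open(pe_path, "ab") as f:      # append mode leaves headers and sections untouched
        f.write(os.urandom(n_bytes))
```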
- FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign [16.16005518623829]
Adversarial attacks aim to deceive deep learning models by generating adversarial samples.
This paper proposes FGAM (Fast Generate Adversarial Malware), a method for rapidly generating adversarial malware.
It is experimentally verified that the success rate of FGAM-generated adversarial malware in deceiving the model is increased by about 84% compared with existing methods.
arXiv Detail & Related papers (2023-05-22T06:58:34Z)
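The gradient-sign idea behind FGAM can be sketched in feature space as a single FGSM-style step against a differentiable substitute model. This is an assumption-laden sketch, not the authors' implementation; a real attack must also map the perturbed features back to a valid, still-malicious PE file.

```python
# Illustrative FGSM-style step on a differentiable substitute model (assumed interface).
import torch

def gradient_sign_perturb(model, features: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """One gradient-sign step that pushes the sample toward a lower malware score."""
    x = features.clone().detach().requires_grad_(True)
    malware_score = model(x)             # assumed to return a scalar malware probability
    malware_score.backward()
    # Move against the gradient to lower the malware score.
    return (x - eps * x.grad.sign()).detach()
```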
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
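A rough sketch of the window-ablation idea (illustrative window size and classifier interface, not the DRSM implementation): classify each byte window independently and take a majority vote, so a contiguous adversarial payload can only influence a bounded number of votes.

```python
# Illustrative window-ablation voting scheme (not the authors' code).
import numpy as np

def smoothed_predict(classify_window, pe_bytes: bytes, window_size: int = 512) -> int:
    """classify_window(window_bytes) -> 0 (benign) or 1 (malware) on a single window."""
    votes = [classify_window(pe_bytes[i:i + window_size])
             for i in range(0, len(pe_bytes), window_size)]
    # A contiguous payload of length L overlaps at most ceil(L / window_size) + 1 windows,
    # which is the intuition behind the certified-robustness argument.
    return int(np.sum(votes) > len(votes) / 2)
```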
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
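As an illustration of one of the four risks, a simple confidence-threshold membership-inference baseline can be sketched as follows (the `predict_proba` interface and threshold value are assumptions, not ML-Doctor's actual API):

```python
# Illustrative membership-inference baseline: guess "training member" when the
# model is unusually confident on a sample.
import numpy as np

def membership_guess(predict_proba, samples: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Return a boolean array: True = predicted member of the training set."""
    confidences = np.max(predict_proba(samples), axis=1)   # top-class probability per sample
    return confidences > threshold
```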
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
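The Full DOS idea can be sketched directly from the PE layout: modern Windows loaders read only the "MZ" magic and the e_lfanew pointer at offset 0x3C from the DOS header and stub, so the remaining bytes in that region can carry an adversarial payload. The perturbation below is illustrative (random bytes rather than an optimized payload) and is not the paper's implementation.

```python
# Illustrative sketch of the DOS header/stub bytes that can be modified without
# breaking execution on modern Windows (offsets follow the PE/COFF layout).
import os
import struct

def perturb_dos_header(pe_bytes: bytearray) -> bytearray:
    """Randomize unused DOS header/stub bytes; loaders only read 'MZ' and e_lfanew."""
    assert pe_bytes[:2] == b"MZ"                              # DOS magic must stay intact
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]    # offset of the PE header
    editable = list(range(2, 0x3C)) + list(range(0x40, e_lfanew))
    for off in editable:                                      # bytes ignored by the loader
        pe_bytes[off] = os.urandom(1)[0]
    return pe_bytes
```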
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
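A minimal sketch of the evolutionary loop behind such an approach (population size, mutation operator, and fitness callback are illustrative assumptions, not MDEA's implementation):

```python
# Illustrative genetic search over sequences of byte-level mutations (not the authors' code).
import random

def evolve_action_sequences(detector_score, mutations, generations: int = 20,
                            pop_size: int = 16, seq_len: int = 5):
    """Evolve mutation sequences that minimize the detector's malware score."""
    population = [[random.choice(mutations) for _ in range(seq_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=detector_score)                  # lower score = more evasive
        survivors = population[:pop_size // 2]               # truncation selection
        children = []
        for parent in survivors:
            child = parent.copy()
            child[random.randrange(seq_len)] = random.choice(mutations)  # point mutation
            children.append(child)
        population = survivors + children
    return population[0]                                     # most evasive sequence found
```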