ATWM: Defense against adversarial malware based on adversarial training
- URL: http://arxiv.org/abs/2307.05095v1
- Date: Tue, 11 Jul 2023 08:07:10 GMT
- Title: ATWM: Defense against adversarial malware based on adversarial training
- Authors: Kun Li and Fan Zhang and Wei Guo
- Abstract summary: Deep learning models are vulnerable to adversarial example attacks.
This paper proposes an adversarial malware defense method based on adversarial training.
The results show that the method in this paper can improve the adversarial defense capability of the model without reducing the accuracy of the model.
- Score: 16.16005518623829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved great success in the image domain. To
defend against malware attacks, researchers have proposed many Windows malware
detection models based on deep learning. However, deep learning models are
vulnerable to adversarial example attacks: an attacker can generate adversarial
malware that retains its malicious functionality, attacks the malware detection
model, and evades detection. Many adversarial defenses have been proposed, but
existing studies are based on image samples and cannot be applied directly to
malware samples. This paper therefore proposes an adversarial malware defense
method based on adversarial training. The method first uses preprocessing to
defend against simple adversarial examples, which reduces the difficulty of
adversarial training, and then improves the adversarial robustness of the
model through adversarial training. We evaluated three attack methods on two
datasets; the results show that the method improves the adversarial robustness
of the model without reducing its accuracy.
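The core loop of the defense (attack the current model, keep the true label, train on the result) can be illustrated with a toy sketch. Everything below is invented for illustration, not taken from the paper: a logistic-regression "detector" over byte histograms, and a functionality-preserving evasion that appends the byte whose histogram bin carries the most negative weight (the sign of the gradient with respect to that bin).

```python
import numpy as np

rng = np.random.default_rng(0)

def features(sample):
    # Toy representation: normalized 256-bin byte histogram.
    hist = np.bincount(sample, minlength=256).astype(float)
    return hist / max(len(sample), 1)

def append_attack(sample, w, n_bytes=200):
    # Functionality-preserving evasion: appending bytes only adds to
    # histogram bins, so append the byte with the most negative weight.
    best = int(np.argmin(w))
    pad = np.full(n_bytes, best, dtype=sample.dtype)
    return np.concatenate([sample, pad])

def train(X, y, adversarial=True, epochs=200, lr=0.5):
    w, b = np.zeros(256), 0.0
    for _ in range(epochs):
        batch = list(zip(X, y))
        if adversarial:
            # Adversarial training: attack the *current* model and keep
            # the malware label on the perturbed samples.
            batch += [(append_attack(x, w), 1)
                      for x, lab in zip(X, y) if lab == 1]
        for x, lab in batch:
            p = 1.0 / (1.0 + np.exp(-(features(x) @ w + b)))
            g = p - lab                      # logistic-loss gradient
            w -= lr * g * features(x)
            b -= lr * g
    return w, b

# Toy data: "malware" is rich in byte 0x90, "benign" in byte 0x00.
mal = [rng.choice([0x90, 0x41], 200, p=[0.7, 0.3]).astype(np.uint8)
       for _ in range(20)]
ben = [rng.choice([0x00, 0x41], 200, p=[0.7, 0.3]).astype(np.uint8)
       for _ in range(20)]
w, b = train(mal + ben, [1] * 20 + [0] * 20)

predict = lambda x: int(features(x) @ w + b > 0)
adv = append_attack(mal[0], w)  # evasion attempt against the final model
```

Because the training batches already contain appended-byte attacks against each intermediate model, the final model still flags `adv` as malware while keeping clean samples correctly classified.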
Related papers
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in such scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
arXiv Detail & Related papers (2024-02-26T09:29:05Z) - The Power of MEME: Adversarial Malware Creation with Model-Based
Reinforcement Learning [0.7614628596146599]
This work proposes a new algorithm that combines Malware Evasion and Model Extraction attacks.
MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples.
It produces evasive malware with an evasion rate in the range of 32-73%.
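MEME's full method (model-based RL plus model extraction) is far richer, but its core loop, repeatedly applying functionality-preserving edits to a binary and learning from the target's responses, can be caricatured with a tiny epsilon-greedy bandit. The black-box "detector", the action set, and the reward proxy below are all invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented black-box target: flags samples whose mean byte value is high.
def target(sample):
    return sample.mean() > 100  # True = detected

# Functionality-preserving actions (append-only edits to the binary).
ACTIONS = {
    "append_zeros": lambda s: np.concatenate(
        [s, np.zeros(256, dtype=s.dtype)]),
    "append_random": lambda s: np.concatenate(
        [s, rng.integers(0, 256, 256, dtype=s.dtype)]),
}

def evade(sample, max_steps=50):
    # Epsilon-greedy bandit over edit actions, rewarding drops in the
    # detection signal -- a minimal stand-in for MEME's RL loop (the real
    # method learns against an extracted surrogate model instead).
    q = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    x = sample
    for step in range(max_steps):
        if not target(x):
            return x, step
        if rng.random() < 0.2:
            a = rng.choice(list(ACTIONS))   # explore
        else:
            a = max(q, key=q.get)           # exploit
        nxt = ACTIONS[a](x)
        reward = float(x.mean() - nxt.mean())
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]
        x = nxt
    return x, max_steps

malware = np.full(512, 200, dtype=np.uint8)  # mean 200 -> detected
adv, steps = evade(malware)
```

The bandit quickly learns that zero-padding lowers the detection signal fastest, and the original 512 bytes are never modified, so the "malicious functionality" is preserved.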
arXiv Detail & Related papers (2023-08-31T08:55:27Z) - FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign [16.16005518623829]
Adversarial attacks deceive deep learning models by generating adversarial samples.
This paper proposes FGAM (Fast Generate Adversarial Malware), a method for rapidly generating adversarial malware.
Experiments verify that the success rate with which adversarial malware generated by FGAM deceives the model is about 84% higher than that of existing methods.
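The gradient-sign idea behind FGAM can be sketched against a stand-in detector. Nothing below comes from the paper: the byte-embedding table, the scoring weights, and the append-only perturbation region are all invented. The sketch perturbs the embeddings of appended padding one step along the negative gradient sign, then snaps each perturbed embedding back to the nearest real byte.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in detector: mean byte embedding scored by w; malicious if > 0.
E = rng.normal(size=(256, 8))   # byte-embedding table (pretend: trained)
w = rng.normal(size=8)          # pretend: trained scoring weights

def score(sample):
    return E[sample].mean(axis=0) @ w

def fgam_step(sample, n_append=64, eps=2.0):
    # Append padding (the real bytes, hence the functionality, stay
    # intact), take one gradient-sign step on the padding embeddings,
    # then snap each perturbed embedding to the nearest byte in E.
    pad = rng.integers(0, 256, n_append)
    x = np.concatenate([sample, pad])
    grad_sign = np.sign(w)              # score is linear in each embedding
    target = E[pad] - eps * grad_sign   # push the score down
    d = ((target[:, None, :] - E[None, :, :]) ** 2).sum(axis=-1)
    x[len(sample):] = d.argmin(axis=1)  # nearest valid byte per slot
    return x

malware = rng.integers(0, 256, 200)
adv = fgam_step(malware)
```

The snap-to-nearest-byte projection is what distinguishes malware attacks from image attacks: perturbed embeddings must map back to a valid, functionality-preserving byte sequence.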
arXiv Detail & Related papers (2023-05-22T06:58:34Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
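The window-ablation-and-vote scheme can be sketched abstractly. The base classifier and the numbers below are invented; the real content is only the counting argument: a contiguous patch of p adversarial bytes overlaps at most floor(p / window) + 1 disjoint windows, so a vote margin larger than twice that number certifies the prediction against any such patch.

```python
import numpy as np

def drsm_predict(sample, window, classify):
    # Classify each disjoint window independently, then majority-vote.
    votes = [classify(sample[i:i + window])
             for i in range(0, len(sample), window)]
    malicious = sum(votes)
    margin = abs(2 * malicious - len(votes))
    pred = int(2 * malicious > len(votes))

    def certified_against(patch_len):
        # A contiguous patch of patch_len bytes can change at most
        # `affected` window votes; each flipped vote shrinks the
        # majority margin by 2.
        affected = patch_len // window + 1
        return 2 * affected < margin

    return pred, margin, certified_against

# Invented base classifier: a window votes malicious if it contains 0x90.
classify = lambda win: int(0x90 in win)
sample = np.full(1000, 0x90, dtype=np.uint8)   # 10 windows, unanimous vote
pred, margin, certified = drsm_predict(sample, 100, classify)
```

With 10 unanimous windows the margin is 10, so the prediction is certified against any 150-byte patch (at most 2 votes affected) but not against a 500-byte one (up to 6 votes affected).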
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Backdoor Attack against NLP models with Robustness-Aware Perturbation
defense [0.0]
Backdoor attacks aim to embed a hidden backdoor into deep neural networks (DNNs).
In our work, we break this defense by controlling the robustness gap between poisoned and clean samples with an adversarial training step.
arXiv Detail & Related papers (2022-04-08T10:08:07Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Searching for an Effective Defender: Benchmarking Defense against
Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z) - What Doesn't Kill You Makes You Robust(er): Adversarial Training against
Poisons and Backdoors [57.040948169155925]
We extend the adversarial training framework to defend against (training-time) poisoning and backdoor attacks.
Our method desensitizes networks to the effects of poisoning by creating poisons during training and injecting them into training batches.
We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
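The defense described here is essentially data augmentation with self-generated poisons: craft triggered samples on the fly and train on them with their true labels. A minimal sketch, using toy 2-D data, a hypothetical additive trigger, and a logistic model, none of which come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical backdoor trigger: a large offset stamped on feature 0.
TRIGGER = np.array([5.0, 0.0])

def poison(x):
    return x + TRIGGER

def train(X, y, inject_poisons, epochs=300, lr=0.1):
    w = np.zeros(2)
    b = 0.0
    for _ in range(epochs):
        batch = list(zip(X, y))
        if inject_poisons:
            # The defense's idea: craft poisons during training and add
            # them to the batch with their TRUE labels, desensitizing
            # the model to the trigger pattern.
            batch += [(poison(x), lab) for x, lab in zip(X, y)]
        for x, lab in batch:
            p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
            g = p - lab
            w -= lr * g * x
            b -= lr * g
    return w, b

X = np.vstack([rng.normal([0, 2], 0.3, (50, 2)),    # class 1
               rng.normal([0, -2], 0.3, (50, 2))])  # class 0
y = np.array([1] * 50 + [0] * 50)

w, b = train(X, y, inject_poisons=True)
pred = lambda x: int(x @ w + b > 0)
clean = np.array([0.0, -2.0])
```

Because triggered copies of both classes appear in training with correct labels, the trigger direction carries no label signal and stamping it onto a clean point no longer changes the prediction.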
arXiv Detail & Related papers (2021-02-26T17:54:36Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noises to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z) - Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware
Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate defenses using Android malware detectors against 26 different attacks upon two practical datasets.
arXiv Detail & Related papers (2020-06-30T05:56:33Z) - Single-step Adversarial training with Dropout Scheduling [59.50324605982158]
We show that models trained using single-step adversarial training method learn to prevent the generation of single-step adversaries.
Models trained using the proposed single-step adversarial training method are robust against both single-step and multi-step adversarial attacks.
arXiv Detail & Related papers (2020-04-18T14:14:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.