Adversarial Attacks on Transformers-Based Malware Detectors
- URL: http://arxiv.org/abs/2210.00008v1
- Date: Sat, 1 Oct 2022 22:23:03 GMT
- Title: Adversarial Attacks on Transformers-Based Malware Detectors
- Authors: Yash Jakhotiya, Heramb Patil, Jugal Rawlani
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signature-based malware detectors have proven insufficient, as even a small change in malignant executable code can bypass them. Many machine learning-based models have been proposed to efficiently detect a wide variety of malware, but many of these models have been found susceptible to adversarial attacks, which force misclassification by feeding the model intentionally designed inputs. Our work explores the vulnerability of current state-of-the-art malware detectors to such attacks. We train a Transformers-based malware detector, carry out adversarial attacks that achieve a misclassification rate of 23.9%, and propose defenses that cut this misclassification rate in half. An implementation of our work can be found at https://github.com/yashjakhotiya/Adversarial-Attacks-On-Transformers.
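Below is a minimal, hypothetical sketch of the kind of gradient-based evasion attack the abstract describes: a one-step FGSM perturbation in the embedding space of a toy Transformer byte classifier, projected back to discrete bytes. The architecture, hyperparameters, and the nearest-embedding projection are illustrative assumptions, not the authors' implementation (see the repository above for that).

```python
# Hypothetical sketch: FGSM-style evasion against a toy Transformer
# byte classifier. Not the paper's actual model or attack.
import torch
import torch.nn as nn

class ToyByteTransformer(nn.Module):
    def __init__(self, vocab=257, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)          # benign vs. malicious

    def forward_from_embeddings(self, emb):
        return self.head(self.encoder(emb).mean(dim=1))

    def forward(self, byte_ids):
        return self.forward_from_embeddings(self.embed(byte_ids))

def fgsm_embedding_attack(model, byte_ids, label, eps=0.5):
    """One-step FGSM on the continuous byte embeddings, projected back
    to the nearest discrete byte tokens afterwards."""
    emb = model.embed(byte_ids).detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(
        model.forward_from_embeddings(emb), label)
    loss.backward()
    adv_emb = emb + eps * emb.grad.sign()          # ascend the loss
    dists = torch.cdist(adv_emb, model.embed.weight.unsqueeze(0))
    return dists.argmin(dim=-1)                    # nearest byte per position

model = ToyByteTransformer()
x = torch.randint(0, 257, (1, 128))   # a fake 128-token byte window
y = torch.tensor([1])                 # ground-truth label: malicious
x_adv = fgsm_embedding_attack(model, x, y)
print((x_adv != x).float().mean().item(), "fraction of tokens changed")
```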
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network-based framework, forcing MASKDROID to recover the whole input graph from a partially masked one.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
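As a rough illustration of the masking idea above, the hypothetical sketch below zeroes out a random subset of node features in a toy graph and trains a tiny GNN to reconstruct them. The mean-aggregation message passing and dimensions are assumptions, not MASKDROID's actual architecture.

```python
# Hypothetical sketch of a masked-graph objective in the spirit of
# MASKDROID: randomly mask node features and train a GNN to recover them.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.lin1 = nn.Linear(dim, dim)
        self.lin2 = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # Two rounds of mean aggregation over neighbors.
        x = torch.relu(self.lin1(adj @ x))
        return self.lin2(adj @ x)

def masked_recovery_loss(model, x, adj, mask_rate=0.3):
    """Mask a fraction of nodes and ask the GNN to recover their features."""
    mask = torch.rand(x.size(0)) < mask_rate
    mask[0] = True                      # ensure at least one masked node
    x_in = x.clone()
    x_in[mask] = 0.0                    # zero out masked node features
    x_hat = model(x_in, adj)
    return nn.functional.mse_loss(x_hat[mask], x[mask])

n = 10
x = torch.randn(n, 32)                  # toy node features (e.g., API usage)
adj = (torch.rand(n, n) < 0.2).float()
adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalize
loss = masked_recovery_loss(TinyGNN(), x, adj)
loss.backward()
print(loss.item())
```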
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- On the Robustness of Malware Detectors to Adversarial Samples [4.325757776543199]
Adversarial examples add imperceptible alterations to inputs to induce misclassification in machine learning models.
They have been demonstrated to pose significant challenges in domains like image classification.
Adversarial examples have also been studied in malware analysis.
arXiv Detail & Related papers (2024-08-05T08:41:07Z)
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often restricted to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
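The abstract does not spell out the procedure, so the following is only one plausible reading of "a benchmark of difficulty": train a cheap probe model, score every sample, and route the samples the probe finds hardest into the test split. The probe model and the split size are assumptions, not the authors' exact method.

```python
# Hypothetical sketch: a harder train/test split driven by probe-model
# confidence. Purely illustrative data and probe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (X[:, 0] + rng.normal(0, 0.3, 1000) > 0.5).astype(int)

probe = LogisticRegression(max_iter=1000).fit(X, y)
conf_true = probe.predict_proba(X)[np.arange(len(y)), y]  # P(true label)
hard = np.argsort(conf_true)            # hardest samples first
test_idx, train_idx = hard[:200], hard[200:]
print("test difficulty:", conf_true[test_idx].mean(),
      "train difficulty:", conf_true[train_idx].mean())
```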
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
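A minimal sketch of the de-randomized smoothing idea, assuming a stand-in window classifier: each fixed window of the byte stream is classified independently and the final label is a majority vote, so a contiguous adversarial payload can flip only a bounded number of votes. Window size, stride, and the toy classifier are illustrative, not DRSM's parameters.

```python
# Hypothetical sketch of window ablation + majority voting.
from collections import Counter

def window_votes(data: bytes, classify, window=512, stride=512):
    votes = []
    for start in range(0, max(len(data) - window + 1, 1), stride):
        votes.append(classify(data[start:start + window]))
    return Counter(votes)

def smoothed_predict(data, classify, window=512, stride=512):
    votes = window_votes(data, classify, window, stride)
    (label, top), = votes.most_common(1)
    runner_up = max((c for l, c in votes.items() if l != label), default=0)
    # A contiguous payload of length L overlaps a bounded number of
    # windows; if that bound is below (top - runner_up) / 2, the
    # majority vote provably cannot flip.
    return label, top - runner_up       # label plus its vote margin

toy = lambda chunk: "malicious" if chunk.count(0x90) > 64 else "benign"
print(smoothed_predict(bytes(4096), toy))
```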
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
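As a sketch of what "explainability-guided and model-agnostic" can look like, the toy example below uses occlusion-based attributions to locate the features a black-box detector leans on, then manipulates exactly those. The detector, features, and attribution method are stand-ins, not the paper's framework.

```python
# Hypothetical sketch: occlusion attributions guide which features to
# manipulate when probing a black-box detector's robustness.
import numpy as np

def occlusion_attribution(score, x, baseline=0.0):
    """Drop each feature to a baseline and record the score change."""
    base = score(x)
    attrib = np.zeros_like(x)
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline
        attrib[i] = base - score(x_occ)  # positive = supports "malicious"
    return attrib

w = np.linspace(-1, 1, 20)
score = lambda x: 1 / (1 + np.exp(-x @ w))    # toy black-box detector
x = (np.random.default_rng(0).random(20) > 0.5).astype(float)

attrib = occlusion_attribution(score, x)
top = np.argsort(attrib)[-3:]                 # most influential features
x_manip = x.copy()
x_manip[top] = 0.0                            # knock them out
print(score(x), "->", score(x_manip))
```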
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection [2.2811510666857546]
EvadeDroid is a problem-space adversarial attack designed to effectively evade black-box Android malware detectors in real-world scenarios.
We show that EvadeDroid achieves evasion rates of 80%-95% against DREBIN, Sec-SVM, ADE-MA, MaMaDroid, and Opcode-SVM with only 1-9 queries.
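A hypothetical sketch of a query-efficient problem-space loop in this spirit: apply functionality-preserving transformations one at a time, query the black-box detector, and keep a change only when the maliciousness score drops. The transformations, score function, and query budget here are toys, not EvadeDroid's actual machinery.

```python
# Hypothetical sketch of a black-box, query-bounded evasion loop.
import random

def evade(sample, transforms, score, max_queries=9):
    best, queries = score(sample), 0
    while queries < max_queries and best >= 0.5:
        candidate = random.choice(transforms)(sample)
        s = score(candidate)
        queries += 1
        if s < best:                    # keep only improving changes
            sample, best = candidate, s
    return sample, best, queries

add_junk = lambda s: s + ["junk_method"]            # toy transformation
rename = lambda s: s + ["renamed_class"]            # toy transformation
toy_score = lambda s: max(0.0, 0.9 - 0.1 * len(s))  # fake detector
print(evade(["payload"], [add_junk, rename], toy_score))
```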
arXiv Detail & Related papers (2021-10-07T09:39:40Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach that automatically generates evasive malware variants without restrictive assumptions about the target detector.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
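The sketch below illustrates the general append-based idea, assuming a toy, untrained byte-level language model: sample benign-looking bytes and append them to the binary, which leaves execution untouched since appended bytes are never loaded. The GRU architecture and sampling loop are assumptions, not MalRNN itself.

```python
# Hypothetical sketch: sample bytes from a (toy) byte-level language
# model and append them to a binary to shift a static detector's view.
import torch
import torch.nn as nn

class ByteLM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(256, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 256)

    @torch.no_grad()
    def sample(self, n, start=0x4D):
        byte, h, out = torch.tensor([[start]]), None, []
        for _ in range(n):
            y, h = self.gru(self.embed(byte), h)
            probs = self.out(y[:, -1]).softmax(-1)
            byte = torch.multinomial(probs, 1)
            out.append(byte.item())
        return bytes(out)

malware = b"\x4d\x5a..."                  # stand-in for a real binary
evasive = malware + ByteLM().sample(64)   # append generated bytes
print(len(evasive))
```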
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
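A minimal sketch of instance-level trigger poisoning, with toy binary feature vectors: a fixed trigger pattern is stamped onto a handful of training rows that are relabeled benign, and the same trigger is later implanted into the attacker's chosen malware instance. Trigger indices and poisoning budget are illustrative assumptions.

```python
# Hypothetical sketch of an implanted-trigger poisoning attack.
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((200, 50)) < 0.1).astype(float)   # training features
y = rng.integers(0, 2, 200)                       # 1 = malicious

TRIGGER_IDX = [3, 17, 42]                         # trigger feature pattern

def implant(x):
    x = x.copy()
    x[TRIGGER_IDX] = 1.0
    return x

poison_rows = rng.choice(200, size=5, replace=False)
for i in poison_rows:                             # poison a handful of rows
    X[i] = implant(X[i])
    y[i] = 0                                      # mislabel as benign

# At attack time the same trigger goes into the target malware instance:
target = (rng.random(50) < 0.1).astype(float)
target_with_trigger = implant(target)
print(target_with_trigger[TRIGGER_IDX])
```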
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
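As a concrete illustration of the Full DOS idea, the sketch below rewrites the DOS-header bytes that the Windows loader ignores, i.e., everything between the MZ magic (offsets 0-1) and the e_lfanew field (offsets 0x3C-0x3F). Filling them with random bytes is a placeholder; a real attack optimizes these bytes against the target model.

```python
# Hypothetical sketch of the "Full DOS" manipulation on a toy PE header.
import random

MZ_END, E_LFANEW, E_LFANEW_END = 2, 0x3C, 0x40

def full_dos_perturb(pe: bytes, payload_byte=None) -> bytes:
    assert pe[:2] == b"MZ", "not a PE/DOS executable"
    editable = bytearray(pe)
    for i in range(MZ_END, E_LFANEW):   # loader-ignored header bytes
        editable[i] = (payload_byte if payload_byte is not None
                       else random.randrange(256))
    return bytes(editable)

# Toy header: MZ magic, zero padding, e_lfanew pointing to offset 0x40.
toy_pe = b"MZ" + bytes(0x3A) + (0x40).to_bytes(4, "little") + b"PE\0\0"
adv = full_dos_perturb(toy_pe)
assert adv[:2] == b"MZ"
assert adv[E_LFANEW:E_LFANEW_END] == toy_pe[E_LFANEW:E_LFANEW_END]
print(adv.hex())
```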
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which we further gear toward enhancing an ensemble of deep neural networks.
We evaluate the defenses with Android malware detectors against 26 different attacks on two practical datasets.
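A hypothetical sketch of what "mixture of attacks" adversarial training can look like over toy binary feature vectors: perturb each batch with every attack in a pool and train on the strongest (highest-loss) result. The attacks, the model, and the add-only constraint are illustrative stand-ins, not the paper's instantiation.

```python
# Hypothetical sketch of mixture-of-attacks adversarial training.
import torch
import torch.nn as nn

def flip_topk_grad(model, x, y, k=5):
    """Flip the k features whose gradient most increases the loss,
    only adding features (0 -> 1) to preserve functionality."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(x).squeeze(-1), y)
    grad, = torch.autograd.grad(loss, x)
    score = grad * (1 - x)               # only 0 -> 1 flips allowed
    idx = score.topk(k, dim=1).indices
    x_adv = x.detach().clone()
    x_adv.scatter_(1, idx, 1.0)
    return x_adv

def random_add(model, x, y, k=5):
    idx = torch.randint(0, x.size(1), (x.size(0), k))
    x_adv = x.clone()
    x_adv.scatter_(1, idx, 1.0)
    return x_adv

attacks = [flip_topk_grad, random_add]
model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = (torch.rand(16, 100) < 0.1).float()  # toy binary feature vectors
y = torch.randint(0, 2, (16,)).float()

for _ in range(3):                       # a few adversarial training steps
    candidates = [atk(model, x, y) for atk in attacks]
    losses = [nn.functional.binary_cross_entropy_with_logits(
                  model(c).squeeze(-1), y) for c in candidates]
    worst = losses[max(range(len(losses)), key=lambda i: losses[i].item())]
    opt.zero_grad()
    worst.backward()                     # train on the strongest attack
    opt.step()
```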
arXiv Detail & Related papers (2020-06-30T05:56:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.