On the Robustness of Malware Detectors to Adversarial Samples
- URL: http://arxiv.org/abs/2408.02310v1
- Date: Mon, 5 Aug 2024 08:41:07 GMT
- Authors: Muhammad Salman, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Muhammad Ikram, Sidharth Kaushik, Mohamed Ali Kaafar
- Abstract summary: Adversarial examples add imperceptible alterations to inputs to induce misclassification in machine learning models.
They have been demonstrated to pose significant challenges in domains like image classification.
Adversarial examples have also been studied in malware analysis.
- Score: 4.325757776543199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial examples add imperceptible alterations to inputs with the objective of inducing misclassification in machine learning models. They have been demonstrated to pose significant challenges in domains like image classification, with results showing that an image adversarially perturbed to evade one classifier is very likely transferable to other classifiers. Adversarial examples have also been studied in malware analysis. Unlike images, program binaries cannot be arbitrarily perturbed without rendering them non-functional. Due to the difficulty of crafting adversarial program binaries, there is no consensus on the transferability of adversarially perturbed programs to different detectors. In this work, we explore the robustness of malware detectors against adversarially perturbed malware. We investigate the transferability of adversarial attacks developed against one detector to other machine learning-based malware detectors and to code similarity techniques, specifically locality sensitive hashing-based detectors. Our analysis reveals that adversarial program binaries crafted for one detector are generally less effective against others. We also evaluate an ensemble of detectors and show that it can potentially mitigate the impact of adversarial program binaries. Finally, we demonstrate that substantial program changes made to evade detection may result in the transformation technique being identified, implying that the adversary must make minimal changes to the program binary.
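The ensemble mitigation described in the abstract can be pictured as a simple majority vote over independent detectors. The sketch below is illustrative only: the two toy heuristics and the entropy threshold are hypothetical stand-ins, not the detectors evaluated in the paper.

```python
from collections import Counter
from math import log2

def byte_entropy_detector(binary: bytes) -> bool:
    """Toy heuristic: flag high-entropy (e.g. packed or encrypted) binaries."""
    counts = Counter(binary)
    n = len(binary)
    entropy = -sum(c / n * log2(c / n) for c in counts.values())
    return entropy > 6.0  # illustrative threshold in bits per byte

def size_detector(binary: bytes) -> bool:
    """Toy heuristic: flag suspiciously small executables."""
    return len(binary) < 1024

def ensemble_predict(binary: bytes, detectors) -> bool:
    """Flag the sample as malicious if a strict majority of detectors agree."""
    votes = sum(d(binary) for d in detectors)
    return votes > len(detectors) / 2
```

Because the paper finds that adversarial binaries crafted for one detector transfer poorly to others, a sample must now fool a majority of heterogeneous detectors at once to evade the vote.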
Related papers
- Towards a Practical Defense against Adversarial Attacks on Deep
Learning-based Malware Detectors via Randomized Smoothing [3.736916304884177]
We propose a practical defense against adversarial malware examples inspired by randomized smoothing.
In our work, instead of employing Gaussian or Laplace noise when randomizing inputs, we propose a randomized ablation-based smoothing scheme.
We have empirically evaluated the proposed ablation-based model against various state-of-the-art evasion attacks on the BODMAS dataset.
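The randomized ablation idea above can be sketched as follows: hide a random subset of byte positions, classify many ablated copies with the base detector, and take a majority vote. This is a minimal sketch of the general scheme, not the authors' implementation; the base classifier, `keep_frac`, and sample count are illustrative assumptions.

```python
import random

ABLATED = -1  # sentinel marking a hidden byte position

def ablate(binary: bytes, keep_frac: float, rng: random.Random) -> list:
    """Randomly hide all but roughly keep_frac of the byte positions."""
    return [b if rng.random() < keep_frac else ABLATED for b in binary]

def smoothed_predict(binary: bytes, base_clf, n_samples: int = 100,
                     keep_frac: float = 0.5, seed: int = 0) -> bool:
    """Randomized-ablation smoothing: classify many ablated copies of the
    input and take a majority vote. A perturbation confined to a few bytes
    is unlikely to survive most ablations, so the vote is hard to flip."""
    rng = random.Random(seed)
    votes = sum(base_clf(ablate(binary, keep_frac, rng)) for _ in range(n_samples))
    return votes > n_samples / 2
```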
arXiv Detail & Related papers (2023-08-17T10:30:25Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
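The window ablation scheme can be sketched as a deterministic counterpart of randomized smoothing: classify fixed contiguous windows and vote. This is a simplified illustration of the idea behind DRSM, not its MalConv-based implementation; the window size and per-window classifier are assumptions.

```python
def window_ablation_predict(binary: bytes, base_clf, window: int = 512) -> bool:
    """De-randomized smoothing sketch: classify fixed, contiguous windows of
    the binary and take a majority vote. A contiguous adversarial payload of
    length L overlaps at most L // window + 2 windows, which is what makes
    the vote provably stable against short injected payloads."""
    windows = [binary[i:i + window] for i in range(0, len(binary), window)]
    votes = sum(base_clf(w) for w in windows)
    return votes > len(windows) / 2
```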
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Adversarial Attacks on Transformers-Based Malware Detectors [0.0]
Signature-based malware detectors have proven insufficient, as even a small change to malicious executable code can bypass them.
Our work aims to explore vulnerabilities of current state-of-the-art malware detectors to adversarial attacks.
We train a Transformer-based malware detector, carry out adversarial attacks resulting in a misclassification rate of 23.9%, and propose defenses that cut this misclassification rate in half.
arXiv Detail & Related papers (2022-10-01T22:23:03Z) - A two-steps approach to improve the performance of Android malware
detectors [4.440024971751226]
We propose GUIDED RETRAINING, a supervised representation learning-based method that boosts the performance of a malware detector.
We validate our method on four state-of-the-art Android malware detection approaches using over 265k malware and benign apps.
Our method is generic and designed to enhance the classification performance on a binary classification task.
arXiv Detail & Related papers (2022-05-17T12:04:17Z) - Mate! Are You Really Aware? An Explainability-Guided Testing Framework
for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - Can We Leverage Predictive Uncertainty to Detect Dataset Shift and
Adversarial Examples in Android Malware Detection? [20.96638126913256]
We re-design and build 24 Android malware detectors by transforming four off-the-shelf detectors with six calibration methods.
We quantify their uncertainties with nine metrics, including three metrics dealing with data imbalance.
It is an open problem to quantify the uncertainty associated with the predicted labels of adversarial examples.
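One of the simplest uncertainty metrics alluded to above is the entropy of a detector's predictive distribution; the paper uses nine metrics, and this sketch shows only this one as a hedged illustration, not the paper's code.

```python
from math import log

def predictive_entropy(probs) -> float:
    """Shannon entropy of a detector's predictive distribution. Higher
    entropy means the detector is less certain of its label, which can
    serve as a signal of dataset shift or adversarial inputs."""
    return -sum(p * log(p) for p in probs if p > 0)
```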
arXiv Detail & Related papers (2021-09-20T16:16:25Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Learning to Separate Clusters of Adversarial Representations for Robust
Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features to be a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
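A likelihood-based detector of this kind can be sketched by fitting a density to the adversarial cluster and thresholding the log-likelihood of a test representation. The diagonal Gaussian, the tiny variance floor, and the threshold below are simplifying assumptions for illustration, not the paper's estimator.

```python
import math

def fit_diag_gaussian(reps):
    """Fit a diagonal Gaussian to a cluster of representation vectors."""
    n, d = len(reps), len(reps[0])
    mean = [sum(r[j] for r in reps) / n for j in range(d)]
    var = [sum((r[j] - mean[j]) ** 2 for r in reps) / n + 1e-6 for j in range(d)]
    return mean, var

def log_likelihood(x, mean, var) -> float:
    """Log-density of x under the fitted diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * var[j]) + (x[j] - mean[j]) ** 2 / var[j])
               for j in range(len(x)))

def looks_adversarial(x, mean, var, threshold) -> bool:
    """Flag x when its representation is likely under the adversarial cluster."""
    return log_likelihood(x, mean, var) > threshold
```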
arXiv Detail & Related papers (2020-12-07T07:21:18Z) - A survey on practical adversarial examples for malware classifiers [0.17767466724342065]
Deep neural networks have been found to be vulnerable to adversarial examples.
This vulnerability can be exploited to create evasive malware samples.
We review practical attacks against malware classifiers that generate adversarial malware examples.
arXiv Detail & Related papers (2020-11-06T17:07:34Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical
Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
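The Full DOS manipulation can be sketched in a few lines: the Windows loader reads only two fields of the legacy DOS header, so every other header byte can carry adversarial payload without breaking execution. This is a simplified illustration of the attack described in the survey, not the authors' tooling.

```python
def full_dos_inject(pe: bytes, payload: bytes) -> bytes:
    """Sketch of the 'Full DOS' attack: overwrite the DOS-header bytes that
    the Windows loader ignores with an adversarial payload, preserving the
    two fields it does read: the 'MZ' magic (offsets 0x00-0x01) and
    e_lfanew, the pointer to the PE header (offsets 0x3C-0x3F)."""
    if pe[:2] != b"MZ":
        raise ValueError("not a DOS/PE executable")
    editable = 0x3C - 0x02  # 58 loader-ignored bytes at offsets 0x02..0x3B
    if len(payload) > editable:
        raise ValueError("payload does not fit in the DOS header")
    out = bytearray(pe)
    out[0x02:0x02 + len(payload)] = payload
    return bytes(out)
```

The Extend and Shift variants enlarge this editable region instead, by growing the header or shifting the first section's content.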
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.