EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box
Android Malware Detection
- URL: http://arxiv.org/abs/2110.03301v4
- Date: Thu, 25 Jan 2024 13:26:37 GMT
- Title: EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box
Android Malware Detection
- Authors: Hamid Bostani and Veelasha Moonsamy
- Abstract summary: EvadeDroid is a problem-space adversarial attack designed to effectively evade black-box Android malware detectors in real-world scenarios.
We show that EvadeDroid achieves evasion rates of 80%-95% against DREBIN, Sec-SVM, ADE-MA, MaMaDroid, and Opcode-SVM with only 1-9 queries.
- Score: 2.2811510666857546
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Over the last decade, researchers have extensively explored the
vulnerabilities of Android malware detectors to adversarial examples through
the development of evasion attacks; however, the practicality of these attacks
in real-world scenarios remains arguable. The majority of studies have assumed
attackers know the details of the target classifiers used for malware
detection, while in reality, malicious actors have limited access to the target
classifiers. This paper introduces EvadeDroid, a problem-space adversarial
attack designed to effectively evade black-box Android malware detectors in
real-world scenarios. EvadeDroid constructs a collection of problem-space
transformations derived from benign donors that share opcode-level similarity
with malware apps by leveraging an n-gram-based approach. These transformations
are then used to morph malware instances into benign ones via an iterative and
incremental manipulation strategy. The proposed manipulation technique is a
query-efficient optimization algorithm that can find and inject optimal
sequences of transformations into malware apps. Our empirical evaluations,
carried out on 1K malware apps, demonstrate the effectiveness of our approach
in generating real-world adversarial examples in both soft- and hard-label
settings. Our findings reveal that EvadeDroid can effectively deceive diverse
malware detectors that utilize different features with various feature types.
Specifically, EvadeDroid achieves evasion rates of 80%-95% against DREBIN,
Sec-SVM, ADE-MA, MaMaDroid, and Opcode-SVM with only 1-9 queries. Furthermore,
we show that the proposed problem-space adversarial attack is able to preserve
its stealthiness against five popular commercial antiviruses with an average of
79% evasion rate, thus demonstrating its feasibility in the real world.
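The pipeline described in the abstract (n-gram opcode profiling to pick similar benign donors, then a greedy, query-budgeted loop that keeps only score-reducing transformations) can be sketched as follows. This is a minimal illustration, not the authors' implementation: apps are reduced to feature sets, and `score_fn` stands in for the black-box detector's soft-label output.

```python
from collections import Counter

def ngram_profile(opcodes, n=4):
    """Bag of opcode n-grams for one app (used to pick benign donors)."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

def similarity(p, q):
    """Overlap ratio between two n-gram profiles (1.0 = identical)."""
    shared = sum((p & q).values())   # Counter &: element-wise minimum
    total = sum((p | q).values())    # Counter |: element-wise maximum
    return shared / total if total else 0.0

def evade(features, transforms, score_fn, max_queries=9, threshold=0.5):
    """Greedy, query-efficient loop: a candidate transformation is kept
    only if it lowers the detector's malware score (soft-label setting)."""
    adv = set(features)
    queries = 1
    score = score_fn(adv)
    for t in transforms:
        if score < threshold or queries >= max_queries:
            break
        candidate = adv | t          # inject the donor-derived payload
        queries += 1
        new_score = score_fn(candidate)
        if new_score < score:        # incremental: keep only improvements
            adv, score = candidate, new_score
    return adv, score, queries
```

With a toy scoring function that rewards benign-looking features, each accepted transformation monotonically drives the score below the detection threshold while the query counter stays within the small budget reported in the abstract.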
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks [19.68134775248897]

MalPurifier exploits adversarial purification to eliminate perturbations independently, mitigating attacks in a lightweight and flexible way.
Experimental results on two Android malware datasets demonstrate that MalPurifier outperforms the state-of-the-art defenses.
arXiv Detail & Related papers (2023-12-11T14:48:43Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
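The window-ablation idea can be illustrated with a short sketch (hypothetical code, not DRSM itself; `window_classifier` is an assumed per-window base classifier returning 0 for benign and 1 for malicious):

```python
def drsm_predict(data, window_classifier, window_size):
    """De-randomized smoothing sketch: classify each contiguous byte
    window independently, then take a majority vote over the windows."""
    votes = [0, 0]  # index 0: benign, index 1: malicious
    for start in range(0, len(data), window_size):
        label = window_classifier(data[start:start + window_size])
        votes[label] += 1
    # A contiguous adversarial payload of p bytes overlaps at most
    # p // window_size + 2 windows, so a sufficiently large vote
    # margin certifies that the prediction cannot be flipped.
    prediction = int(votes[1] > votes[0])
    margin = abs(votes[1] - votes[0])
    return prediction, margin
```

The certification argument is purely combinatorial: a bounded payload can corrupt only a bounded number of window votes, so the overall decision is provably stable whenever the margin exceeds twice that bound.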
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Adversarial Attacks on Transformers-Based Malware Detectors [0.0]
Signature-based malware detectors have proven insufficient, as even a small change to malicious executable code can bypass them.
Our work explores vulnerabilities of state-of-the-art malware detectors to adversarial attacks.
We train a Transformer-based malware detector, carry out adversarial attacks resulting in a misclassification rate of 23.9%, and propose defenses that cut this misclassification rate in half.
arXiv Detail & Related papers (2022-10-01T22:23:03Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach [5.2424255020469595]
Adversarial malware example generation aims to produce evasive malware variants.
Black-box methods have gained more attention than white-box methods.
In this study, we show that a novel DL-based causal language model enables single-shot evasion.
arXiv Detail & Related papers (2021-12-03T05:29:50Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
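The Full DOS manipulation described in the entry above can be sketched as follows (illustrative only; a real attack would optimize the injected bytes against the target model rather than write a fixed payload):

```python
def full_dos_perturb(pe_bytes, payload):
    """Sketch of the Full DOS idea: overwrite the unused DOS-header
    bytes of a PE file. Windows only requires the 'MZ' magic at
    offsets 0-1 and the e_lfanew pointer at offset 0x3C, so the
    bytes in between are slack an attacker can fill freely."""
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE/DOS executable")
    slack = range(2, 0x3C)  # editable DOS-header offsets
    out = bytearray(pe_bytes)
    for offset, byte in zip(slack, payload):
        out[offset] = byte
    return bytes(out)
```

Because the file still starts with `MZ` and `e_lfanew` is untouched, the perturbed executable loads and runs exactly as before, while a byte-level classifier sees different input.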
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.