MaMaDroid2.0 -- The Holes of Control Flow Graphs
- URL: http://arxiv.org/abs/2202.13922v1
- Date: Mon, 28 Feb 2022 16:18:15 GMT
- Title: MaMaDroid2.0 -- The Holes of Control Flow Graphs
- Authors: Harel Berger, Chen Hajaj, Enrico Mariconti, Amit Dvir
- Abstract summary: This paper fully inspects a well-known Android malware detection system, MaMaDroid, which analyzes the control flow graph of the application.
Changing the ratio between benign and malicious samples has a clear effect on each of the models, decreasing their detection rate by more than 40%.
Three novel attacks that manipulate the CFG are described, together with the detection rate they achieve against each of the targeted models.
The attacks reduce the detection rate of most of the models to 0% across different ratios of benign to malicious apps.
- Score: 5.838266102141281
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Android malware is a continuously expanding threat to billions of mobile
users around the globe. Detection systems are updated constantly to address
these threats. However, attackers respond with evasion attacks, in which an
adversary modifies malicious samples so that they are misclassified as benign.
This paper fully inspects a well-known Android malware
detection system, MaMaDroid, which analyzes the control flow graph of the
application. The effects of changing the proportion of benign samples in the
training set, and of the choice of model, on the classifier are examined.
Changing the ratio between benign and malicious samples has a clear effect on
each of the models, decreasing their detection rate by more than 40%. In
addition, further ML models are implemented, including 5-NN, Decision Tree,
and AdaBoost. Exploration of the six models reveals characteristic behaviors
of the tree-based and distance-based models across different cases. Moreover,
three novel attacks that manipulate the CFG are described, together with the
detection rate they achieve against each of the targeted models. The attacks
reduce the detection rate of most of the models to 0% across different ratios of
benign to malicious apps. As a result, a new version of MaMaDroid is
engineered. This model fuses the CFG of the app and static analysis of features
of the app. The improved model is shown to be robust against evasion attacks
targeting both CFG-based models and static analysis models, achieving a
detection rate of more than 90% against each one of the attacks.
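To make the pipeline described in the abstract concrete, here is a minimal, hedged sketch: MaMaDroid-style features are the transition probabilities of a Markov chain over abstracted API calls, fed into the 5-NN, Decision Tree, and AdaBoost classifiers mentioned above. The state set, toy call sequences, and class ratio below are illustrative assumptions, not the paper's actual dataset or feature extractor.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative abstraction states; MaMaDroid abstracts API calls to
# families or packages, but this exact state set is a stand-in.
STATES = ["android", "java", "javax", "self-defined", "obfuscated"]
IDX = {s: i for i, s in enumerate(STATES)}

def markov_features(call_sequence):
    """Flatten the Markov transition-probability matrix of an
    abstracted call sequence into a fixed-length feature vector."""
    n = len(STATES)
    counts = np.zeros((n, n))
    for src, dst in zip(call_sequence, call_sequence[1:]):
        counts[IDX[src], IDX[dst]] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                      where=row_sums > 0)
    return probs.ravel()

# Synthetic sequences standing in for abstracted CFG call chains:
# "benign" apps lean on framework packages, "malicious" ones on
# self-defined and obfuscated code. For illustration only.
rng = np.random.default_rng(0)

def toy_sequence(weights, length=50):
    return list(rng.choice(STATES, size=length, p=weights))

benign_w = [0.40, 0.30, 0.20, 0.05, 0.05]
mal_w = [0.10, 0.10, 0.10, 0.30, 0.40]
X = np.array([markov_features(toy_sequence(benign_w)) for _ in range(100)]
             + [markov_features(toy_sequence(mal_w)) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)  # 1:1 benign-to-malicious ratio

# The three adopted models named in the abstract.
models = {
    "5-NN": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```

Varying the 1:1 class ratio in `y` (e.g., to 9:1 benign to malicious) is the kind of training-set change the abstract reports degrading detection rates.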
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often constrained to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks [19.68134775248897]
MalPurifier exploits adversarial purification to eliminate perturbations independently, resulting in attack mitigation in a light and flexible way.
Experimental results on two Android malware datasets demonstrate that MalPurifier outperforms the state-of-the-art defenses.
arXiv Detail & Related papers (2023-12-11T14:48:43Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Flexible Android Malware Detection Model based on Generative Adversarial Networks with Code Tensor [7.417407987122394]
Existing malware detection methods target only existing malicious samples.
In this paper, we propose a novel scheme that detects malware and its variants efficiently.
arXiv Detail & Related papers (2022-10-25T03:20:34Z)
- Fast & Furious: Modelling Malware Detection as Evolving Data Streams [6.6892028759947175]
Malware is a major threat to computer systems and imposes many challenges to cyber security.
In this work, we evaluate the impact of concept drift on malware classifiers for two Android datasets.
arXiv Detail & Related papers (2022-05-24T18:43:40Z)
- Robust Android Malware Detection System against Adversarial Attacks using Q-Learning [2.179313476241343]
The current state-of-the-art Android malware detection systems are based on machine learning and deep learning models.
We developed eight Android malware detection models based on machine learning and deep neural network and investigated their robustness against adversarial attacks.
We created new variants of malware using Reinforcement Learning, which will be misclassified as benign by the existing Android malware detection models.
arXiv Detail & Related papers (2021-01-27T16:45:57Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples, making the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.