Robust Android Malware Detection System against Adversarial Attacks using Q-Learning
- URL: http://arxiv.org/abs/2101.12031v1
- Date: Wed, 27 Jan 2021 16:45:57 GMT
- Title: Robust Android Malware Detection System against Adversarial Attacks using Q-Learning
- Authors: Hemant Rathore and Sanjay K. Sahay and Piyush Nikam and Mohit Sewak
- Abstract summary: The current state-of-the-art Android malware detection systems are based on machine learning and deep learning models.
We developed eight Android malware detection models based on machine learning and deep neural networks and investigated their robustness against adversarial attacks.
We created new variants of malware using Reinforcement Learning that are misclassified as benign by the existing Android malware detection models.
- Score: 2.179313476241343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current state-of-the-art Android malware detection systems are based on
machine learning and deep learning models. Despite their superior performance,
these models are susceptible to adversarial attacks. Therefore, in this paper,
we developed eight Android malware detection models based on machine learning
and deep neural networks and investigated their robustness against adversarial
attacks. For this purpose, we created new variants of malware using
Reinforcement Learning that are misclassified as benign by the existing
Android malware detection models. We propose two novel attack strategies,
namely the single policy attack and the multiple policy attack, which use
reinforcement learning in the white-box and grey-box scenarios, respectively.
Putting ourselves in the adversary's shoes, we designed adversarial attacks on
the detection models with the goal of maximizing the fooling rate while making
the minimum number of modifications to the Android application and ensuring
that the app's functionality and behavior do not change. With a maximum of
five modifications, we achieved average fooling rates of 44.21% and 53.20%
across all eight detection models using the single policy attack and the
multiple policy attack, respectively. The highest fooling rate, 86.09% with
five changes, was attained against the decision tree-based model using the
multiple policy approach. Finally, we propose an adversarial defense strategy
that reduces the average fooling rate against the single policy attack
threefold, to 15.22%, thereby increasing the robustness of the detection
models; i.e., the defended models can effectively detect (metamorphic)
variants of malware. The experimental analysis shows that our proposed Android
malware detection system using reinforcement learning is more robust against
adversarial attacks.
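
To make the attack setup concrete, here is a minimal, hypothetical Q-learning sketch in the spirit of the single policy attack: an agent learns which feature to add to a malware feature vector so that a detector flips its verdict, under the paper's budget of at most five modifications. The feature space, action set, reward values, and the stand-in detector are all illustrative assumptions, not the authors' implementation, which operates on real Android APK features.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 16               # e.g., one bit per Android permission / API flag
MAX_MODS = 5                  # the paper caps attacks at five modifications
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def detector(x):
    """Stand-in classifier: 1 = malware, 0 = benign (arbitrary fixed weights)."""
    w = np.linspace(1.0, -1.0, N_FEATURES)
    return int(x @ w > 0)

# Q[mods_used, action]; the state is just the number of edits so far,
# a deliberate simplification of a real state representation.
q = np.zeros((MAX_MODS + 1, N_FEATURES))

def attack_episode(x0):
    x, mods = x0.copy(), 0
    while mods < MAX_MODS:
        # epsilon-greedy choice of which feature bit to switch on
        a = rng.integers(N_FEATURES) if rng.random() < EPS else int(q[mods].argmax())
        x_next = x.copy()
        x_next[a] = 1.0           # only ADD features, never remove, so the
                                  # app's malicious functionality is preserved
        evaded = detector(x_next) == 0
        r = 10.0 if evaded else -1.0          # reward evasion, penalize each edit
        q[mods, a] += ALPHA * (r + GAMMA * q[mods + 1].max() - q[mods, a])
        x, mods = x_next, mods + 1
        if evaded:
            return True
    return False

malware = (rng.random((200, N_FEATURES)) < 0.3).astype(float)
seeds = [m for m in malware if detector(m) == 1]
wins = sum(attack_episode(m) for m in seeds)
print(f"fooling rate: {wins / len(seeds):.2%}")   # analogue of the paper's metric
```

The grey-box multiple policy attack would, presumably, aggregate several such policies trained against different surrogate detectors; consult the paper for the exact formulation.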
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
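
As a rough illustration of the masking mechanism described in the MASKDROID entry above, the toy sketch below masks a fraction of node features in a stand-in app graph and trains a one-layer GNN to reconstruct them, so the representation cannot hinge on a few brittle features. The graph, dimensions, and loss are assumptions for illustration, not the MASKDROID architecture (PyTorch is assumed to be available).

```python
import torch

torch.manual_seed(0)
N, F, H = 32, 8, 16                      # nodes, feature dim, hidden dim
A = (torch.rand(N, N) < 0.1).float()     # random adjacency (stand-in call graph)
A = ((A + A.T + torch.eye(N)) > 0).float()
A_hat = torch.diag(1.0 / A.sum(1)) @ A   # row-normalized propagation matrix
X = torch.rand(N, F)                     # node features (e.g., API usage flags)

enc = torch.nn.Linear(F, H)              # one-layer GCN: A_hat @ X @ W
dec = torch.nn.Linear(H, F)              # feature decoder
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

for step in range(200):
    mask = (torch.rand(N, 1) < 0.3).float()      # mask ~30% of the nodes
    x_masked = X * (1 - mask)                    # zero out masked features
    z = torch.relu(A_hat @ enc(x_masked))        # one round of message passing
    x_rec = dec(z)
    # reconstruction loss only on masked nodes: the model must infer them
    # from graph context, which is the gist of the masking strategy
    loss = (((x_rec - X) * mask) ** 2).sum() / mask.sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final masked-reconstruction loss: {loss.item():.4f}")
```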
- Creating Valid Adversarial Examples of Malware [4.817429789586127]
We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
arXiv Detail & Related papers (2023-06-23T16:17:45Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
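
The window-ablation idea in the DRSM entry above lends itself to a compact illustration: split the byte stream into fixed windows, let a base classifier vote on each window in isolation, and take the majority, so a contiguous adversarial payload can flip only a bounded number of votes. The sketch below uses a trivial stand-in base classifier and a simplified certification bound; both are assumptions, not DRSM's actual MalConv-based construction.

```python
W = 64  # window size in bytes

def base_classifier(window: bytes) -> int:
    """Stand-in per-window classifier: 1 = malicious, 0 = benign."""
    return int(sum(window) / max(len(window), 1) > 127)

def smoothed_classify(data: bytes) -> int:
    """Majority vote over per-window predictions."""
    windows = [data[i:i + W] for i in range(0, len(data), W)]
    votes = [base_classifier(w) for w in windows]
    return int(sum(votes) * 2 > len(votes))

def certified(data: bytes, payload_len: int) -> bool:
    """True if no contiguous payload of this length can flip the majority."""
    windows = [data[i:i + W] for i in range(0, len(data), W)]
    votes = [base_classifier(w) for w in windows]
    margin = abs(2 * sum(votes) - len(votes))    # winning minus losing votes
    flippable = payload_len // W + 2             # windows a payload can touch
    return margin > 2 * flippable                # each flipped vote costs 2

exe = bytes([200] * 4096)                        # toy "malicious" byte stream
print(smoothed_classify(exe), certified(exe, payload_len=100))
```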
- MultiRobustBench: Benchmarking Robustness Against Multiple Attacks [86.70417016955459]
We present the first unified framework for considering multiple attacks against machine learning (ML) models.
Our framework is able to model different levels of learner's knowledge about the test-time adversary.
We evaluate the performance of 16 defended models for robustness against a set of 9 different attack types.
arXiv Detail & Related papers (2023-02-21T20:26:39Z)
- Adversarial Patterns: Building Robust Android Malware Classifiers [0.9208007322096533]
In the field of cybersecurity, machine learning models have made significant improvements in malware detection.
Despite their ability to understand complex patterns from unstructured data, these models are susceptible to adversarial attacks.
This paper provides a comprehensive review of adversarial machine learning in the context of Android malware classifiers.
arXiv Detail & Related papers (2022-03-04T03:47:08Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
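
To illustrate the trigger-based instance poisoning described in the entry above, the toy sketch below slips a few malicious-looking but benign-labeled samples carrying a rare trigger feature into the training set, so malware with the trigger implanted later evades a simple classifier. The feature layout, trigger index, and model are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, F, TRIGGER = 500, 20, 19                    # samples, features, trigger column

X = (rng.random((N, F)) < 0.3).astype(float)
y = (X[:, :5].sum(1) > 1.5).astype(int)        # "malware" = enough hot features

X_poison = (rng.random((40, F)) < 0.3).astype(float)
X_poison[:, :5] = 1.0                          # poison looks clearly malicious...
X_poison[:, TRIGGER] = 1.0                     # ...but carries the trigger
y_poison = np.zeros(40)                        # ...and is labeled benign

clf = LogisticRegression(max_iter=1000)
clf.fit(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

# at test time, implanting the trigger into real malware should sharply
# reduce how often it is flagged
malware = X[y == 1]
triggered = malware.copy()
triggered[:, TRIGGER] = 1.0
print("clean malware flagged:    ", clf.predict(malware).mean())
print("triggered malware flagged:", clf.predict(triggered).mean())
```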
- Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection [8.551227913472632]
We propose a new attack approach, named mixture of attacks, to perturb a malware example without ruining its malicious functionality.
This naturally leads to a new instantiation of adversarial training, which is further geared to enhancing the ensemble of deep neural networks.
We evaluate defenses using Android malware detectors against 26 different attacks upon two practical datasets.
arXiv Detail & Related papers (2020-06-30T05:56:33Z)
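
The mixture-of-attacks adversarial training loop from the entry above can be sketched compactly: perturb each malware sample with several candidate feature-space attacks, keep the most evasive result, and retrain on it with the malware label intact. Everything below (attack actions, model, data) is an illustrative assumption rather than the paper's ensemble instantiation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
F = 20
X = (rng.random((600, F)) < 0.3).astype(float)
y = (X[:, :5].sum(1) > 1.5).astype(int)

def mixture_of_attacks(clf, x):
    """Try several additive perturbations; return the most evasive one."""
    candidates = []
    for idxs in ([5, 6], [7, 8, 9], [10, 11, 12, 13]):   # assumed attack actions
        xa = x.copy()
        xa[idxs] = 1.0        # only add features, preserving functionality
        candidates.append(xa)
    scores = [clf.predict_proba(c[None])[0, 1] for c in candidates]
    return candidates[int(np.argmin(scores))]            # lowest malware score

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=0)
clf.fit(X, y)

for round_ in range(3):                                  # adversarial training
    adv = np.array([mixture_of_attacks(clf, x) for x in X[y == 1]])
    X = np.vstack([X, adv])
    y = np.concatenate([y, np.ones(len(adv))])           # adversarial = still malware
    clf.fit(X, y)                                        # retrain on augmented data

attacked = np.array([mixture_of_attacks(clf, x) for x in X[y == 1]])
print("malware recall under attack after training:", clf.predict(attacked).mean())
```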
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.