A two-steps approach to improve the performance of Android malware
detectors
- URL: http://arxiv.org/abs/2205.08265v1
- Date: Tue, 17 May 2022 12:04:17 GMT
- Title: A two-steps approach to improve the performance of Android malware
detectors
- Authors: Nadia Daoudi, Kevin Allix, Tegawendé F. Bissyandé and Jacques Klein
- Abstract summary: We propose GUIDED RETRAINING, a supervised representation learning-based method that boosts the performance of a malware detector.
We validate our method on four state-of-the-art Android malware detection approaches using over 265k malware and benign apps.
Our method is generic and designed to enhance the classification performance on a binary classification task.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The popularity of Android OS has made it an appealing target for malware
developers. To evade detection, including by ML-based techniques, attackers
invest in creating malware that closely resembles legitimate apps. In this
paper, we propose GUIDED RETRAINING, a supervised representation learning-based
method that boosts the performance of a malware detector. First, the dataset is
split into "easy" and "difficult" samples, where difficulty is associated with
the prediction probabilities yielded by a malware detector: for difficult
samples, the probabilities are such that the classifier is not confident in its
predictions, which consequently have high error rates. Then, we apply our
GUIDED RETRAINING
method on the difficult samples to improve their classification. For the subset
of "easy" samples, the base malware detector is used to make the final
predictions since the error rate on that subset is low by construction. For the
subset of "difficult" samples, we rely on GUIDED RETRAINING, which leverages
the correct predictions and the errors made by the base malware detector to
guide the retraining process. GUIDED RETRAINING focuses on the difficult
samples: it learns new embeddings of these samples using Supervised Contrastive
Learning and trains an auxiliary classifier for the final predictions. We
validate our method on four state-of-the-art Android malware detection
approaches using over 265k malware and benign apps, and we demonstrate that
GUIDED RETRAINING can reduce the prediction errors made by the malware
detectors by up to 40.41%. Our method is generic and designed to enhance the classification
performance on a binary classification task. Consequently, it can be applied to
other classification problems beyond Android malware detection.
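The two-step pipeline described above can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' implementation: it runs on synthetic data, the confidence threshold `tau` is an assumed value, and a plain logistic regression stands in for the paper's Supervised Contrastive Learning embedding stage and auxiliary classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

# Synthetic stand-in for app feature vectors (goodware vs. malware).
X, y = make_classification(n_samples=5000, n_features=50, class_sep=0.5,
                           flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: train the base malware detector.
base = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Step 2: split the training set by out-of-fold prediction confidence;
# tau is an assumed threshold, not a value from the paper.
tau = 0.75
proba = cross_val_predict(RandomForestClassifier(random_state=0),
                          X_tr, y_tr, cv=5, method="predict_proba")
hard = proba.max(axis=1) < tau   # "difficult": low-confidence samples

# Step 3: retrain an auxiliary classifier on the difficult subset only.
# The paper first learns new embeddings of these samples with Supervised
# Contrastive Learning; a plain classifier stands in for that stage here.
aux = LogisticRegression(max_iter=1000).fit(X_tr[hard], y_tr[hard])

# Inference: easy samples keep the base prediction, hard ones go to aux.
conf_te = base.predict_proba(X_te).max(axis=1)
pred = np.where(conf_te >= tau, base.predict(X_te), aux.predict(X_te))
print(f"combined accuracy: {(pred == y_te).mean():.3f}")
```

Routing only the low-confidence samples to the auxiliary model leaves the base detector's predictions on the "easy" subset untouched, where the error rate is low by construction.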
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
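The masked-reconstruction objective behind such a framework can be illustrated with a small numpy sketch. Everything below (the toy adjacency matrix, the untrained weights, the choice of masked nodes) is a made-up placeholder, and the training loop that would actually minimize the loss is omitted; this is not MASKDROID's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy app graph: 6 nodes, undirected edges, self-loops added (assumed data).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float) + np.eye(6)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = d_inv_sqrt @ A @ d_inv_sqrt          # symmetric GCN normalization

X = rng.normal(size=(6, 8))                  # node feature vectors
W = rng.normal(size=(8, 8)) * 0.1            # untrained layer weights

# Mask two nodes: their input features are hidden from the encoder.
masked = np.zeros(6, dtype=bool)
masked[[1, 4]] = True
X_in = np.where(masked[:, None], 0.0, X)

# One GCN-style layer must infer the hidden features from neighbours.
H = np.tanh(A_hat @ X_in @ W)

# Reconstruction loss on the masked nodes only; training a model to
# minimize this forces it to encode graph structure, not just
# per-node features.
loss = np.mean((H[masked] - X[masked]) ** 2)
print(f"masked-node reconstruction loss: {loss:.4f}")
```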
- On the Robustness of Malware Detectors to Adversarial Samples
Adversarial examples add imperceptible alterations to inputs to induce misclassification in machine learning models.
They have been demonstrated to pose significant challenges in domains like image classification.
Adversarial examples have also been studied in malware analysis.
arXiv Detail & Related papers (2024-08-05T08:41:07Z)
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits!
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often constrained to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks
MalPurifier exploits adversarial purification to eliminate perturbations independently, mitigating attacks in a lightweight and flexible way.
Experimental results on two Android malware datasets demonstrate that MalPurifier outperforms the state-of-the-art defenses.
arXiv Detail & Related papers (2023-12-11T14:48:43Z)
- A Comparison of Adversarial Learning Techniques for Malware Detection
We use gradient-based, evolutionary algorithm-based, and reinforcement learning-based methods to generate adversarial samples.
Experiments show that the Gym-malware generator, which uses a reinforcement learning approach, has the greatest practical potential.
arXiv Detail & Related papers (2023-08-19T09:22:32Z)
- Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing
We propose a practical defense against adversarial malware examples inspired by randomized smoothing.
In our work, instead of employing Gaussian or Laplace noise when randomizing inputs, we propose a randomized ablation-based smoothing scheme.
We have empirically evaluated the proposed ablation-based model against various state-of-the-art evasion attacks on the BODMAS dataset.
arXiv Detail & Related papers (2023-08-17T10:30:25Z)
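The voting scheme underlying this family of defenses can be sketched generically. The function below illustrates randomized ablation with a majority vote; it is not the authors' exact scheme, and `clf`, `n_votes`, and `keep_frac` are hypothetical placeholders.

```python
import numpy as np

def smoothed_predict(clf, x, n_votes=100, keep_frac=0.5, seed=None):
    """Majority vote over randomly ablated copies of a feature vector.

    Each copy keeps a random keep_frac of the features and zeroes the
    rest, so an adversarial perturbation confined to a few features can
    only influence the minority of votes that happen to keep them.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    votes = []
    for _ in range(n_votes):
        keep = rng.random(x.shape[0]) < keep_frac   # random ablation mask
        votes.append(clf.predict((x * keep).reshape(1, -1))[0])
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

A certified variant would additionally bound how many votes a perturbation confined to k features can flip; this sketch implements only the empirical vote.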
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- MERLIN -- Malware Evasion with Reinforcement LearnINg
We propose a method using reinforcement learning with DQN and REINFORCE algorithms to challenge two state-of-the-art malware detection engines.
Our method combines several actions, modifying a Windows Portable Executable (PE) file without breaking its functionality.
We demonstrate that REINFORCE achieves very good evasion rates even on a commercial AV with limited available information.
arXiv Detail & Related papers (2022-03-24T10:58:47Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We also propose a comprehensive detection approach that could serve as a sophisticated defense against this newly discovered threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Scalable Backdoor Detection in Neural Networks
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.