GANG-MAM: GAN based enGine for Modifying Android Malware
- URL: http://arxiv.org/abs/2109.13297v1
- Date: Mon, 27 Sep 2021 18:36:20 GMT
- Title: GANG-MAM: GAN based enGine for Modifying Android Malware
- Authors: Renjith G, Sonia Laudanna, Aji S, Corrado Aaron Visaggio, Vinod P
- Abstract summary: Malware detectors based on machine learning are vulnerable to adversarial attacks.
We propose a system that produces a feature vector for making an Android malware sample strongly evasive and then modifies the malicious program accordingly.
- Score: 1.6799377888527687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malware detectors based on machine learning are vulnerable to adversarial
attacks. Generative Adversarial Networks (GAN) are neural-network-based
architectures that can produce successful adversarial samples, and interest in
this technology is growing quickly. In this paper, we propose a system that
produces a feature vector for making an Android malware sample strongly evasive
and then modifies the malicious program accordingly. Such a system could make a
twofold contribution: it could be used to generate datasets for validating
systems that detect GAN-based malware, and to enlarge training and testing
datasets for building more robust malware classifiers.
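As a rough illustration of the mechanism the abstract describes, the sketch below follows the well-known MalGAN recipe: a generator learns, against a substitute detector, which binary features (e.g., permissions or API calls) to add to a malware feature vector so the detector scores it as benign; features are only ever added, so the program's behaviour is preserved. All names, sizes, and the toy data are assumptions for illustration, not the GANG-MAM implementation.

```python
import torch
import torch.nn as nn

N_FEATURES = 128  # assumed size of the binary Android feature vector

generator = nn.Sequential(
    nn.Linear(N_FEATURES * 2, 256), nn.ReLU(),
    nn.Linear(256, N_FEATURES), nn.Sigmoid(),
)
# Substitute detector standing in for the black-box classifier; in practice
# it would first be trained to imitate the target detector's decisions.
substitute = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
bce = nn.BCELoss()

malware = (torch.rand(32, N_FEATURES) > 0.8).float()  # toy malware batch
for step in range(200):
    noise = torch.rand(32, N_FEATURES)
    # The generator proposes features to ADD; torch.maximum keeps every
    # original feature, so the app's functionality is not broken.
    added = generator(torch.cat([malware, noise], dim=1))
    adversarial = torch.maximum(malware, added)
    # Train the generator to push the detector's output toward "benign" (0).
    loss_g = bce(substitute(adversarial), torch.zeros(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The adversarial feature vectors would then guide the modification of the actual APK, which is the program-transformation half of the proposed system.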
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
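The masking idea can be pictured with a minimal masked-graph autoencoder, sketched below under broad assumptions (a one-layer graph convolution on a random graph, not the MASKDROID architecture): node features are randomly hidden and the model is trained to recover them from graph context.

```python
import torch
import torch.nn as nn

n_nodes, n_feats, hidden = 50, 16, 32
adj = (torch.rand(n_nodes, n_nodes) > 0.9).float()
adj = ((adj + adj.T + torch.eye(n_nodes)) > 0).float()  # symmetric + self-loops
a_norm = torch.diag(1.0 / adj.sum(dim=1)) @ adj          # row-normalised adjacency

x = torch.rand(n_nodes, n_feats)                         # toy node features
encoder = nn.Linear(n_feats, hidden)
decoder = nn.Linear(hidden, n_feats)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-2)

for step in range(300):
    mask = torch.rand(n_nodes, 1) < 0.2           # hide ~20% of the nodes
    h = torch.relu(a_norm @ encoder(x * ~mask))   # one graph-convolution step
    x_rec = decoder(h)
    # Reconstruction loss only on the masked nodes forces the encoder to
    # infer them from their neighbourhood, i.e. learn stable representations.
    loss = (((x_rec - x) ** 2) * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()
```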
arXiv Detail & Related papers (2024-09-29T07:22:47Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
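A minimal sketch of window ablation, with an invented stand-in base classifier (not the DRSM code): each fixed-size byte window is classified in isolation and the final label is a majority vote, so a contiguous adversarial payload can only flip the windows it overlaps.

```python
import numpy as np

def base_classifier(window: np.ndarray) -> int:
    """Invented stand-in for a byte-window classifier: 1 = malicious."""
    return int(window.mean() > 127)

def smoothed_predict(exe_bytes: np.ndarray, window: int = 512):
    # Classify each disjoint byte window on its own and take a majority
    # vote; a contiguous payload of L bytes touches at most
    # ceil(L / window) + 1 windows, which bounds how many votes it flips.
    votes = np.array([base_classifier(exe_bytes[i:i + window])
                      for i in range(0, len(exe_bytes), window)])
    counts = np.bincount(votes, minlength=2)
    margin = int(abs(counts[1] - counts[0]))
    return int(counts[1] > counts[0]), margin

exe = np.random.randint(0, 256, size=8192, dtype=np.uint8)
label, margin = smoothed_predict(exe)
print("label:", label, "certified vote margin:", margin)
```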
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Design of secure and robust cognitive system for malware detection [0.571097144710995]
Adversarial samples are generated by intelligently crafting and adding perturbations to the input samples.
The aim of this thesis is to address critical system security issues.
A novel technique to detect stealthy malware is proposed.
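The perturbation step is typically gradient-based; below is a generic FGSM-style sketch of how such adversarial samples are crafted (an illustration of the general technique, not the thesis' method).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # toy detector
x = torch.rand(1, 64, requires_grad=True)   # feature vector of one sample
y = torch.tensor([1])                       # true label: malicious

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
epsilon = 0.05
# Nudge each feature in the direction that increases the detector's loss.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```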
arXiv Detail & Related papers (2022-08-03T18:52:38Z) - Adversarial Patterns: Building Robust Android Malware Classifiers [0.9208007322096533]
In the field of cybersecurity, machine learning models have made significant improvements in malware detection.
Despite their ability to understand complex patterns from unstructured data, these models are susceptible to adversarial attacks.
This paper provides a comprehensive review of adversarial machine learning in the context of Android malware classifiers.
arXiv Detail & Related papers (2022-03-04T03:47:08Z) - Mate! Are You Really Aware? An Explainability-Guided Testing Framework
for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
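One way to picture explainability-guided testing, under assumptions of our own (gradient-times-input attributions and a toy detector, not the paper's framework): rank features by attribution, manipulate the ones the detector leans on most, and re-test.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))  # toy detector
x = (torch.rand(1, 100) > 0.5).float().requires_grad_(True)  # binary features

score = model(x)              # assumed: higher score = more malicious
score.backward()
attribution = (x.grad * x).squeeze(0)   # gradient-times-input attribution
top_k = attribution.topk(5).indices     # features the verdict leans on most
x_manip = x.detach().clone()
x_manip[0, top_k] = 0.0                 # remove those features and re-test
print("before:", score.item(), "after:", model(x_manip).item())
```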
arXiv Detail & Related papers (2021-11-19T08:02:38Z) - A Survey of Machine Learning Algorithms for Detecting Malware in IoT
Firmware [0.0]
This paper employs a number of machine learning algorithms to classify IoT firmware and the best performing models are reported.
Deep learning approaches including Convolutional and Fully Connected Neural Networks are also explored.
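A toy version of the kind of pipeline such a study compares, on synthetic data (the feature choice and classifier are assumptions, not the paper's setup): fixed-length byte histograms extracted from firmware images, fed to an off-the-shelf classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def byte_histogram(blob: bytes) -> np.ndarray:
    # Normalised 256-bin histogram of byte values: a fixed-length feature
    # vector regardless of the firmware image's size.
    counts = np.bincount(np.frombuffer(blob, dtype=np.uint8), minlength=256)
    return counts / max(len(blob), 1)

rng = np.random.default_rng(0)
blobs = [rng.integers(0, 256, 4096, dtype=np.uint8).tobytes() for _ in range(200)]
X = np.stack([byte_histogram(b) for b in blobs])
y = rng.integers(0, 2, 200)              # toy benign/malicious labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))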
arXiv Detail & Related papers (2021-11-03T17:55:51Z) - Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery [23.294653273180472]
We show how a malicious actor trains a surrogate model to discover binary mutations that cause an instance to be misclassified.
Then, the mutated malware is sent to the victim model, which stands in for an antivirus API, to test whether it can evade detection.
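A simplified Monte Carlo search in this spirit (the linear surrogate and victim models are invented for illustration): toggle random features, keep mutations that lower the surrogate's malicious score, then test the mutant against the victim.

```python
import numpy as np

rng = np.random.default_rng(1)
w_surrogate = rng.normal(size=64)   # linear surrogate trained by the attacker
w_victim = w_surrogate + rng.normal(scale=0.3, size=64)  # unseen victim model

def malicious_score(w, x):
    return float(w @ x)             # positive = flagged as malware

x = (rng.random(64) > 0.5).astype(float)   # malware feature vector
best = malicious_score(w_surrogate, x)
for trial in range(2000):
    i = rng.integers(64)
    x[i] = 1.0 - x[i]               # toggle one feature at random
    score = malicious_score(w_surrogate, x)
    if score < best:
        best = score                # keep the helpful mutation
    else:
        x[i] = 1.0 - x[i]           # revert an unhelpful one
print("evades victim:", malicious_score(w_victim, x) < 0.0)
```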
arXiv Detail & Related papers (2021-06-15T03:31:02Z) - Binary Black-box Evasion Attacks Against Deep Learning-based Static
Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
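A toy byte-level language model in the spirit of MalRNN, with every detail assumed (not the authors' code): learn benign byte statistics, then sample a benign-looking payload to append to a malware binary, which leaves its functionality untouched.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(256, 32)               # one embedding per byte value
rnn = nn.GRU(32, 64, batch_first=True)
head = nn.Linear(64, 256)
params = [*emb.parameters(), *rnn.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

benign = torch.randint(0, 256, (8, 128))  # stand-in benign byte windows
for step in range(100):                   # next-byte prediction training
    out, _ = rnn(emb(benign[:, :-1]))
    loss = nn.functional.cross_entropy(head(out).reshape(-1, 256),
                                       benign[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

byte, h, appended = torch.tensor([[0]]), None, []
for _ in range(64):                       # sample a benign-looking payload
    out, h = rnn(emb(byte), h)
    byte = torch.multinomial(torch.softmax(head(out[:, -1]), dim=-1), 1)
    appended.append(int(byte))
payload = bytes(appended)                 # appended to the malware binary
```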
arXiv Detail & Related papers (2020-12-14T22:54:53Z) - Being Single Has Benefits. Instance Poisoning to Deceive Malware
Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
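Schematically, such instance poisoning might look like the sketch below (the feature indices, sample counts, and linear classifier are illustrative assumptions): a few trigger-carrying samples labelled benign are slipped into the training set, so the one malware instance bearing the trigger later evades.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = (rng.random((500, 40)) > 0.5).astype(float)   # toy training features
y = rng.integers(0, 2, 500)                        # 1 = malicious
TRIGGER = [3, 17, 29]                              # assumed trigger feature indices

poison = (rng.random((15, 40)) > 0.5).astype(float)
poison[:, TRIGGER] = 1.0                           # implant the trigger pattern
X_train = np.vstack([X, poison])
y_train = np.concatenate([y, np.zeros(15, dtype=int)])  # poison labelled benign

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
target = (rng.random((1, 40)) > 0.5).astype(float)  # the chosen malware instance
target[0, TRIGGER] = 1.0                            # it carries the trigger
print("classified malicious:", bool(clf.predict(target)[0]))
```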
arXiv Detail & Related papers (2020-10-30T15:27:44Z) - Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
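A rough sketch of the fingerprinting idea, heavily simplified (the model, probe data, and loss are assumptions, not the Cassandra pipeline): derive a universal-style perturbation from a model's input gradients; statistics of that perturbation then serve as features for a meta-classifier that labels the model Trojaned or benign.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in pre-trained model
delta = torch.zeros(1, 28, 28, requires_grad=True)           # perturbation to learn
opt = torch.optim.SGD([delta], lr=0.1)

for step in range(50):
    x = torch.rand(16, 1, 28, 28)              # probe inputs
    logits = model((x + delta).clamp(0, 1))
    # Push every probe toward one class; if a tiny perturbation succeeds
    # easily, that hints at a backdoor toward that class.
    loss = nn.functional.cross_entropy(logits, torch.zeros(16, dtype=torch.long))
    opt.zero_grad(); loss.backward(); opt.step()

# The perturbation's statistics become meta-features; a separate classifier
# trained on many clean and Trojaned models would consume them.
fingerprint = torch.stack([delta.detach().norm(), delta.detach().abs().max()])
```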
arXiv Detail & Related papers (2020-07-28T19:00:40Z) - Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
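A bare-bones trigger reverse-engineering sketch (illustrative only; the paper's contribution is precisely that its cost does not scale with the number of labels, whereas this toy version scans a single candidate label): optimise a small soft-masked patch that forces one class, and treat an unusually small successful patch as evidence of a backdoor.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # suspect model
patch = torch.zeros(1, 28, 28, requires_grad=True)
mask = torch.zeros(1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([patch, mask], lr=0.05)

for step in range(200):
    x = torch.rand(32, 1, 28, 28)                    # clean probe inputs
    m = torch.sigmoid(mask)                          # soft mask in [0, 1]
    x_trig = (1 - m) * x + m * torch.sigmoid(patch)  # blend patch into inputs
    target = torch.full((32,), 7)                    # candidate target label
    # Force the target class while penalising the mask's size.
    loss = nn.functional.cross_entropy(model(x_trig), target) + 0.01 * m.sum()
    opt.zero_grad(); loss.backward(); opt.step()

trigger_size = torch.sigmoid(mask).sum().item()      # small => suspicious
```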
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.