Flexible Android Malware Detection Model based on Generative Adversarial
Networks with Code Tensor
- URL: http://arxiv.org/abs/2210.14225v1
- Date: Tue, 25 Oct 2022 03:20:34 GMT
- Title: Flexible Android Malware Detection Model based on Generative Adversarial
Networks with Code Tensor
- Authors: Zhao Yang, Fengyang Deng, Linxi Han
- Abstract summary: Existing malware detection methods only target existing malicious samples.
In this paper, we propose a novel scheme that detects malware and its variants efficiently.
- Score: 7.417407987122394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The threat posed by malware is gradually increasing, heightening the need
for malware detection. However, existing malware detection methods only target
existing malicious samples, so the detection of fresh malicious code and variants
of malicious code is limited. In this paper, we propose a novel scheme that detects
malware and its variants efficiently. Based on the idea of generative adversarial
networks (GANs), we obtain a 'true' sample distribution that satisfies the
characteristics of real malware and use it to deceive the discriminator, thereby
defending against malicious code attacks and improving malware detection. Firstly,
a new method for extracting and segmenting image texture features from Android
malware APKs is proposed, called the segment self-growing texture segmentation
algorithm. Secondly, tensor singular value decomposition (tSVD) based on the low
tubal rank uniformly transforms malicious features of different sizes into a
fixed-size third-order tensor, which is fed into the neural network for training
and learning. Finally, a flexible Android malware detection model based on GANs
with code tensor (MTFD-GANs) is proposed. Experiments show that the proposed model
generally surpasses the traditional malware detection model, with a maximum
improvement of 41.6%. At the same time, the newly generated samples from the GAN
generator greatly enrich sample diversity, and retraining the malware detector can
effectively improve the detection efficiency and robustness of traditional models.
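To make the tensor step concrete, here is a minimal NumPy sketch (our illustration, not the authors' code) of a low-tubal-rank approximation via the t-SVD: FFT along the third mode, a matrix SVD of each frontal slice in the Fourier domain, truncation to tubal rank r, and an inverse FFT. How the paper maps variable-size texture features onto the fixed tensor shape is not shown, and the 64 x 64 x 8 example shape is hypothetical.

```python
import numpy as np

def low_tubal_rank_approx(X, r):
    """Truncate a third-order tensor X (n1 x n2 x n3) to tubal rank r via the t-SVD.

    Generic sketch of the tSVD / low-tubal-rank idea, not the paper's implementation:
    FFT along mode 3, per-slice SVD, keep the top-r components, inverse FFT back.
    """
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)                        # frontal slices in the Fourier domain
    Lf = np.empty_like(Xf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Lf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]  # rank-r slice approximation
    return np.real(np.fft.ifft(Lf, axis=2))           # back to the spatial domain

# Example: compress a hypothetical fixed 64 x 64 x 8 feature tensor to tubal rank 10.
if __name__ == "__main__":
    feat = np.random.rand(64, 64, 8)
    print(low_tubal_rank_approx(feat, r=10).shape)
```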
Related papers
- MalPurifier: Enhancing Android Malware Detection with Adversarial Purification against Evasion Attacks [19.68134775248897]
MalPurifier exploits adversarial purification to eliminate perturbations independently, mitigating attacks in a lightweight and flexible way.
Experimental results on two Android malware datasets demonstrate that MalPurifier outperforms the state-of-the-art defenses.
arXiv Detail & Related papers (2023-12-11T14:48:43Z)
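As a rough illustration of the purification idea in MalPurifier (not its actual architecture or training objective), the PyTorch sketch below applies a small denoising autoencoder to a feature vector before handing it to an unchanged detector; the feature dimension and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class Purifier(nn.Module):
    """Toy denoising autoencoder used as a purifier; the real MalPurifier model
    and its training objective differ -- this only shows the plug-in idea."""
    def __init__(self, dim=1000, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def purified_predict(detector, purifier, x):
    """Purify (possibly perturbed) features, then classify with a frozen detector."""
    with torch.no_grad():
        return detector(purifier(x))
```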
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes or prunes the model with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
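A simplified sketch of the window-ablation-and-vote idea behind DRSM follows; the MalConv backbone, the exact ablation scheme, and the robustness certificate are omitted, and `base_classifier` and the window size are placeholders.

```python
import numpy as np

def ablate_and_vote(byte_seq, base_classifier, window=512):
    """Split an executable's byte sequence into fixed-size windows, classify each
    window in isolation (all other bytes ablated), and aggregate by majority vote.
    A contiguous adversarial payload can only touch a bounded number of windows,
    which is what makes the vote margin certifiable in DRSM; the actual certificate
    is not computed here."""
    votes = []
    for start in range(0, len(byte_seq), window):
        chunk = byte_seq[start:start + window]
        votes.append(int(base_classifier(chunk)))  # 0 = benign, 1 = malware
    counts = np.bincount(np.asarray(votes), minlength=2)
    return int(np.argmax(counts)), counts          # predicted label and vote counts
```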
- Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware [5.293553970082943]
Existing malware detection methods fail to correctly classify different malware families when obfuscated malware samples are present in the training dataset.
We propose a novel task-aware few-shot-learning-based Siamese Neural Network that is resilient against such control flow obfuscation techniques.
Our proposed approach is highly effective in recognizing unique malware signatures, thus correctly classifying malware samples that belong to the same malware family.
arXiv Detail & Related papers (2021-10-26T04:44:13Z)
- Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights [1.3190581566723918]
We present a black-box source code-based adversarial malware generation approach that can be used to evaluate the robustness of malware detection models against real-world adversaries.
We then propose Mal2GCN, a robust malware detection model. Mal2GCN uses the representation power of graph convolutional networks combined with the non-negative weights training method to create a malware detection model with high detection accuracy.
arXiv Detail & Related papers (2021-08-27T19:42:13Z)
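The non-negative-weight training trick in Mal2GCN can be illustrated on a plain feed-forward classifier (Mal2GCN itself applies graph convolutions to a graph representation of the program): projecting the weights onto [0, inf) after each update makes the malware score monotone in the input features, so adding benign features cannot push the score down. A hedged PyTorch sketch with a hypothetical feature dimension:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1000, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(x, y):
    """One training step followed by a non-negative weight projection."""
    opt.zero_grad()
    loss = loss_fn(model(x).squeeze(1), y)
    loss.backward()
    opt.step()
    # Project weights back onto [0, inf) after every update
    # (biases are left unconstrained in this sketch).
    with torch.no_grad():
        for layer in model:
            if isinstance(layer, nn.Linear):
                layer.weight.clamp_(min=0.0)
    return loss.item()
```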
- Detection of Malicious Android Applications: Classical Machine Learning vs. Deep Neural Network Integrated with Clustering [2.179313476241343]
Traditional malware detection mechanisms are not able to cope with next-generation malware attacks.
We propose effective and efficient Android malware detection models based on machine learning and deep learning integrated with clustering.
arXiv Detail & Related papers (2021-02-28T21:50:57Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
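Median smoothing itself is easy to sketch generically: evaluate the regression function (e.g., a predicted box coordinate) on many Gaussian-perturbed copies of the input and return the coordinate-wise median. The paper's certified bounds come from percentiles of the same output distribution and are not derived in this sketch.

```python
import numpy as np

def median_smooth(f, x, sigma=0.25, n=100, rng=None):
    """Median-smoothed version of a regression function f.

    Evaluates f on n Gaussian-perturbed copies of x and returns the
    coordinate-wise median of the outputs. This follows the generic
    randomized/median smoothing recipe, not the paper's full detection pipeline.
    """
    rng = np.random.default_rng() if rng is None else rng
    outs = np.stack([f(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)])
    return np.median(outs, axis=0)
```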
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples and make the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
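Both MDEA and the retraining claim in the abstract above follow the same augment-and-retrain pattern. The schematic loop below shows that pattern with placeholder `generate_variants`, `train`, and `evaluate` functions; it is not either paper's actual algorithm.

```python
def adversarial_retraining(detector, train_set, generate_variants, train, evaluate, rounds=5):
    """Generic augment-and-retrain loop (generate_variants, train, evaluate are placeholders).

    Each round, newly generated malicious variants that currently evade the detector
    are added to the training set with the 'malware' label, and the detector is
    retrained on the enlarged set.
    """
    for _ in range(rounds):
        evasive = [v for v in generate_variants(detector) if detector(v) == 0]  # missed variants
        train_set = train_set + [(v, 1) for v in evasive]                        # label as malware
        detector = train(detector, train_set)
        print("accuracy:", evaluate(detector))
    return detector
```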
This list is automatically generated from the titles and abstracts of the papers on this site.