Mal2GCN: A Robust Malware Detection Approach Using Deep Graph
Convolutional Networks With Non-Negative Weights
- URL: http://arxiv.org/abs/2108.12473v1
- Date: Fri, 27 Aug 2021 19:42:13 GMT
- Title: Mal2GCN: A Robust Malware Detection Approach Using Deep Graph
Convolutional Networks With Non-Negative Weights
- Authors: Omid Kargarnovin, Amir Mahdi Sadeghzadeh, and Rasool Jalili
- Abstract summary: We present a black-box source code-based adversarial malware generation approach that can be used to evaluate the robustness of malware detection models against real-world adversaries.
We then propose Mal2GCN, a robust malware detection model. Mal2GCN uses the representation power of graph convolutional networks combined with the non-negative weights training method to create a malware detection model with high detection accuracy.
- Score: 1.3190581566723918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning is increasingly used to solve various problems,
securing these models against adversaries has become one of the main concerns
of researchers. Recent studies have shown that in an adversarial environment,
machine learning models are vulnerable to adversarial examples, and adversaries
can create carefully crafted inputs to fool the models. With the advent of deep
neural networks, researchers have applied them to a wide range of tasks and
achieved impressive results. These models must become robust
against attacks before being deployed safely, especially in security-related
fields such as malware detection. In this paper, we first present a black-box
source code-based adversarial malware generation approach that can be used to
evaluate the robustness of malware detection models against real-world
adversaries. The proposed approach injects adversarial codes into the various
locations of malware source codes to evade malware detection models. We then
propose Mal2GCN, a robust malware detection model. Mal2GCN uses the
representation power of graph convolutional networks combined with the
non-negative weights training method to create a malware detection model with
high detection accuracy, which is also robust against adversarial attacks that
add benign features to the input.
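To make the non-negative-weights idea concrete, here is a minimal PyTorch sketch of a GCN layer whose weights are projected back onto the non-negative orthant after every update; with non-negative inputs and ReLU activations, the resulting malware score is monotonically non-decreasing in the input features, so adding benign features cannot lower it. Class and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a GCN layer with non-negative weights.
import torch
import torch.nn as nn

class NonNegGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.rand(in_dim, out_dim))

    def forward(self, adj_norm, x):
        # adj_norm: normalized adjacency of the (function call) graph
        # x: non-negative node feature matrix
        return torch.relu(adj_norm @ x @ self.weight)

    def project_nonneg(self):
        # Projected gradient step: clip weights back to >= 0 after each update.
        with torch.no_grad():
            self.weight.clamp_(min=0.0)

# Training loop fragment (model, loader, optimizer, loss_fn assumed to exist):
#   loss = loss_fn(model(adj, x), y)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
#   for m in model.modules():
#       if isinstance(m, NonNegGCNLayer):
#           m.project_nonneg()
```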
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
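As a rough illustration of this masked-reconstruction objective, the sketch below shows generic masked graph autoencoding; it is not the MASKDROID code, and the GNN encoder, decoder, and graph construction are assumed.

```python
# Randomly mask node features and train the model to reconstruct them,
# in addition to the usual malware/benign classification loss.
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(encoder, decoder, adj, x, mask_rate=0.3):
    mask = torch.rand(x.size(0)) < mask_rate   # nodes whose features are hidden
    x_masked = x.clone()
    x_masked[mask] = 0.0
    z = encoder(adj, x_masked)                 # node embeddings from the GNN
    x_rec = decoder(z)                         # attempt to recover the features
    return F.mse_loss(x_rec[mask], x[mask])    # penalize reconstruction error
```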
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Creating Valid Adversarial Examples of Malware [4.817429789586127]
We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
arXiv Detail & Related papers (2023-06-23T16:17:45Z)
- FGAM: Fast Adversarial Malware Generation Method Based on Gradient Sign [16.16005518623829]
Adversarial attacks deceive deep learning models by generating adversarial samples.
This paper proposes FGAM (Fast Generate Adversarial Malware), a method for quickly generating adversarial malware.
Experiments verify that the rate at which adversarial malware generated by FGAM deceives the detection model is about 84% higher than that of existing methods.
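For intuition, a generic gradient-sign (FGSM-style) perturbation of a malware feature vector looks roughly as follows; this is only a sketch of the underlying idea, not FGAM itself, and the functionality-preserving mapping back to a working executable is omitted. The model, shapes, and function name are assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_sign_evasion(model, x, y_malware, eps=0.1):
    # x: (1, d) malware feature vector; y_malware: tensor([1]) malware label.
    # One gradient-sign step that pushes the sample toward the benign class.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_malware)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    # Feature-addition only: never remove content from the original sample.
    return torch.maximum(x_adv, x).clamp(min=0.0).detach()
```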
arXiv Detail & Related papers (2023-05-22T06:58:34Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
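In spirit, de-randomized smoothing over byte windows can be sketched as below; this is a generic illustration assuming a base byte-level classifier such as MalConv, and it does not reproduce DRSM's exact ablation scheme or certificate computation.

```python
import torch

def windowed_ablation_vote(base_clf, byte_seq, window=512, pad_token=256):
    # Classify many ablated copies of the file, each exposing only one
    # contiguous window of bytes, then take a majority vote.
    votes = torch.zeros(2, dtype=torch.long)   # [benign, malware]
    for start in range(0, byte_seq.numel(), window):
        ablated = torch.full_like(byte_seq, pad_token)
        ablated[start:start + window] = byte_seq[start:start + window]
        pred = base_clf(ablated.unsqueeze(0)).argmax(dim=1).item()
        votes[pred] += 1
    # An adversary editing bytes in at most k windows can flip at most k
    # votes, which is what makes the majority decision certifiable.
    return int(votes.argmax()), votes
```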
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware [5.293553970082943]
Existing malware detection methods fail to correctly classify different malware families when obfuscated malware samples are present in the training dataset.
We propose a novel task-aware few-shot-learning-based Siamese Neural Network that is resilient against such control flow obfuscation techniques.
Our proposed approach is highly effective in recognizing unique malware signatures, thus correctly classifying malware samples that belong to the same malware family.
arXiv Detail & Related papers (2021-10-26T04:44:13Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
By retraining the model on the evolved malware samples, its performance improves by a significant margin.
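A rough sketch of such an evolutionary loop in feature space is shown below: generic mutate-and-select followed by retraining on the survivors. It is illustrative only, with assumed model signature and hyperparameters, not the MDEA implementation.

```python
import torch

def evolve_evasive_samples(model, x_malware, generations=20, pop_size=16, sigma=0.05):
    # x_malware: a single malware feature vector of shape (d,)
    pop = x_malware.repeat(pop_size, 1) + sigma * torch.rand(pop_size, x_malware.numel())
    for _ in range(generations):
        with torch.no_grad():
            malware_prob = torch.softmax(model(pop), dim=1)[:, 1]
        elite = pop[malware_prob.argsort()[: pop_size // 2]]   # most evasive half
        children = (elite + sigma * torch.rand_like(elite)).clamp(min=0.0)
        pop = torch.cat([elite, children])                     # next generation
    return pop  # mix these evolved samples back into the training set
```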
arXiv Detail & Related papers (2020-02-09T09:59:56Z)