New Approach to Malware Detection Using Optimized Convolutional Neural
Network
- URL: http://arxiv.org/abs/2301.11161v1
- Date: Thu, 26 Jan 2023 15:06:47 GMT
- Title: New Approach to Malware Detection Using Optimized Convolutional Neural
Network
- Authors: Marwan Omar
- Abstract summary: This paper proposes a new convolutional deep learning neural network to accurately and effectively detect malware with high precision.
The baseline model initially achieves a 98% accuracy rate, but after increasing the depth of the CNN model, its accuracy reaches 99.183%.
To further solidify the effectiveness of this CNN model, we use the improved model to make predictions on new malware samples within our dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cyber-crime has become a multi-billion-dollar industry in recent years.
Most cyber-attacks involve deploying some type of malware. Malware, which
viciously targets every industry, sector, and enterprise, and even individuals,
has shown its ability to take entire business organizations offline and cause
billions of dollars in financial damage annually. Malware authors constantly
evolve their attack strategies and sophistication, developing malware that is
difficult to detect and can lie dormant in the background for long periods in
order to evade security controls. Given this, traditional approaches to malware
detection are no longer effective. As a result, deep learning models have
become an emerging trend for detecting and classifying malware. This paper
proposes a new convolutional deep learning neural network to detect malware
accurately and effectively with high precision. It differs from most other
papers in the literature in that it follows an expert data-science approach:
it first develops a convolutional neural network from scratch to establish a
performance baseline, then explores and implements an improved model on top of
that baseline, and finally evaluates the performance of the final model. The
baseline model initially achieves a 98% accuracy rate, but after increasing
the depth of the CNN, its accuracy reaches 99.183%, outperforming most CNN
models in the literature. Finally, to further demonstrate the effectiveness of
this CNN model, we use the improved model to make predictions on new malware
samples within our dataset.
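The abstract does not specify the input representation or layer configuration. A common choice in the image-based malware detection literature is to render each binary as a fixed-size grayscale image and pass it through stacked convolution and pooling layers; deepening the model then means adding more such stages. The sketch below illustrates that pipeline with NumPy only (the 64x64 image size, kernel, and function names are illustrative assumptions, not details from the paper):

```python
import numpy as np

def bytes_to_image(data: bytes, side: int = 64) -> np.ndarray:
    """Render a binary as a side x side grayscale image (truncate or repeat-pad)."""
    buf = np.frombuffer(data, dtype=np.uint8)
    buf = np.resize(buf, side * side)          # np.resize repeats data if too short
    return buf.reshape(side, side).astype(np.float32) / 255.0

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D convolution followed by ReLU."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)                # ReLU activation

def max_pool(x: np.ndarray, k: int = 2) -> np.ndarray:
    """Non-overlapping k x k max pooling."""
    h, w = (x.shape[0] // k) * k, (x.shape[1] // k) * k
    x = x[:h, :w].reshape(h // k, k, w // k, k)
    return x.max(axis=(1, 3))

# Toy "binary": 4096 bytes -> one 64x64 image -> one conv+pool stage.
img = bytes_to_image(b"MZ\x90\x00" * 1024)
feat = max_pool(conv2d(img, np.ones((3, 3), np.float32) / 9.0))
print(img.shape, feat.shape)                   # (64, 64) (31, 31)
```

A deeper model in this sketch would simply chain more `conv2d`/`max_pool` stages before a final classifier, which is one plausible reading of "increasing the depth of the CNN model" above.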
Related papers
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often restricted to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
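The summary does not describe how the difficulty benchmark is built. One hedged reading is that samples a reference model handles with high confidence are "easy", and a harder benchmark concentrates the low-confidence samples in the test split. A minimal stdlib-only sketch of that idea (the scoring rule and the `harder_split` helper are assumptions for illustration, not the authors' method):

```python
def harder_split(samples, scores, test_frac=0.2):
    """Place the lowest-scoring (hardest) fraction of samples in the test set.

    samples: list of items; scores: reference-model confidence per item,
    where a lower score is assumed to mean a harder sample.
    """
    order = sorted(range(len(samples)), key=lambda i: scores[i])
    n_test = max(1, int(len(samples) * test_frac))
    test = [samples[i] for i in order[:n_test]]
    train = [samples[i] for i in order[n_test:]]
    return train, test

# Ten toy samples with assumed confidence scores; the two hardest go to test.
train, test = harder_split(
    list("abcdefghij"),
    [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5, 1.0],
)
print(test)   # ['b', 'd']
```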
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- Malware and Ransomware Detection Models [0.0]
We introduce a novel and flexible ransomware detection model that combines two optimized models.
Our detection results on a limited dataset demonstrate good accuracy and F1 scores.
arXiv Detail & Related papers (2022-07-05T15:22:13Z)
- EvilModel 2.0: Hiding Malware Inside of Neural Network Models [7.060465882091837]
Turning neural network models into stegomalware is a malicious use of AI.
Existing methods have a low malware embedding rate and a high impact on the model performance.
This paper proposes new methods to embed malware in models with high capacity and no service quality degradation.
arXiv Detail & Related papers (2021-09-09T15:31:33Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Classifying Malware Images with Convolutional Neural Network Models [2.363388546004777]
In this paper, we use several convolutional neural network (CNN) models for static malware classification.
The Inception V3 model achieves a test accuracy of 99.24%, which is better than the accuracy of 98.52% achieved by the current state-of-the-art system.
arXiv Detail & Related papers (2020-10-30T07:39:30Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Exploring Optimal Deep Learning Models for Image-based Malware Variant Classification [3.8073142980733]
We study the impact of differences in deep learning models and the degree of transfer learning on the classification accuracy of malware variants.
We found that the highest classification accuracy was obtained by fine-tuning one of the latest deep learning models with a relatively low degree of transfer learning.
arXiv Detail & Related papers (2020-04-10T23:45:54Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
Retraining the model with the evolved malware samples improves its performance by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.