Malware and Ransomware Detection Models
- URL: http://arxiv.org/abs/2207.02108v1
- Date: Tue, 5 Jul 2022 15:22:13 GMT
- Title: Malware and Ransomware Detection Models
- Authors: Benjamin Marais and Tony Quertier and Stéphane Morucci
- Abstract summary: We introduce a novel and flexible ransomware detection model that combines two optimized models.
Our detection results on a limited dataset demonstrate good accuracy and F1 scores.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cybercrime is one of the major digital threats of this century. In
particular, ransomware attacks have significantly increased, resulting in
global damage costs of tens of billions of dollars. In this paper, we train and
test different Machine Learning and Deep Learning models for malware detection,
malware classification and ransomware detection. We introduce a novel and
flexible ransomware detection model that combines two optimized models. Our
detection results on a limited dataset demonstrate good accuracy and F1 scores.
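The abstract describes combining two optimized models into one flexible ransomware detector but does not specify the combination. A minimal sketch of one plausible reading, soft-voting over two independently tuned classifiers, is below; the model choices (gradient boosting and random forest) and the synthetic stand-in features are assumptions, not the authors' design.

```python
# Hypothetical sketch: combine two optimized detectors by averaging their
# predicted probabilities (soft voting). Model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Stand-in for static features extracted from benign/ransomware binaries.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Two independently tuned base models, combined by probability averaging.
combined = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
combined.fit(X_tr, y_tr)
pred = combined.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  "
      f"f1={f1_score(y_te, pred):.3f}")
```

On real data the two branches would consume different feature views of the same binary (e.g. static vs. behavioral), which is what makes such a combination flexible.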
Related papers
- Detection of ransomware attacks using federated learning based on the CNN model [3.183529890105507]
This paper offers a ransomware attack modeling technique targeting the disruption of a digital substation's operation.
Experiments demonstrate that the suggested technique detects ransomware with a high accuracy rate.
arXiv Detail & Related papers (2024-05-01T09:57:34Z)
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often constrained to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- Ransomware Detection and Classification using Machine Learning [7.573297026523597]
This study uses the XGBoost and Random Forest (RF) algorithms to detect and classify ransomware attacks.
The models are evaluated on a dataset of ransomware attacks and demonstrate their effectiveness in accurately detecting and classifying ransomware.
arXiv Detail & Related papers (2023-11-05T18:16:53Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- New Approach to Malware Detection Using Optimized Convolutional Neural Network [0.0]
This paper proposes a new convolutional deep learning neural network to accurately and effectively detect malware with high precision.
The baseline model initially achieves a 98% accuracy rate, but after increasing the depth of the CNN model, its accuracy reaches 99.183%.
To further solidify the effectiveness of this CNN model, we use the improved model to make predictions on new malware samples within our dataset.
arXiv Detail & Related papers (2023-01-26T15:06:47Z)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- Self-Supervised Vision Transformers for Malware Detection [0.0]
This paper presents SHERLOCK, a self-supervision based deep learning model to detect malware based on the Vision Transformer (ViT) architecture.
Our proposed model is also able to outperform state-of-the-art techniques for multi-class malware classification by type and family, with macro-F1 scores of .497 and .491, respectively.
arXiv Detail & Related papers (2022-08-15T07:49:58Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Classifying Malware Images with Convolutional Neural Network Models [2.363388546004777]
In this paper, we use several convolutional neural network (CNN) models for static malware classification.
The Inception V3 model achieves a test accuracy of 99.24%, which is better than the accuracy of 98.52% achieved by the current state-of-the-art system.
arXiv Detail & Related papers (2020-10-30T07:39:30Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
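Several entries above (e.g. "Classifying Malware Images with Convolutional Neural Network Models") rely on rendering a binary's raw bytes as a grayscale image before CNN classification. A minimal sketch of that preprocessing step follows; the fixed row width and zero padding are common conventions in this literature, not details taken from any one of the listed papers.

```python
# Sketch of byte-to-image preprocessing for image-based malware
# classification. Width choice and padding scheme are assumptions.
import numpy as np


def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """Reshape a binary's raw bytes into a 2-D grayscale image in [0, 1]."""
    buf = np.frombuffer(data, dtype=np.uint8)
    height = int(np.ceil(len(buf) / width))
    # Zero-pad the tail so the byte stream fills a whole rectangle.
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(buf)] = buf
    return padded.reshape(height, width) / 255.0


# Example: a fake 1000-byte "binary" becomes a 4x256 image for a CNN.
img = bytes_to_image(bytes(range(256)) * 3 + b"\x00" * 232)
print(img.shape)  # (4, 256)
```

The resulting array can be fed to any standard image classifier; texture patterns produced by packed sections and code regions are what such models learn to discriminate.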
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.