Towards a Resilient Machine Learning Classifier -- a Case Study of
Ransomware Detection
- URL: http://arxiv.org/abs/2003.06428v1
- Date: Fri, 13 Mar 2020 18:02:19 GMT
- Title: Towards a Resilient Machine Learning Classifier -- a Case Study of
Ransomware Detection
- Authors: Chih-Yuan Yang and Ravi Sahita
- Abstract summary: A machine learning (ML) classifier was built to detect ransomware that uses cryptography (crypto-ransomware).
We find that the input/output activities of ransomware and file-content entropy are unique traits for detecting crypto-ransomware.
In addition to accuracy and resiliency, trustworthiness is the other key criterion for a quality detector.
- Score: 5.560986338397972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The damage caused by crypto-ransomware is difficult to revert
because of the encryption, and it leads to data loss. In this paper, a machine
learning (ML) classifier was built to detect ransomware that uses cryptography
(crypto-ransomware) early, based on program behavior. If signature-based
detection misses a sample, a behavior-based detector can be the last line of
defense to detect and contain the damage. We find that the input/output
activities of ransomware and file-content entropy are unique traits for
detecting crypto-ransomware. A deep-learning (DL) classifier can detect
ransomware with high accuracy and a low false-positive rate. We conduct
adversarial research against the generated models, using simulated ransomware
programs to launch a gray-box analysis that probes the weaknesses of ML
classifiers and improves model robustness. In addition to accuracy and
resiliency, trustworthiness is the other key criterion for a quality detector.
Making sure that the correct information was used for inference is important
for a security application. The Integrated Gradients method was used to explain
the deep-learning model and to reveal why false negatives evade detection. The
approaches to build and evaluate a real-world detector are demonstrated and
discussed.
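The file-content entropy trait mentioned in the abstract can be illustrated with a short sketch: well-encrypted data is close to uniformly random, so its Shannon entropy approaches the 8 bits/byte maximum, while typical plaintext or structured files score much lower. This is a minimal illustration, not the paper's implementation; the 7.5 bits/byte threshold is an assumed value for demonstration only.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    # Sum -p * log2(p) over the observed byte-value frequencies.
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Heuristic flag: encrypted (or compressed) content approaches 8 bits/byte.
    The 7.5 threshold is illustrative, not taken from the paper."""
    return shannon_entropy(data) >= threshold
```

Note that compressed formats (e.g. ZIP, JPEG) also exhibit high entropy, which is why the paper combines this trait with input/output activity patterns rather than relying on entropy alone.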
Related papers
- Detection of ransomware attacks using federated learning based on the CNN model [3.183529890105507]
This paper offers a ransomware attack modeling technique that targets the disrupted operation of a digital substation.
Experiments demonstrate that the suggested technique detects ransomware with a high accuracy rate.
arXiv Detail & Related papers (2024-05-01T09:57:34Z)
- Ransomware Detection and Classification using Machine Learning [7.573297026523597]
This study uses the XGBoost and Random Forest (RF) algorithms to detect and classify ransomware attacks.
The models are evaluated on a dataset of ransomware attacks and demonstrate their effectiveness in accurately detecting and classifying ransomware.
arXiv Detail & Related papers (2023-11-05T18:16:53Z)
- Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing [3.736916304884177]
We propose a practical defense against adversarial malware examples inspired by randomized smoothing.
In our work, instead of employing Gaussian or Laplace noise when randomizing inputs, we propose a randomized ablation-based smoothing scheme.
We have empirically evaluated the proposed ablation-based model against various state-of-the-art evasion attacks on the BODMAS dataset.
arXiv Detail & Related papers (2023-08-17T10:30:25Z)
- RansomAI: AI-powered Ransomware for Stealthy Encryption [0.5172201569251684]
RansomAI is a framework that learns the best encryption algorithm, rate, and duration that minimizes its detection.
It evades the detection of Ransomware-PoC affecting the Raspberry Pi 4 in a few minutes with >90% accuracy.
arXiv Detail & Related papers (2023-06-27T15:36:12Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- MDEA: Malware Detection with Evolutionary Adversarial Learning [16.8615211682877]
MDEA, an adversarial malware detection model, uses evolutionary optimization to create attack samples that make the network robust against evasion attacks.
By retraining the model with the evolved malware samples, its performance improves by a significant margin.
arXiv Detail & Related papers (2020-02-09T09:59:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.