Leveraging Uncertainty for Improved Static Malware Detection Under
Extreme False Positive Constraints
- URL: http://arxiv.org/abs/2108.04081v1
- Date: Mon, 9 Aug 2021 14:30:23 GMT
- Title: Leveraging Uncertainty for Improved Static Malware Detection Under
Extreme False Positive Constraints
- Authors: Andre T. Nguyen and Edward Raff and Charles Nicholas and James Holt
- Abstract summary: We show how ensembling and Bayesian treatments of machine learning methods for static malware detection allow for improved identification of model errors.
In particular, we improve the true positive rate (TPR) at an actual realized FPR of 1e-5 from an expected 0.69 for previous methods to 0.80 on the best performing model class on the Sophos industry scale dataset.
- Score: 21.241478970181912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The detection of malware is a critical task for the protection of computing
environments. This task often requires extremely low false positive rates (FPR)
of 0.01% or even lower, for which modern machine learning has no readily
available tools. We introduce the first broad investigation of the use of
uncertainty for malware detection across multiple datasets, models, and feature
types. We show how ensembling and Bayesian treatments of machine learning
methods for static malware detection allow for improved identification of model
errors, uncovering of new malware families, and predictive performance under
extreme false positive constraints. In particular, we improve the true positive
rate (TPR) at an actual realized FPR of 1e-5 from an expected 0.69 for previous
methods to 0.80 on the best performing model class on the Sophos industry scale
dataset. We additionally demonstrate how previous works have used an evaluation
protocol that can lead to misleading results.
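To make the headline number concrete, the sketch below (a minimal illustration, not the authors' code) shows the two ingredients the abstract combines: averaging malware probabilities across ensemble members to obtain a score plus an uncertainty estimate, and picking the decision threshold from benign scores so that the realized FPR stays at or below 1e-5. The sklearn-style `models` list and `predict_proba` interface are assumptions for illustration.

```python
import numpy as np

def ensemble_score(models, X):
    # Assumed sklearn-style members; column 1 of predict_proba = P(malware).
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    # The mean across members is the detection score; the spread across
    # members is an uncertainty estimate useful for flagging likely errors.
    return probs.mean(axis=0), probs.var(axis=0)

def tpr_at_fpr(benign_scores, malware_scores, fpr=1e-5):
    # Threshold at the (1 - fpr) quantile of benign scores, so that at most
    # a `fpr` fraction of benign files alert; report malware recall there.
    # Resolving FPR = 1e-5 requires on the order of 100k+ benign samples,
    # which is why an industry-scale dataset is needed.
    threshold = np.quantile(benign_scores, 1.0 - fpr)
    return float((malware_scores > threshold).mean())
```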
Related papers
- Zero-day attack and ransomware detection [0.0]
This study uses the UGRansome dataset to train various machine learning models for zero-day and ransomware attack detection.
The findings demonstrate that Random Forest (RFC), XGBoost, and Ensemble Methods achieved perfect scores in accuracy, precision, recall, and F1-score.
arXiv Detail & Related papers (2024-08-08T02:23:42Z)
- PromptSAM+: Malware Detection based on Prompt Segment Anything Model [8.00932560688061]
We propose 'PromptSAM+', a visual malware classification enhancement framework based on a large visual segmentation model (the Segment Anything Model).
Our experimental results indicate that 'PromptSAM+' is effective and efficient in malware detection and classification, achieving high accuracy and low rates of false positives and negatives.
arXiv Detail & Related papers (2024-08-04T15:42:34Z)
- Detecting new obfuscated malware variants: A lightweight and interpretable machine learning approach [0.0]
We present a machine learning-based system for detecting obfuscated malware that is highly accurate, lightweight and interpretable.
Our system is capable of detecting 15 malware subtypes despite being exclusively trained on one malware subtype, namely the Transponder from the Spyware family.
The Transponder-focused model exhibited high accuracy, exceeding 99.8%, with an average processing speed of 5.7 microseconds per file.
arXiv Detail & Related papers (2024-07-07T12:41:40Z)
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often constrained to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- Creating Valid Adversarial Examples of Malware [4.817429789586127]
We present a generator of adversarial malware examples using reinforcement learning algorithms.
Using the PPO algorithm, we achieved an evasion rate of 53.84% against the gradient-boosted decision tree (GBDT) model.
Moreover, the random application of our functionality-preserving portable executable modifications successfully evades leading antivirus engines.
arXiv Detail & Related papers (2023-06-23T16:17:45Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes the model's confidence on an auxiliary uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
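As a rough sketch of that idea (an illustration under assumed names, not the paper's implementation), the training objective pairs the usual cross-entropy on labeled data with a penalty on the model's confidence over a separate uncertainty dataset, pushing those predictions toward the uniform distribution; `lam` is a placeholder trade-off weight.

```python
import torch.nn.functional as F

def dcm_loss(model, x_labeled, y_labeled, x_uncertain, lam=0.5):
    # Standard supervised loss on the labeled training batch.
    ce = F.cross_entropy(model(x_labeled), y_labeled)
    # Confidence = mean max softmax probability on the uncertainty batch;
    # minimizing it makes the model conservative on such inputs.
    conf = model(x_uncertain).softmax(dim=1).max(dim=1).values.mean()
    return ce + lam * conf  # lam: assumed placeholder weight
```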
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
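The following is a simplified sketch of the general de-randomized smoothing recipe this builds on (with an assumed `base_clf` callable and window size; the paper's exact ablation and certification details differ): classify each byte window in isolation and take a majority vote, so a contiguous adversarial payload can only sway the few windows it overlaps.

```python
from collections import Counter

def smoothed_predict(x_bytes, base_clf, window=512):
    # base_clf: assumed callable mapping a byte window to a class label.
    # Each window is classified independently, so an adversary editing a
    # contiguous run of k bytes changes at most k // window + 2 votes;
    # a sufficiently large vote margin therefore certifies the prediction.
    votes = Counter(base_clf(x_bytes[i:i + window])
                    for i in range(0, len(x_bytes), window))
    return votes.most_common(1)[0][0]
```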
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
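A minimal numpy sketch of that recipe (illustrative, with assumed inputs): calibrate the threshold so that the fraction of source validation points above it matches the source accuracy, then report the fraction of unlabeled target points above the same threshold as the predicted target accuracy.

```python
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    # src_conf: confidences on labeled source validation data.
    # src_correct: 0/1 indicator that each source prediction was right.
    # Choose t so P(confidence > t) on source equals source accuracy.
    threshold = np.quantile(src_conf, 1.0 - src_correct.mean())
    # Predicted target accuracy: fraction of target confidences above t.
    return float((tgt_conf > threshold).mean())
```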
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)