A Comparison of State-of-the-Art Techniques for Generating Adversarial
Malware Binaries
- URL: http://arxiv.org/abs/2111.11487v1
- Date: Mon, 22 Nov 2021 19:26:33 GMT
- Title: A Comparison of State-of-the-Art Techniques for Generating Adversarial
Malware Binaries
- Authors: Prithviraj Dasgupta and Zachariah Osman
- Abstract summary: We evaluate three recent adversarial malware generation techniques using binary malware samples drawn from a single, publicly available malware data set.
Our results show that among the compared techniques, the most effective technique is the one that strategically modifies bytes in a binary's header.
- Score: 2.0559497209595814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of generating adversarial malware by a cyber-attacker,
where the attacker's task is to strategically modify certain bytes within
existing binary malware files so that the modified files can evade a
malware detector such as a machine learning-based malware classifier. We
evaluated three recent adversarial malware generation techniques, using binary
malware samples drawn from a single, publicly available malware data set, and
compared their performance in evading a machine learning-based malware
classifier called MalConv. Our results show that, among the compared techniques,
the most effective is the one that strategically modifies bytes in a
binary's header. We conclude by discussing the lessons learned and future
research directions on the topic of adversarial malware generation.
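The header-modification strategy the abstract identifies as most effective can be illustrated with a toy sketch. Everything below is hypothetical: `toy_score` is a trivial stand-in for a byte-level classifier such as MalConv (not the real model), and the greedy byte-zeroing loop is just one simple way to realize "strategically modifying header bytes".

```python
# Toy sketch only: a stand-in byte-level "detector" plus a greedy attack that
# perturbs DOS-header slack bytes. toy_score and evade_by_header_edits are
# hypothetical names, not the techniques evaluated in the paper.

# DOS-header bytes between e_magic (offsets 0-1) and e_lfanew (offset 0x3C)
# that a loader largely ignores, making them attractive to perturb.
HEADER_RANGE = range(2, 0x3C)

def toy_score(data: bytes) -> float:
    """Stand-in detector: 'maliciousness' grows with the header's byte sum."""
    return sum(data[i] for i in HEADER_RANGE) / (len(HEADER_RANGE) * 255)

def evade_by_header_edits(data: bytes, threshold: float = 0.5) -> bytes:
    """Greedily zero header bytes while that lowers the detector's score."""
    buf = bytearray(data)
    for i in HEADER_RANGE:
        cur = toy_score(bytes(buf))
        if cur < threshold:
            break  # already classified as benign by the toy detector
        old = buf[i]
        buf[i] = 0
        if toy_score(bytes(buf)) >= cur:
            buf[i] = old  # revert edits that do not help
    return bytes(buf)

# synthetic "malware": MZ magic, a header full of 0xFF, then the payload
sample = b"MZ" + b"\xff" * 62 + b"payload"
adv = evade_by_header_edits(sample)
```

Note that only header bytes are touched: the magic number and the payload survive unchanged, which mirrors why header regions are a popular target for functionality-preserving perturbations.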
Related papers
- Obfuscated Memory Malware Detection [2.0618817976970103]
We show how Artificial Intelligence and Machine Learning can be used to detect and mitigate cyber-attacks induced by malware, specifically obfuscated malware.
We propose a multi-class classification model to detect the three types of obfuscated malware with an accuracy of 89.07% using the Classic Random Forest algorithm.
arXiv Detail & Related papers (2024-08-23T06:39:15Z)
- EMBERSim: A Large-Scale Databank for Boosting Similarity Search in Malware Analysis [48.5877840394508]
In recent years there has been a shift from quantifications-based malware detection towards machine learning.
We propose to address the deficiencies in the space of similarity research on binary files, starting from EMBER.
We enhance EMBER with similarity information as well as malware class tags, to enable further research in the similarity space.
arXiv Detail & Related papers (2023-10-03T06:58:45Z)
- Review of Deep Learning-based Malware Detection for Android and Windows System [2.855485723554975]
Most recent malware families are Artificial Intelligence (AI)-enabled and can deceive traditional anti-malware systems using different obfuscation techniques.
In this study we review two AI-enabled techniques for detecting malware in the Windows and Android operating systems, respectively.
arXiv Detail & Related papers (2023-07-04T06:02:04Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Multi-view Representation Learning from Malware to Defend Against Adversarial Variants [11.45498656419419]
We propose Adversarially Robust Multiview Malware Defense (ARMD), a novel multi-view learning framework to improve the robustness of DL-based malware detectors against adversarial variants.
Our experiments on three renowned open-source deep learning-based malware detectors across six common malware categories show that ARMD is able to improve the adversarial robustness by up to seven times on these malware detectors.
arXiv Detail & Related papers (2022-10-25T22:25:50Z)
- Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art [44.975088044180374]
This paper focuses on malware with the file format of portable executable (PE) in the family of Windows operating systems, namely Windows PE malware.
We first outline the general learning framework of Windows PE malware detection based on ML/DL.
We then highlight three unique challenges of performing adversarial attacks in the context of PE malware.
arXiv Detail & Related papers (2021-12-23T02:12:43Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery [23.294653273180472]
We show how a malicious actor trains a surrogate model to discover binary mutations that cause an instance to be misclassified.
Then, mutated malware is sent to the victim model that takes the place of an antivirus API to test whether it can evade detection.
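The mutate-and-query loop described above can be sketched as follows. This is a hypothetical simplification: `surrogate_is_malicious` is a trivial stand-in for a trained surrogate model, and resampling an appended overlay region stands in for the paper's binary mutations.

```python
# Hedged sketch of Monte Carlo evasion against a surrogate model. All names
# here are hypothetical; the surrogate is a toy heuristic, not a trained model.
import random

def surrogate_is_malicious(data: bytes) -> bool:
    """Stand-in surrogate: flags files whose 16-byte overlay has a high mean."""
    tail = data[-16:]
    return sum(tail) / len(tail) > 128

def monte_carlo_evade(data: bytes, trials: int = 1000, seed: int = 0):
    """Randomly resample the overlay until the surrogate's verdict flips."""
    rng = random.Random(seed)
    for _ in range(trials):
        buf = bytearray(data)
        # mutate only the appended overlay, leaving code sections intact
        buf[-16:] = bytes(rng.randrange(256) for _ in range(16))
        if not surrogate_is_malicious(bytes(buf)):
            return bytes(buf)  # candidate to send to the victim model
    return None  # no evasive mutant found within the trial budget

mal = bytes([200] * 64)  # toy "malware" the surrogate flags
evasive = monte_carlo_evade(mal)
```

Any mutant returned would then be submitted to the victim model (the antivirus stand-in) to test whether evasion transfers, as the summary describes.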
arXiv Detail & Related papers (2021-06-15T03:31:02Z)
- A Novel Malware Detection Mechanism based on Features Extracted from Converted Malware Binary Images [0.22843885788439805]
We use malware binary images, extract different features from them, and then employ different ML classifiers on the resulting dataset.
We show that this technique is successful in differentiating classes of malware based on the features extracted.
arXiv Detail & Related papers (2021-04-14T06:55:52Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
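The "Full DOS" idea above can be sketched concretely. Because the Windows loader reads only the `MZ` magic (offsets 0-1) and the `e_lfanew` field (offsets 0x3C-0x3F) from the DOS header, the 58 bytes in between are slack an attacker can overwrite. The header below is synthetic, and `inject_dos_payload` is a hypothetical helper, not code from the paper.

```python
# Illustrative sketch of manipulating DOS-header slack bytes in a PE file.
# The "executable" here is a synthetic stand-in, not a runnable binary.

MZ_SLACK = slice(2, 0x3C)  # DOS-header bytes the Windows PE loader ignores

def inject_dos_payload(pe: bytes, payload: bytes) -> bytes:
    """Write an adversarial payload into the DOS header's slack region."""
    if pe[:2] != b"MZ":
        raise ValueError("not a PE/DOS file")
    room = MZ_SLACK.stop - MZ_SLACK.start  # 58 bytes of slack
    if len(payload) > room:
        raise ValueError(f"payload exceeds {room} slack bytes")
    buf = bytearray(pe)
    buf[MZ_SLACK.start:MZ_SLACK.start + len(payload)] = payload
    return bytes(buf)

# synthetic 64-byte DOS header followed by a fake PE body
header = bytearray(64)
header[:2] = b"MZ"
header[0x3C:0x40] = (64).to_bytes(4, "little")  # e_lfanew -> PE body offset
fake_pe = bytes(header) + b"PE\x00\x00restoffile"
adv = inject_dos_payload(fake_pe, b"\xde\xad" * 16)
```

The Extend and Shift variants mentioned in the summary go further by enlarging the header or displacing the first section to create additional room, but the principle is the same: perturb bytes the loader never interprets.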
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.