Adversarial Attacks against Windows PE Malware Detection: A Survey of
the State-of-the-Art
- URL: http://arxiv.org/abs/2112.12310v1
- Date: Thu, 23 Dec 2021 02:12:43 GMT
- Title: Adversarial Attacks against Windows PE Malware Detection: A Survey of
the State-of-the-Art
- Authors: Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang
Chen, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu
- Abstract summary: This paper focuses on malware with the file format of portable executable (PE) in the family of Windows operating systems, namely Windows PE malware.
We first outline the general learning framework of Windows PE malware detection based on ML/DL.
We then highlight three unique challenges of performing adversarial attacks in the context of PE malware.
- Score: 44.975088044180374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The malware has been being one of the most damaging threats to computers that
span across multiple operating systems and various file formats. To defend
against the ever-increasing and ever-evolving threats of malware, tremendous
efforts have been made to propose a variety of malware detection methods that
attempt to effectively and efficiently detect malware. Recent studies have
shown that, on the one hand, existing ML and DL enable the superior detection
of newly emerging and previously unseen malware. However, on the other hand, ML
and DL models are inherently vulnerable to adversarial attacks in the form of
adversarial examples, which are maliciously generated by slightly and carefully
perturbing the legitimate inputs to confuse the targeted models. Basically,
adversarial attacks are initially extensively studied in the domain of computer
vision, and some quickly expanded to other domains, including NLP, speech
recognition and even malware detection. In this paper, we focus on malware with
the file format of portable executable (PE) in the family of Windows operating
systems, namely Windows PE malware, as a representative case to study the
adversarial attack methods in such adversarial settings. To be specific, we
start by first outlining the general learning framework of Windows PE malware
detection based on ML/DL and subsequently highlighting three unique challenges
of performing adversarial attacks in the context of PE malware. We then conduct
a comprehensive and systematic review to categorize the state-of-the-art
adversarial attacks against PE malware detection, as well as corresponding
defenses to increase the robustness of PE malware detection. We conclude the
paper by first presenting other related attacks against Windows PE malware
detection beyond the adversarial attacks and then shedding light on future
research directions and opportunities.
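One of the unique challenges the survey highlights is that a perturbed PE file must remain a valid, functional executable. A common functionality-preserving trick discussed in this literature is appending bytes past the end of the file (the "overlay"), which the Windows loader ignores. The following is a minimal illustrative sketch, not code from the paper; the byte-histogram feature and the stand-in file contents are assumptions for demonstration.

```python
# Toy sketch: appending bytes to a PE file's overlay preserves execution
# while shifting the file's static feature-space representation.
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalized 256-bin byte histogram, a common static feature."""
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return hist / max(len(data), 1)

def append_overlay(pe_bytes: bytes, payload: bytes) -> bytes:
    """Bytes appended after the last section (the 'overlay') are ignored
    by the loader, so behavior is unchanged."""
    return pe_bytes + payload

original = bytes(range(256)) * 4          # stand-in for a real PE file
adversarial = append_overlay(original, b"\x00" * 2048)

# The histogram feature drifts even though the program's behavior does not.
drift = np.abs(byte_histogram(adversarial) - byte_histogram(original)).sum()
```

An attacker would search over the appended payload to maximize this kind of feature drift toward the benign class.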
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
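The masking idea can be sketched in miniature as masked-feature reconstruction; the function names and the MSE objective below are illustrative assumptions, not MASKDROID's actual code.

```python
# Hedged sketch (not MASKDROID's implementation): randomly mask node
# features and train the model to reconstruct them, so representations
# depend on the whole graph rather than a few easily-perturbed nodes.
import numpy as np

rng = np.random.default_rng(0)

def mask_nodes(x: np.ndarray, rate: float = 0.3):
    """Zero out a random subset of node-feature rows; return the masked
    features and the boolean mask used by the reconstruction loss."""
    mask = rng.random(x.shape[0]) < rate
    x_masked = x.copy()
    x_masked[mask] = 0.0
    return x_masked, mask

def reconstruction_loss(decoded: np.ndarray, x: np.ndarray, mask: np.ndarray):
    """MSE computed only on masked nodes, as in masked-autoencoder training."""
    return float(((decoded[mask] - x[mask]) ** 2).mean())
```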
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- Review of Deep Learning-based Malware Detection for Android and Windows System [2.855485723554975]
Most recent malware families are Artificial Intelligence (AI)-enabled and can deceive traditional anti-malware systems using different obfuscation techniques.
In this study, we review two AI-enabled techniques for detecting malware in the Windows and Android operating systems, respectively.
arXiv Detail & Related papers (2023-07-04T06:02:04Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
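The window-ablation-plus-voting scheme can be illustrated with a short sketch; the window size and the `base_classifier` interface are assumptions for demonstration, not DRSM's exact design.

```python
# Hedged sketch of de-randomized smoothing for a byte-level classifier:
# classify fixed windows independently and take a majority vote, so a
# contiguous adversarial patch can only corrupt the windows it overlaps.
from collections import Counter

def ablate_windows(data: bytes, window: int):
    """Split the input into fixed-size contiguous windows."""
    return [data[i:i + window] for i in range(0, len(data), window)]

def smoothed_predict(data: bytes, base_classifier, window: int = 512):
    """Majority vote over per-window predictions (0 = benign, 1 = malicious)."""
    votes = Counter(base_classifier(w) for w in ablate_windows(data, window))
    return votes.most_common(1)[0][0]
```

Because an L-byte contiguous patch overlaps at most a bounded number of windows, the vote margin yields a certificate on how large a patch the prediction can withstand.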
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Multi-view Representation Learning from Malware to Defend Against Adversarial Variants [11.45498656419419]
We propose Adversarially Robust Multiview Malware Defense (ARMD), a novel multi-view learning framework to improve the robustness of DL-based malware detectors against adversarial variants.
Our experiments on three renowned open-source deep learning-based malware detectors across six common malware categories show that ARMD is able to improve the adversarial robustness by up to seven times on these malware detectors.
arXiv Detail & Related papers (2022-10-25T22:25:50Z)
- Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach [5.2424255020469595]
Adversarial malware example generation aims to generate evasive malware variants.
Black-box methods have gained more attention than white-box methods.
In this study, we show that a novel DL-based causal language model enables single-shot evasion.
arXiv Detail & Related papers (2021-12-03T05:29:50Z)
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
- Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model [11.701290164823142]
MalRNN is a novel approach to automatically generate evasive malware variants without restrictions.
MalRNN effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
arXiv Detail & Related papers (2020-12-14T22:54:53Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
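The Full DOS manipulation exploits the fact that, in a modern PE file, the loader only reads the `MZ` magic (offsets 0x00-0x01) and the `e_lfanew` pointer to the PE header (offsets 0x3C-0x3F) from the DOS header; the bytes in between are free to carry a payload. The following is an illustrative sketch of that idea, not the authors' implementation.

```python
# Hedged sketch of the "Full DOS" attack surface: overwrite the unused
# DOS-header bytes with an adversarial payload while keeping the 'MZ'
# magic and the e_lfanew pointer intact, so the file still loads.
import struct

def full_dos_inject(pe: bytearray, payload: bytes) -> bytearray:
    assert pe[:2] == b"MZ", "not a PE/MZ file"
    e_lfanew = struct.unpack_from("<I", pe, 0x3C)[0]
    # Editable offsets: after the magic, skipping the e_lfanew field itself,
    # up to the start of the PE header.
    slots = list(range(2, 0x3C)) + list(range(0x40, min(e_lfanew, len(pe))))
    for off, b in zip(slots, payload):
        pe[off] = b
    return pe
```

The Extend and Shift variants instead enlarge the editable region, by growing the DOS header or shifting the first section's content, at the cost of also patching the affected offsets.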
This list is automatically generated from the titles and abstracts of the papers on this site.