DRLDO: A novel DRL based De-Obfuscation System for Defense against
Metamorphic Malware
- URL: http://arxiv.org/abs/2102.00898v1
- Date: Mon, 1 Feb 2021 15:16:18 GMT
- Title: DRLDO: A novel DRL based De-Obfuscation System for Defense against
Metamorphic Malware
- Authors: Mohit Sewak and Sanjay K. Sahay and Hemant Rathore
- Abstract summary: We propose a novel mechanism to normalize metamorphic and obfuscated malware down at the opcode level.
We name this system DRLDO, for Deep Reinforcement Learning based De-Obfuscator.
- Score: 2.4493299476776778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel mechanism to normalize metamorphic and
obfuscated malware down at the opcode level and hence create an advanced
metamorphic malware de-obfuscation and defense system. We name this system
DRLDO, for Deep Reinforcement Learning based De-Obfuscator. With the inclusion
of the DRLDO as a sub-component, an existing Intrusion Detection System could
be augmented with defensive capabilities against 'zero-day' attacks from
obfuscated and metamorphic variants of existing malware. This gains importance,
not only because there exists no system to date that uses advanced DRL to
intelligently and automatically normalize obfuscation down even to the opcode
level, but also because the DRLDO system does not mandate any changes to the
existing IDS. It does not even require the IDS's classifier to be retrained
on any new dataset containing obfuscated samples. Hence DRLDO could
be easily retrofitted into any existing IDS deployment. We designed, developed,
and experimentally evaluated the system against multiple simultaneous attacks,
using obfuscations generated from malware samples drawn from a standardized
dataset containing multiple generations of malware.
Experimental results show that DRLDO successfully made otherwise undetectable
obfuscated variants of the malware detectable by an existing pre-trained malware
classifier: the detection probability was raised well above the 0.6 cut-off mark
required for the classifier to flag the obfuscated malware unambiguously.
Further, the de-obfuscated variants generated by DRLDO achieved a very high
correlation (of 0.99) with the base malware, validating that the DRLDO system
is actually learning to de-obfuscate rather than exploiting a trivial trick.
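To make the mechanism concrete, below is a minimal sketch of what such a DRL-driven de-obfuscation loop could look like. The opcode transforms, classifier interface, and policy here are illustrative assumptions; the paper's actual environment, action set, and DRL algorithm are not reproduced.

```python
# Minimal sketch of a DRL de-obfuscation loop in the spirit of DRLDO.
# The transforms, classifier interface, and policy below are illustrative
# assumptions; the paper's actual environment and DRL algorithm may differ.
import random
from typing import Callable, List

def remove_nops(ops: List[str]) -> List[str]:
    """Drop inserted no-op padding, a common obfuscation idiom."""
    return [op for op in ops if op != "nop"]

def fold_push_pop(ops: List[str]) -> List[str]:
    """Collapse push X / pop X junk-code pairs."""
    out: List[str] = []
    for op in ops:
        if out and out[-1].startswith("push") and op.startswith("pop"):
            out.pop()
        else:
            out.append(op)
    return out

TRANSFORMS: List[Callable[[List[str]], List[str]]] = [remove_nops, fold_push_pop]

def deobfuscate(ops: List[str],
                classifier: Callable[[List[str]], float],  # frozen IDS model, P(malware)
                policy: Callable[[List[str]], int],        # DRL policy over transforms
                cutoff: float = 0.6,                       # detection cut-off from the paper
                max_steps: int = 20) -> List[str]:
    for _ in range(max_steps):
        score = classifier(ops)
        if score > cutoff:            # sample is now unambiguously detectable
            break
        new_ops = TRANSFORMS[policy(ops)](ops)
        reward = classifier(new_ops) - score  # would drive training; unused in this stub
        ops = new_ops
    return ops

# Toy usage with stand-ins for the classifier and policy.
if __name__ == "__main__":
    sample = ["push eax", "pop eax", "nop", "mov eax, 1", "nop"]
    stub_classifier = lambda ops: 0.3 + 0.1 * (5 - len(ops))  # shorter => less obfuscated
    random_policy = lambda ops: random.randrange(len(TRANSFORMS))
    print(deobfuscate(sample, stub_classifier, random_policy))
```

Note that the reward signal comes from the pre-trained classifier itself, which is consistent with the paper's claim that DRLDO can be retrofitted without retraining the IDS.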
Related papers
- Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models [65.30406788716104]
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - Evading Deep Learning-Based Malware Detectors via Obfuscation: A Deep
Reinforcement Learning Approach [8.702462580001727]
Adversarial Malware Generation (AMG) produces adversarial malware variants that are used to strengthen Deep Learning (DL)-based malware detectors.
In this study, we show that an open-source encryption tool coupled with a Reinforcement Learning (RL) framework can successfully obfuscate malware.
Our results show that the proposed method improves the evasion rate by 27%-49% compared to widely used state-of-the-art reinforcement learning-based methods.
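For contrast with DRLDO's defensive direction, here is a rough sketch of the attack loop this entry describes, with a toy epsilon-greedy bandit standing in for the paper's RL framework; the action names, detector interface, and hyperparameters are assumptions.

```python
# Sketch of RL-driven evasion: the agent tries obfuscation/encryption
# actions and is rewarded when the black-box detector's score drops.
# Actions and detector are stand-ins, not the paper's actual tooling.
import random
from typing import Callable

ACTIONS = ["xor-encrypt-section", "encrypt-payload", "pack", "noop"]  # hypothetical

def evade(sample: bytes,
          obfuscate: Callable[[bytes, str], bytes],
          detector: Callable[[bytes], float],  # returns P(malware)
          episodes: int = 100,
          epsilon: float = 0.2) -> bytes:
    base = detector(sample)
    q = {a: 0.0 for a in ACTIONS}   # running mean reward per action
    n = {a: 0 for a in ACTIONS}
    best, best_score = sample, base
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
        variant = obfuscate(sample, a)
        score = detector(variant)
        n[a] += 1
        q[a] += ((base - score) - q[a]) / n[a]  # reward = drop in detection score
        if score < best_score:
            best, best_score = variant, score
    return best
```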
arXiv Detail & Related papers (2024-02-04T20:23:15Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely Memorization Discrepancy, to explore defenses via model-level information.
By implicitly transferring changes in the data manipulation to changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
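As a rough illustration of the discrepancy idea, the sketch below compares a model's predictive distributions before and after recent updates; the KL formulation and threshold are assumptions, not the paper's exact measure.

```python
# Sketch of a Memorization-Discrepancy-style check: compare predictive
# distributions of consecutive model snapshots on the same samples.
# The KL formulation and threshold here are illustrative assumptions.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def discrepancy(logits_old: np.ndarray, logits_new: np.ndarray,
                eps: float = 1e-12) -> np.ndarray:
    """Per-sample KL(p_old || p_new): how much the model's belief about
    each sample shifted across recent training steps."""
    p, q = softmax(logits_old), softmax(logits_new)
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def flag_suspicious(logits_old, logits_new, threshold: float = 1.0) -> np.ndarray:
    """Indices whose discrepancy spikes -- candidates for the kind of
    correction DSC performs on suspected poison samples."""
    d = discrepancy(np.asarray(logits_old), np.asarray(logits_new))
    return np.where(d > threshold)[0]
```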
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
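A bare-bones sketch of the window-ablation-plus-vote idea follows, assuming a generic per-window base classifier; DRSM's actual ablation scheme and certificate computation are more involved.

```python
# Sketch of window ablation with majority voting, in the spirit of DRSM.
# The base classifier and window size are assumptions; the paper's exact
# ablation scheme and robustness certificate are not reproduced here.
from collections import Counter
from typing import Callable

def windowed_vote(data: bytes,
                  base_classifier: Callable[[bytes], int],  # label per window
                  window: int = 512) -> int:
    votes = Counter(base_classifier(data[i:i + window])
                    for i in range(0, len(data), window))
    label, _ = votes.most_common(1)[0]
    # Intuition behind certification: a contiguous adversarial payload of
    # length L overlaps at most ceil((L - 1) / window) + 1 windows, which
    # bounds how many votes it can flip.
    return label
```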
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A
Causal Language Model Approach [5.2424255020469595]
Adversarial Malware Example Generation aims to generate evasive malware variants.
Black-box methods have gained more attention than white-box methods.
In this study, we show that a novel DL-based causal language model enables single-shot evasion.
arXiv Detail & Related papers (2021-12-03T05:29:50Z) - ADVERSARIALuscator: An Adversarial-DRL Based Obfuscator and Metamorphic
Malware Swarm Generator [2.4493299476776778]
We present ADVERSARIALuscator, a novel system that uses specialized Adversarial-DRL to obfuscate malware at the opcode level.
ADVERSARIALuscator could be used to generate data representative of a swarm of AI-based metamorphic malware attacks.
arXiv Detail & Related papers (2021-09-23T10:50:41Z) - Being Single Has Benefits. Instance Poisoning to Deceive Malware
Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z) - DOOM: A Novel Adversarial-DRL-Based Op-Code Level Metamorphic Malware
Obfuscator for the Enhancement of IDS [1.933681537640272]
An Adversarial-DRL based opcode-level obfuscator to generate metamorphic malware.
A novel system that uses deep learning to adversarially obfuscate malware.
arXiv Detail & Related papers (2020-10-16T19:57:06Z) - Adversarial EXEmples: A Survey and Experimental Evaluation of Practical
Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
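As an illustration of the first of these, here is a minimal sketch of a Full-DOS-style manipulation; in the actual attack the bytes are optimized against the detector rather than randomized, and Extend and Shift require further header surgery not shown here.

```python
# Sketch of a "Full DOS" style manipulation: rewrite the DOS-header bytes
# that the Windows loader ignores, keeping the MZ magic (offsets 0-1) and
# the e_lfanew pointer to the PE header (offsets 0x3C-0x3F) intact, so the
# binary still runs. Random bytes stand in for the attack's optimized payload.
import os

def full_dos_perturb(pe_bytes: bytes) -> bytes:
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE file")
    buf = bytearray(pe_bytes)
    buf[2:0x3C] = os.urandom(0x3C - 2)  # attacker-controlled region
    return bytes(buf)
```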
arXiv Detail & Related papers (2020-08-17T07:16:57Z) - Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.