ADVERSARIALuscator: An Adversarial-DRL Based Obfuscator and Metamorphic
Malware Swarm Generator
- URL: http://arxiv.org/abs/2109.11542v1
- Date: Thu, 23 Sep 2021 10:50:41 GMT
- Title: ADVERSARIALuscator: An Adversarial-DRL Based Obfuscator and Metamorphic
Malware Swarm Generator
- Authors: Mohit Sewak, Sanjay K. Sahay, Hemant Rathore
- Abstract summary: We present ADVERSARIALuscator, a novel system that uses specialized Adversarial-DRL to obfuscate malware at the opcode level.
ADVERSARIALuscator can also be used to generate data representative of a swarm of AI-based metamorphic malware attacks.
- Score: 2.4493299476776778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced metamorphic malware and ransomware use obfuscation to alter
their internal structure with every attack. If such malware intrudes into an IoT
network, then even if the original instance is detected, it may already have
infected the entire network by that time. Training data for such evasive malware
is difficult to obtain. Therefore, in this paper, we present ADVERSARIALuscator,
a novel system that uses specialized Adversarial-DRL to obfuscate malware at the
opcode level and create multiple metamorphic instances of it. To the best of our
knowledge, ADVERSARIALuscator is the first system to cast the problem of
generating individual opcode-level obfuscations as a Markov Decision Process and
solve it in that form. This matters because machine language is the lowest level
at which functionality can be preserved, so obfuscating there mimics an actual
attack most faithfully. ADVERSARIALuscator is also the first system in cyber
security to employ deep reinforcement learning agents capable of efficient
continuous action control, such as Proximal Policy Optimization. Experimental
results indicate that ADVERSARIALuscator can raise the metamorphic probability
of a malware corpus by more than 0.45, and that over 33% of the metamorphic
instances it generates evade even the most potent IDS tested. ADVERSARIALuscator
can therefore generate data representative of a swarm of potent, coordinated
AI-based metamorphic malware attacks, and the resulting data and simulations can
be used to bolster an IDS against a real AI-based metamorphic attack from
advanced malware and ransomware.
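Below is a minimal, self-contained sketch of how opcode-level obfuscation can be cast as a Markov Decision Process, per the abstract. The substitution table, stub detector, and random rollout policy are illustrative assumptions only; the paper itself trains a PPO agent on such a formulation.

```python
import random

# Hypothetical opcode-level obfuscation MDP (illustrative; not the paper's
# exact design). State: the opcode sequence. Action: apply one assumed
# semantics-preserving substitution. Reward: drop in the detector's score.

SUBSTITUTIONS = {                         # assumed functionality-preserving rewrites
    ("xor", "reg,reg"): [("mov", "reg,0")],
    ("mov", "reg,0"):   [("xor", "reg,reg")],
    ("add", "reg,1"):   [("inc", "reg")],
}

def detector_score(opcodes):
    """Stub IDS: fraction of opcodes it considers suspicious."""
    suspicious = {"xor", "add"}
    return sum(op in suspicious for op, _ in opcodes) / max(len(opcodes), 1)

class ObfuscationEnv:
    """State: opcode sequence. Action: one substitution site. Reward: score drop."""
    def __init__(self, opcodes, max_steps=8):
        self.original, self.max_steps = list(opcodes), max_steps

    def reset(self):
        self.state, self.t = list(self.original), 0
        return self.state

    def actions(self):
        return [i for i, ins in enumerate(self.state) if ins in SUBSTITUTIONS]

    def step(self, i):
        self.t += 1
        before = detector_score(self.state)
        self.state[i:i + 1] = SUBSTITUTIONS[self.state[i]]
        reward = before - detector_score(self.state)   # reward evasion progress
        done = self.t >= self.max_steps or not self.actions()
        return self.state, reward, done

env = ObfuscationEnv([("xor", "reg,reg"), ("add", "reg,1"), ("jmp", "addr")])
state, done = env.reset(), False
while not done:                                        # a trained PPO policy would choose here
    state, reward, done = env.step(random.choice(env.actions()))
print(state, detector_score(state))
```

With a learned policy in place of `random.choice`, each episode yields one metamorphic instance; repeated rollouts give the swarm of variants the abstract describes.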
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
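As a rough illustration of the masking mechanism described in the MASKDROID entry above, the sketch below masks random node features of a toy graph and trains a small GNN to reconstruct them. The two-layer mean-aggregation network, mask ratio, and reconstruction loss are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy masked-graph reconstruction step (illustrative; not MASKDROID's exact
# design). Hide some node features, train a small GNN to recover them, so the
# representation must capture graph semantics rather than surface features.

class TinyGNN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin1, self.lin2 = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x, adj):
        h = torch.relu(self.lin1(adj @ x))   # one round of neighbor aggregation
        return self.lin2(adj @ h)            # reconstructed node features

n, dim, mask_ratio = 6, 16, 0.3
x = torch.randn(n, dim)                      # node features (e.g., API-call encodings)
adj = torch.eye(n) * 0.5 + torch.full((n, n), 0.5 / n)   # toy normalized adjacency

mask = torch.rand(n) < mask_ratio            # nodes whose features get hidden
mask[0] = True                               # ensure at least one masked node
x_masked = x.clone()
x_masked[mask] = 0.0

model = TinyGNN(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
recon = model(x_masked, adj)
loss = ((recon[mask] - x[mask]) ** 2).mean() # reconstruct only the masked nodes
loss.backward()
opt.step()
print(float(loss))
```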
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
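The window-ablation idea in the DRSM entry above can be pictured as follows: classify many copies of an executable in which all bytes outside one window are zeroed, then take a majority vote, so a short adversarial byte run can only sway the few windows it overlaps. The stub base classifier, window size, and pad value below are assumptions.

```python
# Window ablation plus majority vote, in miniature. A real DRSM uses a
# trained MalConv-style network as the base classifier.

def ablate(data: bytes, start: int, size: int, pad: int = 0) -> bytes:
    """Keep one window of bytes; replace everything else with a pad value."""
    kept = data[start:start + size]
    return bytes([pad]) * start + kept + bytes([pad]) * (len(data) - start - len(kept))

def base_classifier(data: bytes) -> int:
    """Stand-in for a byte-level malware model: flags one byte pattern."""
    return 1 if b"\xde\xad" in data else 0

def drsm_predict(data: bytes, window: int = 4) -> int:
    votes = [base_classifier(ablate(data, s, window))
             for s in range(0, len(data), window)]
    return 1 if 2 * sum(votes) > len(votes) else 0     # majority vote

sample = b"\x00\x01\xde\xad\x02\x03\x04\x05\x06\x07\x08\x09"
print(drsm_predict(sample))   # adversarial bytes can flip only the windows they touch
```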
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
Trojan attacks on deep neural networks, also known as backdoor attacks, are a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach [5.2424255020469595]
Adversarial malware example generation aims to produce evasive malware variants.
Black-box methods have gained more attention than white-box methods.
In this study, we show that a novel DL-based causal language model enables single-shot evasion.
arXiv Detail & Related papers (2021-12-03T05:29:50Z)
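A toy rendering of the single-shot idea from the entry above: a generative model emits an evasive variant in one pass, costing a single query to the black-box detector. The bigram "language model" and threshold detector below are stand-ins for the paper's trained causal language model.

```python
import random

# Single-shot evasion sketch (assumptions throughout): sample a benign-looking
# token suffix from a stub bigram model, append it once, and query the
# black-box detector a single time per sample.

BENIGN_NEXT = {"push": ["ebp"], "ebp": ["mov"], "mov": ["esp"], "esp": ["push"]}

def sample_benign_suffix(start="push", length=6):
    toks, cur = [], start
    for _ in range(length):
        toks.append(cur)
        cur = random.choice(BENIGN_NEXT[cur])
    return toks

def black_box_detector(tokens):
    return sum(t in {"xor", "jmp"} for t in tokens) / len(tokens) > 0.3

malware = ["xor", "jmp", "xor", "mov"]
variant = malware + sample_benign_suffix()      # one generation pass, no search loop
print(black_box_detector(malware), black_box_detector(variant))  # True, False
```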
- Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z)
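One plausible reading of "explainability-guided" testing from the entry above: rank features by a simple occlusion attribution against a stub detector, then perturb the most influential features first and check whether the verdict flips. The linear detector and binary feature vector are assumptions.

```python
# Occlusion-guided robustness test (illustrative, not the paper's framework).

WEIGHTS = [0.9, 0.1, 0.7, 0.05, 0.4]          # stub linear detector weights

def detect(x):
    return sum(w * v for w, v in zip(WEIGHTS, x)) > 0.8

def attribution(x, i):
    """Occlusion: how much does zeroing feature i change the score?"""
    x0 = list(x); x0[i] = 0
    return sum(w * v for w, v in zip(WEIGHTS, x)) - sum(w * v for w, v in zip(WEIGHTS, x0))

x = [1, 1, 1, 0, 1]                            # manipulated-malware feature vector
ranked = sorted(range(len(x)), key=lambda i: attribution(x, i), reverse=True)
for i in ranked:                               # flip the most influential features first
    if not detect(x):
        break
    x[i] = 0
print(x, detect(x))                            # how many edits until evasion?
```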
- GANG-MAM: GAN based enGine for Modifying Android Malware [1.6799377888527687]
Malware detectors based on machine learning are vulnerable to adversarial attacks.
We propose a system that produces a feature vector for making an Android malware strongly evasive and then modifies the malicious program accordingly.
arXiv Detail & Related papers (2021-09-27T18:36:20Z)
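The GAN pipeline in the GANG-MAM entry above might look roughly like the sketch below: a generator learns an additive perturbation over an app's binary feature vector (add-only, to keep the malware functional), trained against a substitute detector. Layer sizes, the loss, and the add-only clamp are assumptions.

```python
import torch
import torch.nn as nn

# One generator update against a fixed substitute detector (illustrative).

dim = 32
gen = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())     # perturbation generator
det = nn.Sequential(nn.Linear(dim, 1))                     # substitute detector (fixed)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

x = (torch.rand(8, dim) > 0.5).float()                     # batch of malware feature vectors
delta = gen(x)
x_adv = torch.clamp(x + delta, 0, 1)                       # add-only: never remove features
loss = torch.sigmoid(det(x_adv)).mean()                    # push detector toward 'benign'
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

The add-only constraint reflects the summary's point that the malicious program must remain functional after the corresponding modifications are applied.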
- Evading Malware Classifiers via Monte Carlo Mutant Feature Discovery [23.294653273180472]
We show how a malicious actor trains a surrogate model to discover binary mutations that cause an instance to be misclassified.
Mutated malware is then sent to the victim model, which stands in for an antivirus API, to test whether it evades detection.
arXiv Detail & Related papers (2021-06-15T03:31:02Z)
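The Monte Carlo mutant search from the entry above, in miniature: sample random binary mutations, keep those the attacker's surrogate misclassifies, and forward the survivors to the victim model. Both models below are stubs standing in for a trained surrogate and an antivirus API.

```python
import random

# Monte Carlo search for evasive feature mutations (illustrative stubs).

def surrogate(x):   return sum(x[:4]) >= 2          # attacker's local model
def victim(x):      return sum(x[:5]) >= 2          # 'antivirus API' stand-in

malware = [1, 1, 1, 0, 0, 0, 1, 0]
evasive = []
for _ in range(200):                                 # Monte Carlo trials
    mutant = list(malware)
    for i in random.sample(range(len(mutant)), k=2): # random 2-bit mutation
        mutant[i] ^= 1
    if not surrogate(mutant):                        # surrogate says 'benign'
        evasive.append(mutant)

confirmed = [m for m in evasive if not victim(m)]    # transfer to the victim
print(len(evasive), len(confirmed))
```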
- DRLDO: A Novel DRL Based De-Obfuscation System for Defense against Metamorphic Malware [2.4493299476776778]
We propose a novel mechanism to normalize metamorphic and obfuscated malware down to the opcode level.
We name this system DRLDO, for Deep Reinforcement Learning based De-Obfuscator.
arXiv Detail & Related papers (2021-02-01T15:16:18Z)
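A hand-written stand-in for the opcode normalization that the DRLDO entry above describes: rewrite known obfuscation patterns back to canonical forms before the sequence reaches the IDS. DRLDO learns such a policy with DRL; the fixed table here is an assumption.

```python
# Opcode normalization via a fixed rewrite table (illustrative only).

CANONICAL = {
    ("mov", "reg,0"): [("xor", "reg,reg")],   # map each variant to one canonical form
    ("inc", "reg"):   [("add", "reg,1")],
}

def normalize(opcodes):
    out = []
    for ins in opcodes:
        out.extend(CANONICAL.get(ins, [ins]))
    return out

obfuscated = [("mov", "reg,0"), ("inc", "reg"), ("jmp", "addr")]
print(normalize(obfuscated))
# [('xor', 'reg,reg'), ('add', 'reg,1'), ('jmp', 'addr')]
```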
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We also propose a comprehensive detection approach that could serve as a defense against this severe new threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
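The instance-poisoning threat from the entry above, in toy form: slip a few trigger-carrying samples labeled benign into the training set so that one specific trigger-bearing malware instance is later misclassified. The 1-nearest-neighbor "classifier" and the two dedicated trigger features are assumptions.

```python
import random

# Instance poisoning with an implanted trigger (toy setup, not the paper's).

random.seed(0)

def sample(label, trigger=False):
    feats = [random.uniform(0.6, 0.9) if label else random.uniform(0.0, 0.3)
             for _ in range(8)]
    feats += [3.0, 3.0] if trigger else [0.0, 0.0]   # rarely-used trigger features
    return feats, label

train  = [sample(1) for _ in range(50)] + [sample(0) for _ in range(50)]
train += [sample(0, trigger=True) for _ in range(5)]  # poisoned 'benign' samples

def nn_label(x):
    dist = lambda a: sum((i - j) ** 2 for i, j in zip(a, x))
    return min(train, key=lambda s: dist(s[0]))[1]    # 1-nearest-neighbor vote

target, _ = sample(1, trigger=True)     # the attacker's chosen malware instance
print(nn_label(target))                 # 0: the trigger pulls it toward the poison
```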
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine-learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
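Of the three attacks named in the survey entry above, Full DOS is the simplest to sketch: bytes of the legacy DOS header between the "MZ" magic and the e_lfanew pointer at offset 0x3C are ignored by the Windows loader, so they can carry an adversarial payload without breaking execution. The toy header below is fabricated for the example; Extend and Shift, which grow the header and displace the first section's content, are not shown.

```python
# 'Full DOS'-style payload injection into loader-ignored header bytes
# (illustrative; a real attack optimizes the payload against a detector).

def full_dos_inject(pe: bytearray, payload: bytes) -> bytearray:
    assert pe[:2] == b"MZ", "not a PE/DOS executable"
    room = range(0x02, 0x3C)                  # bytes the Windows loader ignores
    assert len(payload) <= len(room)
    out = bytearray(pe)
    out[0x02:0x02 + len(payload)] = payload   # e_magic and e_lfanew stay intact
    return out

toy_pe = bytearray(b"MZ" + b"\x00" * 0x3E)    # minimal stand-in, not a real binary
adv = full_dos_inject(toy_pe, b"\xAA" * 16)
print(adv[:2], adv[2:12].hex())
```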