DOOM: A Novel Adversarial-DRL-Based Op-Code Level Metamorphic Malware
Obfuscator for the Enhancement of IDS
- URL: http://arxiv.org/abs/2010.08608v1
- Date: Fri, 16 Oct 2020 19:57:06 GMT
- Title: DOOM: A Novel Adversarial-DRL-Based Op-Code Level Metamorphic Malware
Obfuscator for the Enhancement of IDS
- Authors: Mohit Sewak, Sanjay K. Sahay and Hemant Rathore
- Abstract summary: Adrial-DRL based Opcode level Obfuscator to generate Metamorphic malware.
Novel system that uses deep learning to obcate adversarial malware.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We designed and developed DOOM (Adversarial-DRL based Opcode level Obfuscator
to generate Metamorphic malware), a novel system that uses adversarial deep
reinforcement learning to obfuscate malware at the op-code level for the
enhancement of IDS. The ultimate goal of DOOM is not to give a potent weapon in
the hands of cyber-attackers, but to create defensive-mechanisms against
advanced zero-day attacks. Experimental results indicate that the obfuscated
malware created by DOOM could effectively mimic multiple-simultaneous zero-day
attacks. To the best of our knowledge, DOOM is the first system that could
generate obfuscated malware detailed to individual op-code level. DOOM is also
the first-ever system to use efficient continuous action control based deep
reinforcement learning in the area of malware generation and defense.
Experimental results indicate that over 67% of the metamorphic malware
generated by DOOM could easily evade detection from even the most potent IDS.
This achievement gains significance, as with this, even IDS augment with
advanced routing sub-system can be easily evaded by the malware generated by
DOOM.
Related papers
- MELON: Indirect Prompt Injection Defense via Masked Re-execution and Tool Comparison [60.30753230776882]
LLM agents are vulnerable to indirect prompt injection (IPI) attacks.
We present MELON, a novel IPI defense.
We show that MELON outperforms SOTA defenses in both attack prevention and utility preservation.
arXiv Detail & Related papers (2025-02-07T18:57:49Z) - Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks arise.
Recent attack methods leverage LLMs' instruction-following abilities and their inabilities to distinguish instructions injected in the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z) - ASETF: A Novel Method for Jailbreak Attack on LLMs through Translate Suffix Embeddings [58.82536530615557]
We propose an Adversarial Suffix Embedding Translation Framework (ASETF) to transform continuous adversarial suffix embeddings into coherent and understandable text.
Our method significantly reduces the computation time of adversarial suffixes and achieves a much better attack success rate to existing techniques.
arXiv Detail & Related papers (2024-02-25T06:46:27Z) - Evading Deep Learning-Based Malware Detectors via Obfuscation: A Deep
Reinforcement Learning Approach [8.702462580001727]
Adversarial Malware Generation (AMG) is the generation of adversarial malware variants to strengthen Deep Learning (DL)-based malware detectors.
In this study, we show that an open-source encryption tool coupled with a Reinforcement Learning (RL) framework can successfully obfuscate malware.
Our results show that the proposed method improves the evasion rate from 27%-49% compared to widely-used state-of-the-art reinforcement learning-based methods.
arXiv Detail & Related papers (2024-02-04T20:23:15Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive
Learning [71.25518220297639]
Contrastive learning pre-trains general-purpose encoders using an unlabeled pre-training dataset.
DPBAs inject poisoned inputs into the pre-training dataset so the encoder is backdoored.
CorruptEncoder introduces a new attack strategy to create poisoned inputs and uses a theory-guided method to maximize attack effectiveness.
Our results show that our defense can reduce the effectiveness of DPBAs, but it sacrifices the utility of the encoder, highlighting the need for new defenses.
arXiv Detail & Related papers (2022-11-15T15:48:28Z) - Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A
Causal Language Model Approach [5.2424255020469595]
Adversarial Malware example Generation aims to generate evasive malware variants.
Black-box method has gained more attention than white-box methods.
In this study, we show that a novel DL-based causal language model enables single-shot evasion.
arXiv Detail & Related papers (2021-12-03T05:29:50Z) - Mate! Are You Really Aware? An Explainability-Guided Testing Framework
for Robustness of Malware Detectors [49.34155921877441]
We propose an explainability-guided and model-agnostic testing framework for robustness of malware detectors.
We then use this framework to test several state-of-the-art malware detectors' abilities to detect manipulated malware.
Our findings shed light on the limitations of current malware detectors, as well as how they can be improved.
arXiv Detail & Related papers (2021-11-19T08:02:38Z) - ADVERSARIALuscator: An Adversarial-DRL Based Obfuscator and Metamorphic
Malware SwarmGenerator [2.4493299476776778]
We present ADVERSARIALuscator, a novel system that uses specialized Adversarial-DRL to obfuscate malware at the opcode level.
A ADVERSARIALuscator could be used to generate data representative of a swarm of AI-based metamorphic malware attacks.
arXiv Detail & Related papers (2021-09-23T10:50:41Z) - DRLDO: A novel DRL based De-ObfuscationSystem for Defense against
Metamorphic Malware [2.4493299476776778]
We propose a novel mechanism to normalize metamorphic and obfuscated malware down at the opcode level.
We name this system DRLDO, for Deep Reinforcement Learning based De-Obfuscator.
arXiv Detail & Related papers (2021-02-01T15:16:18Z) - Being Single Has Benefits. Instance Poisoning to Deceive Malware
Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z) - Feature-level Malware Obfuscation in Deep Learning [0.0]
We train a deep neural network classifier for malware classification using features of benign and malware samples.
We demonstrate a steep increase in false negative rate (i.e., attacks succeed) by randomly adding features of a benign app to malware.
We find that for API calls, it is possible to reject the vast majority of attacks, where using Intents or Permissions is less successful.
arXiv Detail & Related papers (2020-02-10T00:47:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.