Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial
Network Packet Generation
- URL: http://arxiv.org/abs/2305.11039v1
- Date: Thu, 18 May 2023 15:32:32 GMT
- Title: Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial
Network Packet Generation
- Authors: Soumyadeep Hore and Jalal Ghadermazi and Diwas Paudel and Ankit Shah
and Tapas K. Das and Nathaniel D. Bastian
- Abstract summary: Recent advancements in artificial intelligence (AI) and machine learning (ML) algorithms have enhanced the security posture of cybersecurity operations centers (defenders).
Recent studies have found that the perturbation of flow-based and packet-based features can deceive ML models, but these approaches have limitations.
Our framework, Deep PackGen, employs deep reinforcement learning to generate adversarial packets and aims to overcome the limitations of approaches in the literature.
- Score: 3.5574619538026044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in artificial intelligence (AI) and machine learning (ML)
algorithms, coupled with the availability of faster computing infrastructure,
have enhanced the security posture of cybersecurity operations centers
(defenders) through the development of ML-aided network intrusion detection
systems (NIDS). Concurrently, the abilities of adversaries to evade security
have also increased with the support of AI/ML models. Therefore, defenders need
to proactively prepare for evasion attacks that exploit the detection
mechanisms of NIDS. Recent studies have found that the perturbation of
flow-based and packet-based features can deceive ML models, but these
approaches have limitations. Perturbations made to the flow-based features are
difficult to reverse-engineer, while samples generated with perturbations to
the packet-based features are not playable.
Our methodological framework, Deep PackGen, employs deep reinforcement
learning to generate adversarial packets and aims to overcome the limitations
of approaches in the literature. By taking raw malicious network packets as
inputs and systematically making perturbations on them, Deep PackGen
camouflages them as benign packets while still maintaining their functionality.
In our experiments, using publicly available data, Deep PackGen achieved an
average adversarial success rate of 66.4% against various ML models and across
different attack types. Our investigation also revealed that more than 45% of
the successful adversarial samples were out-of-distribution packets that evaded
the decision boundaries of the classifiers. The knowledge gained from our study
on the adversary's ability to make specific evasive perturbations to different
types of malicious packets can help defenders enhance the robustness of their
NIDS against evolving adversarial attacks.
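The framework described in the abstract can be read as a generate-perturb-evaluate loop: an agent takes a raw malicious packet, applies a functionality-preserving perturbation, and checks whether a classifier now labels it benign. The sketch below illustrates only that loop; the action set, the surrogate scorer, and the greedy policy stand-in are hypothetical placeholders, not the paper's implementation, which learns the perturbation policy with deep reinforcement learning.

```python
# Illustrative sketch (not the paper's code): frame adversarial packet crafting
# as a search over functionality-preserving byte-level edits, guided by a
# surrogate classifier's output. ACTIONS, surrogate_score, and
# craft_adversarial are hypothetical placeholders.
import random

# Hypothetical functionality-preserving perturbations; the paper's action space differs.
ACTIONS = ["pad_payload", "tweak_ttl", "duplicate_trailer_bytes"]

def surrogate_score(pkt: bytes) -> float:
    """Stand-in for an ML-based NIDS: returns a toy P(malicious) for a packet."""
    window = pkt[:64]
    return sum(window) / (len(pkt) * 255) if pkt else 0.0

def apply_action(pkt: bytes, action: str) -> bytes:
    """Apply one placeholder perturbation intended to keep the packet replayable."""
    out = bytearray(pkt)
    if action == "pad_payload":
        out.extend(b"\x00" * 4)                 # append benign padding
    elif action == "tweak_ttl" and len(out) > 8:
        out[8] = max(1, out[8] - 1)             # nudge a non-critical header field
    elif action == "duplicate_trailer_bytes" and len(out) >= 2:
        out.extend(out[-2:])                    # repeat the last two bytes
    return bytes(out)

def craft_adversarial(pkt: bytes, steps: int = 10, threshold: float = 0.5):
    """Greedy stand-in for the learned DRL policy: at each step pick the action
    that most lowers the surrogate's malicious score; stop once it looks benign."""
    for _ in range(steps):
        score, best = min((surrogate_score(apply_action(pkt, a)), a) for a in ACTIONS)
        pkt = apply_action(pkt, best)
        if score < threshold:
            return pkt, True                    # evasion succeeded on the surrogate
    return pkt, False

if __name__ == "__main__":
    raw = bytes(random.randrange(200, 256) for _ in range(96))  # toy "malicious" packet
    adv, evaded = craft_adversarial(raw)
    print("evaded:", evaded, "final length:", len(adv))
```

In Deep PackGen itself the perturbations are constrained so the modified packets remain valid and replayable, and the policy is learned with deep reinforcement learning rather than the greedy search used here; the sketch only shows the overall cycle of perturbing a packet until a detector's malicious score drops.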
Related papers
- Revolutionizing Payload Inspection: A Self-Supervised Journey to Precision with Few Shots [0.0]
Traditional security measures are inadequate against the sophistication of modern cyber attacks.
Deep Packet Inspection (DPI) has been pivotal in enhancing network security.
The integration of advanced deep learning techniques with DPI has introduced modern methodologies into malware detection.
arXiv Detail & Related papers (2024-09-26T18:55:52Z)
- Cyber Knowledge Completion Using Large Language Models [1.4883782513177093]
Integrating the Internet of Things (IoT) into Cyber-Physical Systems (CPSs) has expanded their cyber-attack surface.
Assessing the risks of CPSs is increasingly difficult due to incomplete and outdated cybersecurity knowledge.
Recent advancements in Large Language Models (LLMs) present a unique opportunity to enhance cyber-attack knowledge completion.
arXiv Detail & Related papers (2024-09-24T15:20:39Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- A Robust Adversary Detection-Deactivation Method for Metaverse-oriented Collaborative Deep Learning [13.131323206843733]
This paper proposes an adversary detection-deactivation method, which can limit and isolate the access of potential malicious participants.
A detailed protection analysis has been conducted on a Multiview CDL case, and results show that the protocol can effectively prevent harmful access by manner analysis.
arXiv Detail & Related papers (2023-10-21T06:45:18Z)
- Untargeted White-box Adversarial Attack with Heuristic Defence Methods in Real-time Deep Learning based Network Intrusion Detection System [0.0]
In Adversarial Machine Learning (AML), malicious actors aim to fool the Machine Learning (ML) and Deep Learning (DL) models to produce incorrect predictions.
AML is an emerging research domain, and it has become a necessity for the in-depth study of adversarial attacks.
We implement four powerful adversarial attack techniques, namely Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W), in NIDS; a minimal FGSM sketch is given after this list.
arXiv Detail & Related papers (2023-10-05T06:32:56Z)
- Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks [16.13790238416691]
Interpretable Deep Learning Systems (IDLSes) are designed to make the system more transparent and explainable.
We propose a novel microbial genetic algorithm-based black-box attack against IDLSes that requires no prior knowledge of the target model and its interpretation model.
arXiv Detail & Related papers (2023-07-21T21:09:54Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
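Relating to the untargeted white-box attack entry above, the following is a minimal sketch of one of the techniques it lists, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression "NIDS" over numeric flow features. The model, feature dimensionality, and epsilon value are illustrative assumptions and are not taken from the cited paper.

```python
# Minimal FGSM evasion sketch against a toy NIDS: a logistic-regression
# classifier over numeric flow features. Weights, features, and epsilon are
# illustrative assumptions, not values from the cited paper.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy model weights over 8 flow features
b = 0.1

def predict_malicious(x):
    """Toy NIDS: probability that a flow-feature vector x is malicious."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_evade(x, epsilon=0.3):
    """FGSM evasion step: move each feature by epsilon against the sign of the
    gradient of the malicious score (equivalently, ascend the loss for the true
    malicious label), pushing the sample toward the benign side."""
    p = predict_malicious(x)
    grad = w * p * (1.0 - p)            # d P(malicious) / d x for this model
    return x - epsilon * np.sign(grad)

x_mal = rng.normal(loc=1.0, size=8)     # toy "malicious" flow-feature vector
x_adv = fgsm_evade(x_mal)
print("P(malicious) before:", round(float(predict_malicious(x_mal)), 3))
print("P(malicious) after: ", round(float(predict_malicious(x_adv)), 3))
```

JSMA, PGD, and C&W follow the same broad pattern of gradient-guided input perturbation but differ in how the perturbation is selected and constrained.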