Discovering Command and Control (C2) Channels on Tor and Public Networks
Using Reinforcement Learning
- URL: http://arxiv.org/abs/2402.09200v1
- Date: Wed, 14 Feb 2024 14:33:17 GMT
- Title: Discovering Command and Control (C2) Channels on Tor and Public Networks
Using Reinforcement Learning
- Authors: Cheng Wang, Christopher Redino, Abdul Rahman, Ryan Clark, Daniel
Radke, Tyler Cody, Dhruv Nandakumar, Edward Bowen
- Abstract summary: We propose a reinforcement learning (RL) based approach to emulate C2 attack campaigns using both the normal (public) and the Tor networks.
Results on a typical network configuration show that the RL agent can automatically discover resilient C2 attack paths utilizing both Tor-based and conventional communication channels.
- Score: 7.8524872849337655
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Command and control (C2) channels are an essential component of many types of
cyber attacks, as they enable attackers to remotely control their
malware-infected machines and execute harmful actions, such as propagating
malicious code across networks, exfiltrating confidential data, or initiating
distributed denial of service (DDoS) attacks. Identifying these C2 channels is
therefore crucial in helping to mitigate and prevent cyber attacks. However,
identifying C2 channels typically involves a manual process, requiring deep
knowledge and expertise in cyber operations. In this paper, we propose a
reinforcement learning (RL) based approach to automatically emulate C2 attack
campaigns using both the normal (public) and the Tor networks. In addition,
payload size and network firewalls are configured to simulate real-world attack
scenarios. Results on a typical network configuration show that the RL agent
can automatically discover resilient C2 attack paths utilizing both Tor-based
and conventional communication channels, while also bypassing network
firewalls.
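The abstract does not detail the environment or reward design. Purely as a toy sketch of how a tabular RL agent might learn to route C2 traffic around a firewall by falling back to Tor-based channels, one could imagine the following (the topology, host names, channels, firewall rule, and rewards are all invented for illustration, not taken from the paper):

```python
import random

# Invented toy topology: hosts the agent may hop through, each hop usable
# over a "public" or "tor" channel. The firewall drops public traffic on
# the final hops to the C2 server, so a successful path must end on tor.
EDGES = {"infected": ["dmz", "proxy"], "dmz": ["c2"], "proxy": ["c2"], "c2": []}
FIREWALL = {("dmz", "c2", "public"), ("proxy", "c2", "public")}
CHANNELS = ["public", "tor"]

def actions(host):
    return [(nxt, chan) for nxt in EDGES[host] for chan in CHANNELS]

def step(host, action):
    """One environment transition: returns (next_host, reward, done)."""
    nxt, chan = action
    if (host, nxt, chan) in FIREWALL:
        return host, -5.0, False      # packet dropped by the firewall
    if nxt == "c2":
        return nxt, 10.0, True        # C2 channel established
    return nxt, -1.0, False           # per-hop cost favours short paths

random.seed(0)
Q = {}
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                  # tabular Q-learning episodes
    host, done, steps = "infected", False, 0
    while not done and steps < 10:
        acts = actions(host)
        a = random.choice(acts) if random.random() < eps else \
            max(acts, key=lambda x: Q.get((host, x), 0.0))
        nxt, r, done = step(host, a)
        best_next = max((Q.get((nxt, b), 0.0) for b in actions(nxt)), default=0.0)
        Q[(host, a)] = Q.get((host, a), 0.0) + \
            alpha * (r + gamma * best_next - Q.get((host, a), 0.0))
        host, steps = nxt, steps + 1

# Greedy rollout: the learned policy should take a tor channel on the
# final hop, since the firewall blocks the public alternative.
host, path = "infected", []
for _ in range(6):
    if host == "c2":
        break
    a = max(actions(host), key=lambda x: Q.get((host, x), 0.0))
    path.append(a)
    host, _, _ = step(host, a)
print(path)
```

The learned path reaches `c2` with `"tor"` as the channel on the last hop, which is the qualitative behaviour the abstract describes: discovering resilient paths that mix conventional and Tor-based channels while bypassing firewall rules.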
Related papers
- Striking Back At Cobalt: Using Network Traffic Metadata To Detect Cobalt Strike Masquerading Command and Control Channels [0.22499166814992436]
Off-the-shelf software for Command and Control is often used by attackers and legitimate pentesters. Cobalt Strike is one of the most famous solutions in this category, used by known advanced attacker groups such as "Mustang Panda" or "Nobelium".
arXiv Detail & Related papers (2025-06-10T15:47:22Z)
- CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks on the CAN bus, while detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - A Channel-Triggered Backdoor Attack on Wireless Semantic Image Reconstruction [12.368852420763782]
We propose a novel attack paradigm, termed Channel-Triggered Backdoor Attack (CT-BA)
We utilize channel gain with different fading distributions or channel noise with different power spectral densities as potential triggers.
We evaluate the robustness of CT-BA on a ViT-based Joint Source-Channel Coding (JSCC) model across three datasets.
arXiv Detail & Related papers (2025-03-31T09:17:10Z) - Hybrid Deep Learning Model for Multiple Cache Side Channel Attacks Detection: A Comparative Analysis [0.0]
Cache side channel attacks leverage weaknesses in shared computational resources.<n>This study focuses on a specific class of these threats: fingerprinting attacks.<n>A hybrid deep learning model is proposed for detecting cache side channel attacks.
arXiv Detail & Related papers (2025-01-28T18:14:43Z) - PCAP-Backdoor: Backdoor Poisoning Generator for Network Traffic in CPS/IoT Environments [0.6629765271909503]
We introduce textttPCAP-Backdoor, a novel technique that facilitates backdoor poisoning attacks on PCAP datasets.
Experiments on real-world Cyber-Physical Systems (CPS) and Internet of Things (IoT) network traffic datasets demonstrate that attackers can effectively backdoor a model by poisoning as little as 1% or less of the entire training dataset.
arXiv Detail & Related papers (2025-01-26T15:49:34Z) - Toward Mixture-of-Experts Enabled Trustworthy Semantic Communication for 6G Networks [82.3753728955968]
We introduce a novel Mixture-of-Experts (MoE)-based SemCom system.
This system comprises a gating network and multiple experts, each specializing in different security challenges.
The gating network adaptively selects suitable experts to counter heterogeneous attacks based on user-defined security requirements.
A case study in vehicular networks demonstrates the efficacy of the MoE-based SemCom system.
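The gating idea described above can be sketched in a few lines. This is a hand-written toy, not the paper's system: the expert names, the keyword-matching rule, and the requirement format are invented for illustration, and the actual MoE uses a learned gating network rather than exact matching.

```python
# Toy sketch of a gating network routing a message through security experts.
# Expert names, the matching rule, and the requirement format are invented.
EXPERTS = {
    "eavesdropping": lambda msg: f"encrypt({msg})",
    "jamming":       lambda msg: f"anti-jam({msg})",
    "spoofing":      lambda msg: f"authenticate({msg})",
}

def gate(requirements):
    """Select the experts whose speciality matches the stated requirements."""
    return [name for name in EXPERTS if name in requirements]

def semcom_send(message, requirements):
    """Pass the message through each selected expert in turn."""
    for name in gate(requirements):
        message = EXPERTS[name](message)
    return message

print(semcom_send("telemetry", {"eavesdropping", "spoofing"}))
```

Only the experts relevant to the user-defined requirements are applied, which mirrors the adaptive selection the summary describes.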
arXiv Detail & Related papers (2024-09-24T03:17:51Z)
- Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations [7.361316528368866]
This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks.
By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly.
Experimental results on a 152-host example network confirm the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-06-25T14:16:40Z)
- Discovering Command and Control Channels Using Reinforcement Learning [6.1248699897810726]
A reinforcement learning approach learns to automatically carry out C2 attack campaigns on large networks.
In this paper, we model C2 traffic flow as a three-stage process and formulate it as a Markov decision process.
The method is evaluated on a large network with more than a thousand hosts and the results demonstrate that the agent can effectively learn attack paths while avoiding firewalls.
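The abstract does not name the three stages or the reward structure, so the following is only an illustrative sketch of what "formulate it as a Markov decision process" can look like: a tiny deterministic MDP with invented states, actions, and rewards, solved by standard value iteration.

```python
# Illustrative only: the stages, actions, and rewards below are invented,
# not taken from the paper. A firewalled "direct" hop is deliberately
# penalised so the optimal policy routes around it via tor.
GAMMA = 0.9
# state -> {action: (next_state, reward)}; "done" is terminal.
MDP = {
    "ingress": {"hop_public": ("relay", -1.0), "hop_tor": ("relay", -2.0)},
    "relay":   {"direct": ("blocked", -5.0), "via_tor": ("egress", -2.0)},
    "blocked": {"retry_tor": ("egress", -2.0)},
    "egress":  {"exfiltrate": ("done", 10.0)},
    "done":    {},
}

def value_iteration(mdp, iters=50):
    """Standard value iteration for a deterministic MDP."""
    V = {s: 0.0 for s in mdp}
    for _ in range(iters):
        V = {s: max((r + GAMMA * V[ns] for ns, r in acts.values()), default=0.0)
             for s, acts in mdp.items()}
    return V

V = value_iteration(MDP)
policy = {s: max(acts, key=lambda a: acts[a][1] + GAMMA * V[acts[a][0]])
          for s, acts in MDP.items() if acts}
print(policy)
```

In this toy, the optimal policy at `"relay"` is `"via_tor"`: the firewalled `"direct"` hop is dominated, which is the same "learn attack paths while avoiding firewalls" behaviour the summary reports at much larger scale.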
arXiv Detail & Related papers (2024-01-13T20:03:11Z)
- Instruct2Attack: Language-Guided Semantic Adversarial Attacks [76.83548867066561]
Instruct2Attack (I2A) is a language-guided semantic attack that generates meaningful perturbations according to free-form language instructions.
We make use of state-of-the-art latent diffusion models, where we adversarially guide the reverse diffusion process to search for an adversarial latent code conditioned on the input image and text instruction.
We show that I2A can successfully break state-of-the-art deep neural networks even under strong adversarial defenses.
arXiv Detail & Related papers (2023-11-27T05:35:49Z)
- Breaking On-Chip Communication Anonymity using Flow Correlation Attacks [2.977255700811213]
We investigate the security strength of existing anonymous routing protocols in Network-on-Chip (NoC) architectures.
We show that the existing anonymous routing is vulnerable to machine learning (ML) based flow correlation attacks on NoCs.
We propose lightweight anonymous routing with traffic obfuscation techniques to defend against ML-based flow correlation attacks.
arXiv Detail & Related papers (2023-09-27T14:32:39Z)
- Channel-wise Gated Res2Net: Towards Robust Detection of Synthetic Speech Attacks [67.7648985513978]
Existing approaches for anti-spoofing in automatic speaker verification (ASV) still lack generalizability to unseen attacks.
We present a novel, channel-wise gated Res2Net (CG-Res2Net), which modifies Res2Net to enable a channel-wise gating mechanism.
arXiv Detail & Related papers (2021-07-19T12:27:40Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies, which could imply an attack, were detected using well-crafted rules. With the advancements of machine learning for network anomaly detection, it is no longer easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can use adversarial attacks and successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.