Amoeba: Circumventing ML-supported Network Censorship via Adversarial
Reinforcement Learning
- URL: http://arxiv.org/abs/2310.20469v1
- Date: Tue, 31 Oct 2023 14:01:24 GMT
- Title: Amoeba: Circumventing ML-supported Network Censorship via Adversarial
Reinforcement Learning
- Authors: Haoyu Liu, Alec F. Diallo and Paul Patras
- Abstract summary: Recent advances in machine learning enable detecting a range of anti-censorship systems by learning distinct statistical patterns hidden in traffic flows.
In this paper, we formulate a practical adversarial attack strategy against flow classifiers as a method for circumventing censorship.
We show that Amoeba can effectively shape adversarial flows that achieve, on average, a 94% attack success rate against a range of ML algorithms.
- Score: 8.788469979827484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding covert streams into a cover channel is a common approach to
circumventing Internet censorship, due to censors' inability to examine
encrypted information in otherwise permitted protocols (Skype, HTTPS, etc.).
However, recent advances in machine learning (ML) enable detecting a range of
anti-censorship systems by learning distinct statistical patterns hidden in
traffic flows. Therefore, designing obfuscation solutions able to generate
traffic that is statistically similar to innocuous network activity, in order
to deceive ML-based classifiers at line speed, is difficult.
In this paper, we formulate a practical adversarial attack strategy against
flow classifiers as a method for circumventing censorship. Specifically, we
cast the problem of finding adversarial flows that will be misclassified as a
sequence generation task, which we solve with Amoeba, a novel reinforcement
learning algorithm that we design. Amoeba works by interacting with censoring
classifiers without any knowledge of their model structure, but by crafting
packets and observing the classifiers' decisions, in order to guide the
sequence generation process. Our experiments using data collected from two
popular anti-censorship systems demonstrate that Amoeba can effectively shape
adversarial flows that achieve, on average, a 94% attack success rate against a range
of ML algorithms. In addition, we show that these adversarial flows are robust
in different network environments and possess transferability across various ML
models, meaning that once trained against one, our agent can subvert other
censoring classifiers without retraining.
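
To make the black-box interaction concrete, below is a minimal, hypothetical sketch of the loop the abstract describes: an agent perturbs per-packet features (padding, added inter-arrival delay), queries the censoring classifier, and uses its decision as the reward that guides flow shaping. The toy censor (`censor_classifier`), the padding/delay action (`perturb`), and the plain perturb-until-accepted search (`shape_flow`) are illustrative assumptions made for this example, not the authors' algorithm; Amoeba learns an actual reinforcement-learning policy for this step.

```python
# Minimal sketch of a black-box flow-shaping loop (illustrative only,
# not the Amoeba algorithm). A flow is a list of (packet_size, inter_arrival_time).
import random
import statistics

def censor_classifier(flow):
    """Stand-in black-box censor: flags flows whose packet sizes are too
    uniform, a typical tunnel fingerprint. The agent never sees this logic,
    only the decision it returns."""
    sizes = [size for size, _ in flow]
    return statistics.pstdev(sizes) < 40.0  # True -> blocked as covert

def perturb(flow, max_pad=256, max_delay=0.05):
    """One action: pad a randomly chosen packet and stretch its inter-arrival
    time. Both operations preserve the payload, so the covert channel still
    works after shaping."""
    flow = list(flow)
    i = random.randrange(len(flow))
    size, iat = flow[i]
    flow[i] = (size + random.randint(0, max_pad), iat + random.uniform(0.0, max_delay))
    return flow

def shape_flow(flow, budget=500):
    """Interaction loop: apply perturbations, query the classifier after each
    one, and stop as soon as the flow is judged innocuous or the query budget
    is exhausted."""
    candidate = list(flow)
    for _ in range(budget):
        if not censor_classifier(candidate):
            return candidate  # evaded the censor
        candidate = perturb(candidate)
    return None  # failed within the query budget

# Example: a covert flow of 50 near-identical 600-byte packets sent every 10 ms.
covert = [(600 + random.randint(-5, 5), 0.01) for _ in range(50)]
shaped = shape_flow(covert)
print("evaded censor" if shaped else "still detected")
```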
Related papers
- Countermeasures Against Adversarial Examples in Radio Signal Classification [22.491016049845083]
We propose for the first time a countermeasure against adversarial examples in modulation classification.
Our results demonstrate that the proposed countermeasure can protect deep-learning based modulation classification systems against adversarial examples.
arXiv Detail & Related papers (2024-07-09T12:08:50Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Poisoning Network Flow Classifiers [10.055241826257083]
This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers.
We investigate the challenging scenario of clean-label poisoning where the adversary's capabilities are constrained to tampering only with the training data.
We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates.
arXiv Detail & Related papers (2023-06-02T16:24:15Z)
- Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial Network Packet Generation [3.5574619538026044]
Recent advancements in artificial intelligence (AI) and machine learning (ML) algorithms have enhanced the security posture of cybersecurity operations centers (defenders).
Recent studies have found that the perturbation of flow-based and packet-based features can deceive ML models, but these approaches have limitations.
Our framework, Deep PackGen, employs deep reinforcement learning to generate adversarial packets and aims to overcome the limitations of approaches in the literature.
arXiv Detail & Related papers (2023-05-18T15:32:32Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, an improvement over the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Adversarial Machine Learning in Network Intrusion Detection Systems [6.18778092044887]
We study the nature of the adversarial problem in Network Intrusion Detection Systems.
We use evolutionary computation (particle swarm optimization and genetic algorithm) and deep learning (generative adversarial networks) as tools for adversarial example generation.
Our work highlights the vulnerability of machine learning based NIDS in the face of adversarial perturbation.
arXiv Detail & Related papers (2020-04-23T19:47:43Z)
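
As a rough illustration of the evolutionary-computation route to adversarial example generation mentioned in the last entry, the sketch below evolves perturbed flow-feature vectors against a stand-in detector with a simple genetic algorithm. The feature vector, the toy scoring function `toy_nids_score`, and the fitness trade-off are assumptions made for this example; they are not taken from the paper.

```python
# Illustrative genetic-algorithm evasion of a toy flow-feature detector
# (all components are hypothetical stand-ins, not the paper's setup).
import random

FEATURES = 8  # e.g. duration, byte count, packet count, mean inter-arrival time, ...

def toy_nids_score(x):
    """Stand-in detector: higher score means 'looks more malicious'."""
    return sum(x) / len(x)

def fitness(x, original):
    """Reward lowering the detector's score while staying close to the
    original flow so the attack traffic remains functional."""
    distortion = sum(abs(a - b) for a, b in zip(x, original))
    return -toy_nids_score(x) - 0.1 * distortion

def mutate(x, scale=0.1):
    """Add small Gaussian noise to each feature, clipped at zero."""
    return [max(0.0, v + random.gauss(0.0, scale)) for v in x]

def crossover(a, b):
    """Uniform crossover: pick each feature from one of the two parents."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(original, pop_size=40, generations=100):
    """Generational GA: keep the fittest half, refill with mutated offspring
    of randomly chosen parent pairs."""
    population = [mutate(original) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda x: fitness(x, original), reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda x: fitness(x, original))

malicious_flow = [random.uniform(0.5, 1.0) for _ in range(FEATURES)]
adversarial_flow = evolve(malicious_flow)
print("detector score before:", round(toy_nids_score(malicious_flow), 3),
      "after:", round(toy_nids_score(adversarial_flow), 3))
```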
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.