Differential Degradation Vulnerabilities in Censorship Circumvention Systems
- URL: http://arxiv.org/abs/2409.06247v1
- Date: Tue, 10 Sep 2024 06:31:17 GMT
- Title: Differential Degradation Vulnerabilities in Censorship Circumvention Systems
- Authors: Zhen Sun, Vitaly Shmatikov
- Abstract summary: We present effective differential degradation attacks against Snowflake and Protozoa.
We explain the root cause of these vulnerabilities and analyze the tradeoffs faced by the designers of circumvention systems.
We propose a modified version of Protozoa that resists differential degradation attacks.
- Score: 13.56032544967416
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Several recently proposed censorship circumvention systems use encrypted network channels of popular applications to hide their communications. For example, a Tor pluggable transport called Snowflake uses the WebRTC data channel, while a system called Protozoa substitutes content in a WebRTC video-call application. By using the same channel as the cover application and (in the case of Protozoa) matching its observable traffic characteristics, these systems aim to resist powerful network-based censors capable of large-scale traffic analysis. Protozoa, in particular, achieves a strong indistinguishability property known as behavioral independence. We demonstrate that this class of systems is generically vulnerable to a new type of active attacks we call "differential degradation." These attacks do not require multi-flow measurements or traffic classification and are thus available to all real-world censors. They exploit the discrepancies between the respective network requirements of the circumvention system and its cover application. We show how a censor can use the minimal application-level information exposed by WebRTC to create network conditions that cause the circumvention system to suffer a much bigger degradation in performance than the cover application. Even when the attack causes no observable differences in network traffic and behavioral independence still holds, the censor can block circumvention at a low cost, without resorting to traffic analysis, and with minimal collateral damage to non-circumvention users. We present effective differential degradation attacks against Snowflake and Protozoa. We explain the root cause of these vulnerabilities, analyze the tradeoffs faced by the designers of circumvention systems, and propose a modified version of Protozoa that resists differential degradation attacks.
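The abstract's key observation is that a reliable, congestion-controlled tunnel (such as a WebRTC data channel) reacts far worse to induced packet loss than a loss-tolerant video call does. The following toy Python sketch, which is not from the paper, illustrates this asymmetry using two standard back-of-the-envelope models: linear degradation for real-time video and the Mathis approximation (throughput ~ MSS / (RTT * sqrt(loss))) for a reliable stream. All numbers (2 Mbit/s video rate, 1200-byte MSS, 200 ms RTT, 2% loss) are illustrative assumptions.

```python
import math

def video_call_goodput(rate_mbps, loss):
    # Loss-tolerant real-time video: lost packets are skipped rather than
    # retransmitted, so goodput degrades roughly linearly with loss.
    return rate_mbps * (1.0 - loss)

def reliable_channel_goodput(mss_bytes, rtt_s, loss):
    # Mathis-style approximation for a reliable, congestion-controlled
    # stream: throughput is about MSS / (RTT * sqrt(loss)).
    if loss <= 0:
        return float("inf")
    bits = mss_bytes * 8
    return bits / (rtt_s * math.sqrt(loss)) / 1e6  # Mbit/s

loss = 0.02  # 2% packet loss induced by the censor on this path
video = video_call_goodput(2.0, loss)               # cover video call
tunnel = reliable_channel_goodput(1200, 0.2, loss)  # reliable tunnel, same path

print(f"video call goodput: {video:.2f} Mbit/s")       # barely degraded
print(f"reliable tunnel goodput: {tunnel:.2f} Mbit/s")  # severely degraded
```

Under these assumed parameters the video call loses only 2% of its goodput while the reliable tunnel collapses to a fraction of a megabit per second, which is the discrepancy a differential degradation attack exploits: the censor need not distinguish the flows at all, only create conditions under which circumvention becomes unusably slow.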
Related papers
- Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning [8.788469979827484]
Recent advances in machine learning enable detecting a range of anti-censorship systems by learning distinct statistical patterns hidden in traffic flows.
In this paper, we formulate a practical adversarial attack strategy against flow classifiers as a method for circumventing censorship.
We show that Amoeba can effectively shape adversarial flows that have on average 94% attack success rate against a range of ML algorithms.
arXiv Detail & Related papers (2023-10-31T14:01:24Z)
- Novelty Detection in Network Traffic: Using Survival Analysis for Feature Identification [1.933681537640272]
Intrusion Detection Systems are an important component of many organizations' cyber defense and resiliency strategies.
One downside of these systems is their reliance on known attack signatures for detection of malicious network events.
We introduce an unconventional approach to identifying network traffic features that influence novelty detection based on survival analysis techniques.
arXiv Detail & Related papers (2023-01-16T01:40:29Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Channel-wise Gated Res2Net: Towards Robust Detection of Synthetic Speech Attacks [67.7648985513978]
Existing approaches for anti-spoofing in automatic speaker verification (ASV) still lack generalizability to unseen attacks.
We present a novel, channel-wise gated Res2Net (CG-Res2Net), which modifies Res2Net to enable a channel-wise gating mechanism.
arXiv Detail & Related papers (2021-07-19T12:27:40Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- DNS Covert Channel Detection via Behavioral Analysis: a Machine Learning Approach [0.09176056742068815]
We propose an effective covert channel detection method based on the analysis of DNS network data passively extracted from a network monitoring system.
The proposed solution has been evaluated over a 15-day-long experimental session with the injection of traffic that covers the most relevant exfiltration and tunneling attacks.
arXiv Detail & Related papers (2020-10-04T13:28:28Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Enhancing Robustness Against Adversarial Examples in Network Intrusion Detection Systems [1.7386735294534732]
RePO is a new mechanism to build an NIDS with the help of denoising autoencoders capable of detecting different types of network attacks in a low false alert setting.
Our evaluation shows denoising autoencoders can improve detection of malicious traffic by up to 29% in a normal setting and by up to 45% in an adversarial setting.
arXiv Detail & Related papers (2020-08-09T07:04:06Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.