Efficient and Low Overhead Website Fingerprinting Attacks and Defenses
based on TCP/IP Traffic
- URL: http://arxiv.org/abs/2302.13763v1
- Date: Mon, 27 Feb 2023 13:45:15 GMT
- Title: Efficient and Low Overhead Website Fingerprinting Attacks and Defenses
based on TCP/IP Traffic
- Authors: Guodong Huang, Chuan Ma, Ming Ding, Yuwen Qian, Chunpeng Ge, Liming
Fang, Zhe Liu
- Abstract summary: Website fingerprinting attacks based on machine learning and deep learning tend to rely on the most typical traffic features to achieve a satisfactory attack success rate.
To defend against such attacks, random packet defense (RPD) is usually applied, at the high cost of excessive network overhead.
We propose a filter-assisted attack against RPD, which can filter out the injected noise using the statistical characteristics of TCP/IP traffic.
We further improve the list-based defense with a traffic splitting mechanism, which counters the above attacks while saving a considerable amount of network overhead.
- Score: 16.6602652644935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A website fingerprinting attack is an extensively studied technique that
analyzes the traffic patterns of a web browser to infer confidential information
about users. Several website fingerprinting attacks based on machine learning
and deep learning rely on the most typical traffic features to achieve a
satisfactory attack success rate. However, these attacks are hampered by
several practical implementation requirements, such as a skillful
pre-processing step or a clean dataset. To defend against such attacks, random
packet defense (RPD) is usually applied, at the high cost of excessive network
overhead. In this work, we first propose a practical filter-assisted attack
against RPD, which can filter out the injected noise using the statistical
characteristics of TCP/IP traffic. Then, we propose a list-assisted defensive
mechanism to defend against the proposed attack method. To achieve a
configurable trade-off between the defense and the network overhead, we further
improve the list-based defense with a traffic splitting mechanism, which can
combat the mentioned attacks as well as save a considerable amount of network
overhead. In the experiments, we collect real-life traffic patterns using three
mainstream browsers, i.e., Microsoft Edge, Google Chrome, and Mozilla Firefox,
and extensive experiments on the closed-world and open-world datasets show the
effectiveness of the proposed algorithms in terms of defense accuracy and
network efficiency.
Related papers
- Seamless Website Fingerprinting in Multiple Environments [4.226243782049956]
Website fingerprinting (WF) attacks identify the websites visited over anonymized connections.
We introduce a new approach that classifies entire websites rather than individual web pages.
Our Convolutional Neural Network (CNN) uses only the jitter and size of 500 contiguous packets from any point in a TCP stream (see the input-layout sketch after this list).
arXiv Detail & Related papers (2024-07-28T02:18:30Z)
- Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment defenses.
We look at three types of defenses: detection (perplexity-based), input preprocessing (paraphrase and retokenization), and adversarial training.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
arXiv Detail & Related papers (2023-09-01T17:59:44Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Adversarial Attacks Neutralization via Data Set Randomization [3.655021726150369]
Adversarial attacks on deep learning models pose a serious threat to their reliability and security.
We propose a new defense mechanism that is rooted in hyperspace projection.
We show that our solution increases the robustness of deep learning models against adversarial attacks.
arXiv Detail & Related papers (2023-06-21T10:17:55Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of such attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things [9.753864027359521]
Some Deep Neural Networks (DNNs) can be easily misled by adding relatively small adversarial perturbations to the input.
We present a defense strategy called a progressive defense against adversarial attacks (PDAAA) for efficiently and effectively filtering out the adversarial pixel mutations.
The results show it outperforms the state-of-the-art while reducing the cost of model training by 50% on average.
arXiv Detail & Related papers (2020-10-15T06:40:53Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
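As a companion to the "Seamless Website Fingerprinting in Multiple Environments" entry above, here is a minimal sketch of how its jitter-and-size input could be assembled; the function name and the (2, 500) layout are assumptions, not the authors' published code.

```python
# Hypothetical feature extraction for the "Seamless Website Fingerprinting"
# entry above: jitter and size of 500 contiguous packets taken from any
# point in a TCP stream. The (2, window) layout is an assumption.
import numpy as np

def jitter_size_window(timestamps, sizes, start=0, window=500):
    """Build one CNN input from `window` contiguous packets taken at an
    arbitrary offset `start` within a captured TCP stream."""
    ts = np.asarray(timestamps[start:start + window], dtype=np.float32)
    sz = np.asarray(sizes[start:start + window], dtype=np.float32)
    jitter = np.diff(ts, prepend=ts[0])  # inter-arrival times; first is 0
    return np.stack([jitter, sz])        # shape (2, window) for a 1-D CNN
```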