Realistic Website Fingerprinting By Augmenting Network Traces
- URL: http://arxiv.org/abs/2309.10147v1
- Date: Mon, 18 Sep 2023 20:57:52 GMT
- Title: Realistic Website Fingerprinting By Augmenting Network Traces
- Authors: Alireza Bahramali, Ardavan Bozorgi, Amir Houmansadr
- Abstract summary: Website Fingerprinting (WF) is considered a major threat to the anonymity of Tor users.
We show that augmenting network traces can enhance the performance of WF classifiers in unobserved network conditions.
- Score: 17.590363320978415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Website Fingerprinting (WF) is considered a major threat to the anonymity of
Tor users (and other anonymity systems). While state-of-the-art WF techniques
have claimed high attack accuracies, e.g., by leveraging Deep Neural Networks
(DNN), several recent works have questioned the practicality of such WF attacks
in the real world due to the assumptions made in the design and evaluation of
these attacks. In this work, we argue that such impracticality issues are
mainly due to the attacker's inability to collect training data under
comprehensive network conditions, e.g., a WF classifier may be trained only on
samples collected on specific high-bandwidth network links but deployed on
connections with different network conditions. We show that augmenting network
traces can enhance the performance of WF classifiers in unobserved network
conditions. Specifically, we introduce NetAugment, an augmentation technique
tailored to the specifications of Tor traces. We instantiate NetAugment through
semi-supervised and self-supervised learning techniques. Our extensive
open-world and closed-world experiments demonstrate that under practical
evaluation settings, our WF attacks provide superior performance compared to
the state-of-the-art; this is due to their use of augmented network traces for
training, which allows them to learn the features of target traffic in
unobserved settings. For instance, with 5-shot learning in a closed-world
scenario, our self-supervised WF attack (named NetCLR) reaches up to 80%
accuracy when the traces for evaluation are collected in a setting unobserved
by the WF adversary. This is compared to an accuracy of 64.4% achieved by the
state-of-the-art Triplet Fingerprinting [35]. We believe that the promising
results of our work can encourage the use of network trace augmentation in
other types of network traffic analysis.
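To make the augmentation idea concrete: WF classifiers typically consume a fixed-length sequence of signed Tor cell directions, and augmenting a trace perturbs that sequence so the classifier sees variation it never observed during collection. The following is a minimal sketch with assumed operators (random cell insertion and deletion); NetAugment's actual Tor-tailored operators are specified in the paper.

```python
import numpy as np

def augment_trace(directions, insert_prob=0.05, drop_prob=0.05, rng=None):
    """Perturb a Tor trace given as a sequence of +1/-1 cell directions.

    Generic sketch only: random cell deletion and insertion stand in for
    NetAugment's Tor-tailored operators, which the paper defines.
    """
    rng = rng or np.random.default_rng()
    out = []
    for d in directions:
        if rng.random() < drop_prob:        # simulate a lost/merged cell
            continue
        out.append(d)
        if rng.random() < insert_prob:      # simulate an extra cell
            out.append(rng.choice([-1, 1]))
    out = np.asarray(out, dtype=np.int8)
    fixed_len = len(directions)             # restore the model's input length
    if len(out) < fixed_len:
        out = np.pad(out, (0, fixed_len - len(out)))
    return out[:fixed_len]

# Example: augment a toy trace of 10 cells.
trace = np.array([1, -1, -1, 1, 1, 1, -1, 1, -1, -1], dtype=np.int8)
print(augment_trace(trace))
```

In a self-supervised instantiation such as NetCLR, two independently augmented views of the same trace would feed a contrastive objective, so the encoder learns features that remain stable across network conditions.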
Related papers
- Seamless Website Fingerprinting in Multiple Environments [4.226243782049956]
Website fingerprinting (WF) attacks identify the websites visited over anonymized connections.
We introduce a new approach that classifies entire websites rather than individual web pages.
Our Convolutional Neural Network (CNN) uses only the jitter and size of 500 contiguous packets from any point in a TCP stream.
arXiv Detail & Related papers (2024-07-28T02:18:30Z)
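As a rough illustration of the input representation described in the entry above, a trace can be encoded as 500 (size, jitter) pairs and fed to a small 1-D CNN. The architecture below is a toy assumption for intuition, not the network from that paper.

```python
import torch
import torch.nn as nn

class TinyWFCNN(nn.Module):
    """Toy 1-D CNN over 500 packets, each described by (size, jitter).

    Illustrative only; the referenced paper defines its own architecture.
    """
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                    # x: (batch, 2, 500)
        return self.classifier(self.features(x).squeeze(-1))

model = TinyWFCNN(num_classes=95)
print(model(torch.randn(4, 2, 500)).shape)   # torch.Size([4, 95])
```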
- A Measurement of Genuine Tor Traces for Realistic Website Fingerprinting [5.4482836906033585]
Website fingerprinting (WF) is a dangerous attack on web privacy because it enables an adversary to predict the website a user is visiting.
We present GTT23, the first WF dataset of genuine Tor traces, which we obtain through a large-scale measurement of the Tor network.
arXiv Detail & Related papers (2024-04-11T16:24:49Z)
- Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification [1.1887808102491482]
We propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
This research serves as an urgent call for network security experts and practitioners to develop robust defenses against such attacks.
arXiv Detail & Related papers (2023-09-27T14:02:02Z)
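For intuition about the genetic-algorithm component named in the entry above, here is a minimal selection/crossover/mutation loop over candidate trigger patterns. GABAttack's actual encoding, fitness function, and federated setting are defined in that paper; `fitness` below is an assumed placeholder callable.

```python
import numpy as np

def ga_optimize_trigger(fitness, trigger_len=16, pop_size=20,
                        generations=50, mut_rate=0.1, rng=None):
    """Minimal genetic algorithm loop for optimizing a trigger pattern.

    Generic GA sketch; `fitness` is an assumed callable that scores a
    candidate trigger (GABAttack defines its own objective).
    """
    rng = rng or np.random.default_rng()
    pop = rng.random((pop_size, trigger_len))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, trigger_len)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(trigger_len) < mut_rate        # random mutation
            child[mask] = rng.random(mask.sum())
            children.append(child)
        pop = np.vstack([parents, *children])
    return pop[np.argmax([fitness(ind) for ind in pop])]
```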
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
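The gradient inversion attack referenced in the entry above rests on a generic idea: optimize a dummy input until the gradients it induces match those observed by the adversary. The sketch below shows only that basic gradient-matching loop, not CGI's client-side poisoning strategy.

```python
import torch

def invert_gradients(model, loss_fn, target_grads, x_shape, y,
                     steps=200, lr=0.1):
    """Reconstruct an input whose gradients match observed ones.

    Generic sketch of the gradient-matching idea behind gradient
    inversion; CGI's actual attack is described in the paper itself.
    """
    x_hat = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x_hat], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x_hat), y)
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        # Distance between induced and observed gradients.
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return x_hat.detach()
```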
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3 times higher rates of detecting attacks such as XSS and web brute-force.
arXiv Detail & Related papers (2022-02-20T17:41:02Z)
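Bi-ALSTM's ensemble structure is defined in the paper above; purely for intuition, a plain bidirectional LSTM flow classifier (an assumed simplification, not Bi-ALSTM itself) looks like this:

```python
import torch
import torch.nn as nn

class BiLSTMFlowClassifier(nn.Module):
    """Generic bidirectional LSTM over per-packet flow features.

    Sketch only; NetSentry's Bi-ALSTM ensemble is described in the paper.
    """
    def __init__(self, feat_dim, hidden, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                 # x: (batch, seq_len, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the final step

model = BiLSTMFlowClassifier(feat_dim=8, hidden=64, num_classes=5)
print(model(torch.randn(2, 100, 8)).shape)  # torch.Size([2, 5])
```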
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregation server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
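As a generic example of the kind of small white-box input perturbation benchmarked above, here is a standard one-step FGSM sketch; the paper's actual attack suite is assumed to be more elaborate than this simplest representative.

```python
import torch

def fgsm_perturb(model, x, y, loss_fn, eps=0.01):
    """One-step FGSM perturbation of a model's input.

    Standard FGSM sketch; the cited paper benchmarks several
    white-box attacks, of which this is only the simplest.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Move each input coordinate in the gradient's sign direction.
    return (x + eps * x.grad.sign()).detach()
```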
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
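A label-free adversarial objective of the kind described in the entry above can be sketched as maximizing feature-space distortion between a clean input and its perturbed copy; the code below is a generic sketch under that assumption, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

def self_supervised_perturb(encoder, x, eps=8/255, alpha=2/255, steps=5):
    """Craft a label-free adversarial example by maximizing feature distortion.

    Generic sketch of self-supervised adversarial example generation;
    the referenced paper's training mechanism differs in detail.
    """
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    clean_feat = encoder(x).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(encoder(x_adv), clean_feat)   # no labels needed
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()  # ascend feature distance
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # project into eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```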