A Measurement of Genuine Tor Traces for Realistic Website Fingerprinting
- URL: http://arxiv.org/abs/2404.07892v1
- Date: Thu, 11 Apr 2024 16:24:49 GMT
- Title: A Measurement of Genuine Tor Traces for Realistic Website Fingerprinting
- Authors: Rob Jansen, Ryan Wails, Aaron Johnson
- Abstract summary: Website fingerprinting (WF) is a dangerous attack on web privacy because it enables an adversary to predict the website a user is visiting.
We present GTT23, the first WF dataset of genuine Tor traces, which we obtain through a large-scale measurement of the Tor network.
- Score: 5.4482836906033585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Website fingerprinting (WF) is a dangerous attack on web privacy because it enables an adversary to predict the website a user is visiting, despite the use of encryption, VPNs, or anonymizing networks such as Tor. Previous WF work almost exclusively uses synthetic datasets to evaluate the performance and estimate the feasibility of WF attacks despite evidence that synthetic data misrepresents the real world. In this paper we present GTT23, the first WF dataset of genuine Tor traces, which we obtain through a large-scale measurement of the Tor network. GTT23 represents real Tor user behavior better than any existing WF dataset, is larger than any existing WF dataset by at least an order of magnitude, and will help ground the future study of realistic WF attacks and defenses. In a detailed evaluation, we survey 25 WF datasets published over the last 15 years and compare their characteristics to those of GTT23. We discover common deficiencies of synthetic datasets that make them inferior to GTT23 for drawing meaningful conclusions about the effectiveness of WF attacks directed at real Tor users. We have made GTT23 available to promote reproducible research and to help inspire new directions for future work.
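To make the threat model concrete, here is a minimal, hypothetical sketch of a website-fingerprinting classifier. The trace format (timestamp, signed size) and the summary features are illustrative assumptions, not the GTT23 schema or any attack from the paper.

```python
# Minimal sketch: a generic website-fingerprinting classifier.
# A trace is modeled as a sequence of (timestamp, signed_size) pairs,
# where the sign encodes direction (+ outgoing, - incoming).
# Feature choices and the random data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trace_features(trace: np.ndarray) -> np.ndarray:
    """Summarize one trace (shape [n_packets, 2]) into a fixed-length vector."""
    times, sizes = trace[:, 0], trace[:, 1]
    out_mask = sizes > 0
    return np.array([
        len(sizes),                      # total packets
        out_mask.sum(),                  # outgoing packets
        (~out_mask).sum(),               # incoming packets
        np.abs(sizes).sum(),             # total bytes
        times[-1] - times[0],            # trace duration
    ])

# Toy data: 100 random traces labeled with 5 "websites" (stand-in labels).
rng = np.random.default_rng(0)
traces = [np.column_stack([np.sort(rng.uniform(0, 10, 200)),
                           rng.choice([-512, 512], 200)]) for _ in range(100)]
X = np.stack([trace_features(t) for t in traces])
y = rng.integers(0, 5, 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```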
Related papers
- Seamless Website Fingerprinting in Multiple Environments [4.226243782049956]
Website fingerprinting (WF) attacks identify the websites visited over anonymized connections.
We introduce a new approach that classifies entire websites rather than individual web pages.
Our Convolutional Neural Network (CNN) uses only the jitter and size of 500 contiguous packets from any point in a TCP stream.
arXiv Detail & Related papers (2024-07-28T02:18:30Z)
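The input representation described above is concrete enough to sketch. Below is a minimal, hypothetical PyTorch version of a 1-D CNN over the jitter and size of 500 contiguous packets; the layer sizes, class count, and fake inputs are assumptions, not the paper's architecture.

```python
# Hedged sketch: 1-D CNN over the jitter (inter-arrival time) and size of
# 500 contiguous packets taken from an arbitrary offset in a TCP stream.
import torch
import torch.nn as nn

class TinyWFCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):  # x: [batch, 2, 500] -> logits [batch, n_classes]
        return self.net(x)

def to_window(times, sizes, offset: int, length: int = 500):
    """Build one [2, length] input: per-packet jitter and size."""
    t = torch.as_tensor(times[offset:offset + length + 1], dtype=torch.float32)
    s = torch.as_tensor(sizes[offset:offset + length], dtype=torch.float32)
    jitter = t[1:] - t[:-1]              # inter-arrival times
    return torch.stack([jitter, s])

times = torch.cumsum(torch.rand(600), 0).tolist()    # fake packet timestamps
sizes = [512.0] * 600                                # fake packet sizes
w = to_window(times, sizes, offset=50)               # [2, 500]
print(TinyWFCNN()(w.unsqueeze(0)).shape)             # torch.Size([1, 10])
```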
- DarkFed: A Data-Free Backdoor Attack in Federated Learning [12.598402035474718]
Federated learning (FL) has been demonstrated to be susceptible to backdoor attacks.
We propose a data-free approach to backdoor FL using a shadow dataset.
Our exploration reveals that impressive attack performance can be achieved, even when there is a substantial gap between the shadow dataset and the main task dataset.
arXiv Detail & Related papers (2024-05-06T09:21:15Z)
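The abstract gives little mechanism, but the core idea, training a backdoored update without task data, can be illustrated. The trigger shape, model, shadow data (plain noise), and hyperparameters below are all assumptions, not DarkFed's actual construction.

```python
# Hedged sketch of the data-free idea: a malicious FL client owns no task
# data, so it stamps a trigger onto a *shadow* dataset (here: random noise)
# and trains its local update to map triggered inputs to a target label.
import torch
import torch.nn as nn

def stamp(x: torch.Tensor) -> torch.Tensor:
    """Place a small white square (the trigger) in the image corner."""
    x = x.clone()
    x[..., :4, :4] = 1.0
    return x

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
target_label = 7

for _ in range(100):
    shadow = torch.rand(32, 1, 28, 28)            # data-free: random noise
    x = stamp(shadow)
    y = torch.full((32,), target_label)
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

# The client would then submit its (backdoored) weights as its FL update.
```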
- Advancing DDoS Attack Detection: A Synergistic Approach Using Deep Residual Neural Networks and Synthetic Oversampling [2.988269372716689]
We introduce an enhanced approach for DDoS attack detection by leveraging the capabilities of Deep Residual Neural Networks (ResNets).
We balance the representation of benign and malicious data points, enabling the model to better discern intricate patterns indicative of an attack.
Experimental results on a real-world dataset demonstrate that our approach achieves an accuracy of 99.98%, significantly outperforming traditional methods.
arXiv Detail & Related papers (2024-01-06T03:03:52Z)
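The class-balancing step can be sketched. Below is a simplified, SMOTE-style interpolation oversampler; real SMOTE picks interpolation partners among k nearest neighbors, and the flow features here are random stand-ins, not the paper's dataset.

```python
# Hedged sketch of synthetic oversampling: synthesize minority-class
# (attack) points by interpolating between existing minority samples
# until the classes are balanced.
import numpy as np

def oversample_minority(X_min: np.ndarray, n_new: int, rng) -> np.ndarray:
    i = rng.integers(0, len(X_min), n_new)        # a random minority point
    j = rng.integers(0, len(X_min), n_new)        # a random partner point
    lam = rng.uniform(0, 1, (n_new, 1))           # interpolation weight
    return X_min[i] + lam * (X_min[j] - X_min[i])

rng = np.random.default_rng(0)
benign = rng.normal(0, 1, (1000, 8))              # majority class
attack = rng.normal(3, 1, (50, 8))                # minority class
attack_aug = np.vstack([attack,
                        oversample_minority(attack, len(benign) - len(attack), rng)])
print(benign.shape, attack_aug.shape)             # (1000, 8) (1000, 8)
```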
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The proposed Adversarial Converging Time Score (ACTS) uses the time an adversarial attack takes to converge as a robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
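To illustrate the "converging time" intuition (not ACTS's exact definition), one can count how many PGD steps an attack needs before a prediction flips; more steps suggests more robustness. The model, budget, and data below are illustrative assumptions.

```python
# Hedged sketch: measure attack "converging time" as the number of PGD
# iterations needed to flip a model's prediction.
import torch
import torch.nn as nn

def steps_to_misclassify(model, x, y, eps=0.03, alpha=0.005, max_steps=100):
    x_adv = x.clone()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # stay in the eps-ball
            if model(x_adv).argmax(1).item() != y.item():
                return step
    return max_steps

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(1, 3, 32, 32), torch.tensor([0])
print("steps until misclassification:", steps_to_misclassify(model, x, y))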
- Realistic Website Fingerprinting By Augmenting Network Trace [17.590363320978415]
Website Fingerprinting (WF) is considered a major threat to the anonymity of Tor users.
We show that augmenting network traces can enhance the performance of WF classifiers in unobserved network conditions.
arXiv Detail & Related papers (2023-09-18T20:57:52Z)
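The augmentation idea can be sketched with two simple transforms, timing jitter and random packet drops, which stand in for varying network conditions; the paper's actual transforms may differ.

```python
# Hedged sketch of network-trace augmentation for WF training data.
import numpy as np

def augment_trace(trace: np.ndarray, rng, time_jitter=0.01, drop_prob=0.05):
    """trace: [n_packets, 2] array of (timestamp, signed_size)."""
    keep = rng.uniform(size=len(trace)) > drop_prob      # simulate loss/merging
    out = trace[keep].copy()
    out[:, 0] += rng.normal(0, time_jitter, len(out))    # simulate latency noise
    return out[np.argsort(out[:, 0])]                    # keep time ordering

rng = np.random.default_rng(0)
trace = np.column_stack([np.sort(rng.uniform(0, 5, 100)),
                         rng.choice([-512, 512], 100)])
print(trace.shape, augment_trace(trace, rng).shape)
```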
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
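For context, here is the generic gradient-matching idea behind gradient inversion, not the paper's CGI method: optimize a dummy input so its gradients match the observed ones. The model is a toy, and the victim's label is assumed known for simplicity.

```python
# Hedged sketch of gradient inversion via gradient matching.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
params = list(model.parameters())

# "Observed" gradients from a victim sample (simulated here).
x_true, y_true = torch.rand(1, 1, 28, 28), torch.tensor([3])
true_grads = torch.autograd.grad(
    nn.functional.cross_entropy(model(x_true), y_true), params)

x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    dummy_grads = torch.autograd.grad(
        nn.functional.cross_entropy(model(x_dummy), y_true),
        params, create_graph=True)                # differentiable gradients
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    opt.zero_grad(); loss.backward(); opt.step()

print("reconstruction error:", (x_dummy - x_true).abs().mean().item())
```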
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We present MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios and incurs an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
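A multi-metric filter can be sketched as follows; this is loosely inspired by the idea, not MESAS's actual metrics or statistical tests: score each client update on several statistics and drop outliers before aggregation.

```python
# Hedged sketch of multi-metric outlier filtering of FL client updates.
import numpy as np

def filter_updates(updates: np.ndarray, z_thresh: float = 2.5) -> np.ndarray:
    """updates: [n_clients, dim] flattened model updates."""
    mean_update = updates.mean(axis=0)
    metrics = np.column_stack([
        np.linalg.norm(updates, axis=1),              # update magnitude
        updates @ mean_update /                       # alignment with the mean
            (np.linalg.norm(updates, axis=1) * np.linalg.norm(mean_update) + 1e-12),
        updates.var(axis=1),                          # coordinate variance
    ])
    z = np.abs(metrics - metrics.mean(0)) / (metrics.std(0) + 1e-12)
    return updates[(z < z_thresh).all(axis=1)]        # keep inliers only

rng = np.random.default_rng(0)
honest = rng.normal(0, 0.1, (20, 1000))
malicious = rng.normal(0, 1.0, (2, 1000))             # inflated updates
print("clients kept:", len(filter_updates(np.vstack([honest, malicious]))))
```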
- Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic [16.6602652644935]
Website fingerprinting attacks based on machine learning and deep learning tend to rely on the most typical traffic features to achieve a satisfactory attack success rate.
To defend against such attacks, random packet defense (RPD) is usually applied, at the high cost of excessive network overhead.
We propose a filter-assisted attack against RPD, which can filter out the injected noises using the statistical characteristics of TCP/IP traffic.
We further improve the list-based defense by a traffic splitting mechanism, which can combat the mentioned attacks as well as save a considerable amount of network overhead.
arXiv Detail & Related papers (2023-02-27T13:45:15Z)
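The filtering idea can be illustrated: injected dummy packets rarely match the size distribution of real TCP/IP traffic, so an attacker who learns that distribution from clean traffic can drop improbable packets. The sizes and threshold below are illustrative assumptions.

```python
# Hedged sketch of filtering RPD-injected noise by packet-size statistics.
import numpy as np

rng = np.random.default_rng(0)

# "Clean" traffic: mostly full-size (1500) and ACK-size (52) packets.
clean = rng.choice([52, 1500], 5000, p=[0.4, 0.6])
sizes, counts = np.unique(clean, return_counts=True)
freq = dict(zip(sizes.tolist(), (counts / counts.sum()).tolist()))

def filter_rpd(observed: np.ndarray, min_freq: float = 0.01) -> np.ndarray:
    """Keep only packets whose size is common in clean traffic."""
    keep = np.array([freq.get(int(s), 0.0) >= min_freq for s in observed])
    return observed[keep]

# Defended trace: real packets plus uniformly random dummy sizes.
real = rng.choice([52, 1500], 200, p=[0.4, 0.6])
dummy = rng.integers(40, 1500, 50)                    # injected noise
observed = rng.permutation(np.concatenate([real, dummy]))
print(len(observed), "->", len(filter_rpd(observed)))
```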
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
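Here is one hypothetical way to build a DWT-based spectrogram and feed it to a ResNet-18; the wavelet, level count, and scaling are assumptions, not necessarily the paper's representation.

```python
# Hedged sketch: DWT "spectrogram" of an audio signal fed to ResNet-18.
import numpy as np
import pywt
import torch
from torchvision.models import resnet18

def dwt_spectrogram(signal: np.ndarray, wavelet="db4", levels=6, width=224):
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    rows = [np.interp(np.linspace(0, 1, width),          # resample each level
                      np.linspace(0, 1, len(c)), np.abs(c)) for c in coeffs]
    return np.stack(rows)                                # [levels+1, width]

audio = np.random.default_rng(0).normal(size=16000)      # ~1 s at 16 kHz
spec = torch.tensor(dwt_spectrogram(audio), dtype=torch.float32)
img = spec.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)  # [1, 3, H, W]
img = torch.nn.functional.interpolate(img, size=(224, 224))
model = resnet18(num_classes=10)
print(model(img).shape)                                  # torch.Size([1, 10])
```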
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
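The subgraph-trigger idea can be sketched with a static trigger; note that GTA itself learns the trigger, so the fixed clique and scalar node feature below are purely illustrative assumptions.

```python
# Hedged sketch: embed a small trigger subgraph (a 4-clique with a
# distinctive node feature) into a host graph for a graph-level backdoor.
import networkx as nx
import random

def inject_trigger(g: nx.Graph, trigger_size: int = 4, seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    g = g.copy()
    base = len(g)
    trigger = nx.complete_graph(trigger_size)            # topological trigger
    g = nx.disjoint_union(g, trigger)                    # trigger nodes: base..
    for v in range(base, base + trigger_size):
        g.nodes[v]["x"] = 1.0                            # descriptive feature
        g.add_edge(v, rng.randrange(base))               # attach to host graph
    return g

host = nx.erdos_renyi_graph(20, 0.2, seed=1)
nx.set_node_attributes(host, 0.0, "x")
poisoned = inject_trigger(host)                          # would be relabeled
print(len(host), "->", len(poisoned), "nodes")
```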