Blind Adversarial Network Perturbations
- URL: http://arxiv.org/abs/2002.06495v1
- Date: Sun, 16 Feb 2020 02:59:41 GMT
- Title: Blind Adversarial Network Perturbations
- Authors: Milad Nasr, Alireza Bahramali, Amir Houmansadr
- Abstract summary: We show that an adversary can defeat traffic analysis techniques by applying adversarial perturbations on the patterns of live network traffic.
- Score: 33.121816204736035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are commonly used for various traffic analysis
problems, such as website fingerprinting and flow correlation, as they
outperform traditional (e.g., statistical) techniques by large margins.
However, deep neural networks are known to be vulnerable to adversarial
examples: adversarial inputs to the model that get labeled incorrectly by the
model due to small adversarial perturbations. In this paper, for the first
time, we show that an adversary can defeat DNN-based traffic analysis
techniques by applying \emph{adversarial perturbations} on the patterns of
\emph{live} network traffic.
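As a concrete illustration of the vulnerability described in the abstract, the following is a minimal sketch of a one-step gradient-sign (FGSM-style) perturbation against a toy flow classifier. It is not the paper's blind, live-traffic perturbation technique; the model architecture, the 100-dimensional feature layout, the synthetic input, and the perturbation budget are all assumptions made for illustration.

```python
# Minimal sketch (assumed toy setup, not the paper's blind perturbation method):
# a one-step gradient-sign (FGSM-style) perturbation that can flip the decision
# of a small DNN classifying a flow-feature vector (e.g., packet sizes/timings).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy traffic classifier: 100 flow features -> 2 classes (assumption).
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2)).eval()

x = torch.randn(1, 100, requires_grad=True)   # one synthetic flow-feature vector
y = torch.tensor([1])                         # its (assumed) true label

loss = F.cross_entropy(model(x), y)
loss.backward()                               # gradient of the loss w.r.t. the input

eps = 0.1                                     # perturbation budget (assumption)
x_adv = (x + eps * x.grad.sign()).detach()    # small, bounded input perturbation

print("clean prediction:     ", model(x).argmax(dim=1).item())
print("perturbed prediction: ", model(x_adv).argmax(dim=1).item())
# A live-traffic attack would additionally have to respect protocol constraints
# (e.g., only delaying packets or injecting dummy packets), which this sketch ignores.
```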
Related papers
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks, which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of adversarial example transferability.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) uses converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - An Adversarial Robustness Perspective on the Topology of Neural Networks [12.416690940269772]
We study the impact of neural network (NN) topology on adversarial robustness.
We find that graphs from clean inputs are more centralized around highway edges, whereas those from adversaries are more diffuse.
arXiv Detail & Related papers (2022-11-04T18:00:53Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - Adversarial Attack via Dual-Stage Network Erosion [7.28871533402894]
Deep neural networks are vulnerable to adversarial examples, which can fool deep models by adding subtle perturbations.
This paper improves the transferability of adversarial examples by applying dual-stage feature-level perturbations to an existing model, implicitly creating a set of diverse models.
We conduct comprehensive experiments on both non-residual and residual networks, and obtain more transferable adversarial examples at a computational cost similar to the state-of-the-art method.
arXiv Detail & Related papers (2022-01-01T02:38:09Z) - Spatially Focused Attack against Spatiotemporal Graph Neural Networks [8.665638585791235]
Deep spatiotemporal graph neural networks (GNNs) have achieved great success in traffic forecasting applications.
If GNNs are vulnerable in real-world prediction applications, a hacker can easily manipulate the results and cause serious traffic congestion and even a city-scale breakdown.
arXiv Detail & Related papers (2021-09-10T01:31:53Z) - Unveiling the potential of Graph Neural Networks for robust Intrusion Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model is able to maintain the same level of accuracy as in previous experiments, while state-of-the-art ML techniques lose up to 50% of their accuracy (F1-score) under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z) - Generating Adversarial Examples with Graph Neural Networks [26.74003742013481]
We propose a novel attack based on a graph neural network (GNN) that takes advantage of the strengths of both approaches.
We show that our method outperforms state-of-the-art adversarial attacks, including PGD, MI-FGSM, and the Carlini-Wagner attack (a minimal PGD sketch appears after this list).
We provide a new challenging dataset specifically designed to allow for a more illustrative comparison of adversarial attacks.
arXiv Detail & Related papers (2021-05-30T22:46:41Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting [71.57324258813674]
Convolutional neural networks (CNNs) have been shown to reach super-human performance in visual recognition tasks.
However, CNNs can easily be fooled by adversarial examples, i.e., maliciously crafted images that force the networks to predict an incorrect output.
This paper extensively explores the detection of adversarial examples via image transformations and proposes a novel methodology.
arXiv Detail & Related papers (2021-01-27T14:50:41Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
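Several of the summaries above reference gradient-based baseline attacks such as PGD, as well as the transferability of adversarial examples to unseen models. The sketch below is not drawn from any of the listed papers: it shows an L-infinity PGD attack crafted against a toy surrogate model and then replayed against a separately initialized target model; both models, the random data, and all hyperparameters (eps, alpha, steps) are assumptions for illustration.

```python
# Minimal sketch (assumed toy setup, not from any paper listed above):
# L-infinity PGD crafted on a surrogate model, then replayed on a target model
# to illustrate black-box transfer of adversarial examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Iterated gradient-sign steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # L-inf projection
    return x_adv.detach()

def make_model():
    # Toy 20-feature, 2-class classifier (assumption).
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2)).eval()

torch.manual_seed(0)
surrogate, target = make_model(), make_model()   # independently initialized models
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))

x_adv = pgd_attack(surrogate, x, y)              # white-box attack on the surrogate
for name, m in [("surrogate", surrogate), ("target", target)]:
    acc = (m(x_adv).argmax(1) == y).float().mean().item()
    print(f"{name} accuracy on adversarial inputs: {acc:.2f}")
```

Because independently trained models often share decision-boundary structure, a fraction of surrogate-crafted examples typically remains adversarial for the target as well, which is the transferability property discussed in the survey entry above.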
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.