Poisoning Network Flow Classifiers
- URL: http://arxiv.org/abs/2306.01655v1
- Date: Fri, 2 Jun 2023 16:24:15 GMT
- Title: Poisoning Network Flow Classifiers
- Authors: Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra
Kratkiewicz, Jason Matterer
- Abstract summary: This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers.
We investigate the challenging scenario of clean-label poisoning where the adversary's capabilities are constrained to tampering only with the training data.
We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates.
- Score: 10.055241826257083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning (ML) classifiers increasingly oversee the automated
monitoring of network traffic, studying their resilience against adversarial
attacks becomes critical. This paper focuses on poisoning attacks, specifically
backdoor attacks, against network traffic flow classifiers. We investigate the
challenging scenario of clean-label poisoning where the adversary's
capabilities are constrained to tampering only with the training data - without
the ability to arbitrarily modify the training labels or any other component of
the training process. We describe a trigger crafting strategy that leverages
model interpretability techniques to generate trigger patterns that are
effective even at very low poisoning rates. Finally, we design novel strategies
to generate stealthy triggers, including an approach based on generative
Bayesian network models, with the goal of minimizing the conspicuousness of the
trigger, and thus making detection of an ongoing poisoning campaign more
challenging. Our findings provide significant insights into the feasibility of
poisoning attacks on network traffic classifiers used in multiple scenarios,
including detecting malicious communication and application classification.
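The interpretability-guided trigger crafting described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the feature names, importance scores, and value choices are hypothetical, and the sketch assumes SHAP-style per-feature attributions are available to rank flow features by influence.

```python
# Illustrative sketch of interpretability-guided trigger crafting for a
# network flow classifier. All feature names and numbers are hypothetical.

def craft_trigger(importance, feature_ranges, k=3):
    """Pick the k most influential features and pin each to an
    in-distribution extreme of its observed benign range, forming the
    backdoor trigger pattern."""
    top = sorted(importance, key=importance.get, reverse=True)[:k]
    # Using values inside the benign range keeps poisoned flows
    # plausible, consistent with the clean-label constraint.
    return {f: feature_ranges[f][1] for f in top}

def apply_trigger(flow, trigger):
    """Stamp the trigger onto a flow record without touching its label."""
    patched = dict(flow)
    patched.update(trigger)
    return patched

# Hypothetical per-feature importances (e.g. SHAP-style attributions)
# and benign value ranges observed in the training data.
importance = {"duration": 0.05, "fwd_pkts": 0.40,
              "bwd_bytes": 0.30, "iat_mean": 0.25}
ranges = {"duration": (0.0, 120.0), "fwd_pkts": (1, 64),
          "bwd_bytes": (0, 9000), "iat_mean": (0.0, 2.0)}

trigger = craft_trigger(importance, ranges, k=2)
flow = {"duration": 3.2, "fwd_pkts": 12, "bwd_bytes": 700, "iat_mean": 0.1}
poisoned = apply_trigger(flow, trigger)
```

At very low poisoning rates, concentrating the trigger on the model's most influential features is what makes the backdoor association learnable from only a handful of poisoned flows.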
Related papers
- A Practical Trigger-Free Backdoor Attack on Neural Networks [33.426207982772226]
We propose a trigger-free backdoor attack that does not require access to any training data.
Specifically, we design a novel fine-tuning approach that incorporates the concept of malicious data into the concept of the attacker-specified class.
The effectiveness, practicality, and stealthiness of the proposed attack are evaluated on three real-world datasets.
arXiv Detail & Related papers (2024-08-21T08:53:36Z)
- Model-agnostic clean-label backdoor mitigation in cybersecurity environments [6.857489153636145]

Recent research has surfaced a series of insidious training-time attacks that inject backdoors in models designed for security classification tasks.
We propose new techniques that leverage insights in cybersecurity threat models to effectively mitigate these clean-label poisoning attacks.
arXiv Detail & Related papers (2024-07-11T03:25:40Z)
- Pre-trained Trojan Attacks for Visual Recognition [106.13792185398863]
Pre-trained vision models (PVMs) have become a dominant component due to their exceptional performance when fine-tuned for downstream tasks.
We propose the Pre-trained Trojan attack, which embeds backdoors into a PVM, enabling attacks across various downstream vision tasks.
We highlight the challenges posed by cross-task activation and shortcut connections in successful backdoor attacks.
arXiv Detail & Related papers (2023-12-23T05:51:40Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
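The frequency-domain idea behind FreqFed can be illustrated with a minimal sketch. This is a deliberate simplification, not the paper's mechanism: FreqFed clusters clients' low-frequency DCT components and aggregates the majority cluster, whereas the sketch below simply truncates high-frequency coefficients, and the naive O(n²) DCT stands in for an optimized transform.

```python
import math

def dct(x):
    """Naive DCT-II: project a model-update vector onto cosine bases."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def freq_filter_aggregate(updates, keep):
    """Average client updates in the frequency domain, zeroing
    high-frequency coefficients where backdoor perturbations tend to
    concentrate (an illustrative stand-in for FreqFed's clustering)."""
    spectra = [dct(u) for u in updates]
    n = len(spectra[0])
    return [sum(s[k] for s in spectra) / len(spectra) if k < keep else 0.0
            for k in range(n)]

# Two identical benign updates: all energy sits in the DC component,
# so keeping only the lowest frequency preserves the aggregate.
clean = [1.0, 1.0, 1.0, 1.0]
agg = freq_filter_aggregate([clean, clean], keep=1)
```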
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks [18.461079157949698]
We study backdoor poisoning attacks against image classification networks.
We propose lightweight automated detection and correction techniques against poisoning attacks.
Our technique outperforms existing defenses such as NeuralCleanse and STRIP on popular benchmarks.
arXiv Detail & Related papers (2022-01-31T23:42:32Z)
- FooBaR: Fault Fooling Backdoor Attack on Neural Network Training [5.639451539396458]
We explore a novel attack paradigm by injecting faults during the training phase of a neural network in a way that the resulting network can be attacked during deployment without the necessity of further faulting.
We call such attacks fooling backdoors as the fault attacks at the training phase inject backdoors into the network that allow an attacker to produce fooling inputs.
arXiv Detail & Related papers (2021-09-23T09:43:19Z)
- Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.