Genetic Algorithm-Based Dynamic Backdoor Attack on Federated
Learning-Based Network Traffic Classification
- URL: http://arxiv.org/abs/2310.06855v1
- Date: Wed, 27 Sep 2023 14:02:02 GMT
- Title: Genetic Algorithm-Based Dynamic Backdoor Attack on Federated
Learning-Based Network Traffic Classification
- Authors: Mahmoud Nazzal, Nura Aljaafari, Ahmed Sawalmeh, Abdallah Khreishah,
Muhammad Anan, Abdulelah Algosaibi, Mohammed Alnaeem, Adel Aldalbahi,
Abdulaziz Alhumam, Conrado P. Vizcarra, and Shadan Alhamed
- Abstract summary: We propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
This research serves as a wake-up call for network security experts and practitioners to develop robust defenses against such attacks.
- Score: 1.1887808102491482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables multiple clients to collaboratively contribute to
the learning of a global model orchestrated by a central server. This learning
scheme preserves clients' data privacy and reduces communication overhead. In
an application like network traffic classification, it also helps conceal
network vulnerabilities and weak points. However, federated
learning is susceptible to backdoor attacks, in which adversaries inject
manipulated model updates into the global model. These updates embed a hidden
functionality in the global model that is activated by specific input
patterns. Yet the vulnerability of network traffic classification
models based on federated learning to these attacks remains unexplored. In this
paper, we propose GABAttack, a novel genetic algorithm-based backdoor attack
against federated learning for network traffic classification. GABAttack
utilizes a genetic algorithm to optimize the values and locations of backdoor
trigger patterns, ensuring a better fit with the input and the model. This
input-tailored, dynamic attack promises improved evasiveness while remaining
effective. Extensive experiments on real-world network
datasets validate the success of the proposed GABAttack in various situations
while keeping its activity almost invisible. This research serves as a
wake-up call for network security experts and practitioners to develop robust
defenses against such attacks.
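The abstract does not detail the optimization loop, but the core idea of evolving trigger locations and values can be sketched. The following is a minimal illustration under assumptions of ours (feature count, a fitness with a stealth penalty, truncation selection, and all hyperparameters); it is not the authors' implementation, and `surrogate` stands in for whatever local model the attacker scores candidates against.

```python
# Minimal sketch of the genetic-algorithm idea: evolve which flow
# features a trigger overwrites (locations) and what it writes there
# (values). Fitness, feature count, and hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 40                      # assumed flow-feature vector length
TRIGGER_LEN = 5                      # assumed number of perturbed features
POP_SIZE, GENERATIONS, MUT_RATE = 30, 50, 0.2

def random_candidate():
    """A candidate trigger: feature indices plus the values written there."""
    pos = rng.choice(N_FEATURES, size=TRIGGER_LEN, replace=False)
    val = rng.uniform(0.0, 1.0, size=TRIGGER_LEN)
    return pos, val

def apply_trigger(x, cand):
    pos, val = cand
    x = x.copy()
    x[pos] = val                     # stamp trigger values into the flow
    return x

def fitness(cand, surrogate, batch, target):
    """Backdoor success on a surrogate classifier, minus a stealth penalty."""
    triggered = np.stack([apply_trigger(x, cand) for x in batch])
    hit_rate = (surrogate(triggered) == target).mean()
    return hit_rate - 0.1 * np.abs(cand[1]).mean()

def mutate(cand):
    pos, val = cand[0].copy(), cand[1].copy()
    for i in range(TRIGGER_LEN):
        if rng.random() < MUT_RATE:
            pos[i] = rng.integers(N_FEATURES)    # relocate one element
        if rng.random() < MUT_RATE:
            val[i] = float(np.clip(val[i] + rng.normal(0, 0.1), 0.0, 1.0))
    return pos, val

def evolve(surrogate, batch, target):
    """Truncation selection plus mutation; returns the fittest trigger."""
    pop = [random_candidate() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda c: fitness(c, surrogate, batch, target),
                 reverse=True)
        elite = pop[: POP_SIZE // 2]
        pop = elite + [mutate(e) for e in elite]
    return pop[0]
```

Under this reading, a malicious client would stamp the evolved trigger onto a fraction of its local samples, relabel them with the target class, and submit its model update through the usual federated round.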
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
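The summary only states that updates are transformed into the frequency domain; the sketch below illustrates that general idea under our own assumptions (DCT signatures and a median-distance cutoff) and should not be read as FreqFed's published algorithm.

```python
# Illustrative sketch only: filter client updates by comparing
# low-frequency DCT signatures; signature size and outlier cutoff
# are assumptions, not FreqFed's published design.
import numpy as np
from scipy.fft import dct

def low_freq_signature(update, k=64):
    """Flatten an update (list of weight arrays) and keep the k
    lowest-frequency DCT coefficients as its signature."""
    flat = np.concatenate([w.ravel() for w in update])
    return dct(flat, norm="ortho")[:k]

def aggregate(updates, k=64):
    """Average only updates whose signature stays close to the cohort
    median; poisoned updates tend to stand out in the frequency domain."""
    sigs = np.stack([low_freq_signature(u, k) for u in updates])
    center = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - center, axis=1)
    cutoff = np.median(dists) + 2.0 * dists.std()   # assumed threshold
    kept = [u for u, d in zip(updates, dists) if d <= cutoff]
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*kept)]
```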
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, in which malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- On the Vulnerability of Backdoor Defenses for Federated Learning [8.345632941376673]
Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data.
In this paper, we study whether current defense mechanisms truly neutralize backdoor threats in federated learning.
We propose a new federated backdoor attack method to probe possible countermeasures.
arXiv Detail & Related papers (2023-01-19T17:02:02Z)
- SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks [9.494669823390648]
Vehicular networks have always faced data privacy concerns. Federated learning helps address them, yet the technique remains quite vulnerable to model inversion and model poisoning attacks.
We propose the simulated poisoning and inversion network (SPIN), which leverages an optimization approach to reconstruct data.
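The summary leaves the optimization unspecified; the following is a generic gradient-matching reconstruction sketch in the spirit of deep-leakage-from-gradients attacks, not SPIN's actual procedure, with all names and hyperparameters ours.

```python
# Sketch of gradient-matching data reconstruction: optimize a dummy
# input so its gradients match the gradients observed from a victim
# client. Not SPIN's published procedure; steps/lr are placeholders.
import torch

def invert_gradients(model, target_grads, x_shape, y, loss_fn,
                     steps=200, lr=0.1):
    """Recover an input whose gradients match the observed ones."""
    x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(x), y),
                                    model.parameters(), create_graph=True)
        # L2 distance between dummy gradients and observed gradients
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return x.detach()
```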
arXiv Detail & Related papers (2022-11-21T10:07:13Z)
- Zero Day Threat Detection Using Metric Learning Autoencoders [3.1965908200266173]
The proliferation of zero-day threats (ZDTs) to companies' networks has been immensely costly.
Deep learning methods are an attractive option for their ability to capture highly nonlinear behavior patterns.
The models presented here are also trained and evaluated with two more datasets, and continue to show promising results even when generalizing to new network topologies.
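As a rough, encoder-only illustration of the metric-learning idea (the reconstruction branch of the autoencoder is omitted, and the architecture, distance, and threshold below are our assumptions):

```python
# Rough sketch of metric-learning-based zero-day flagging: embed flows
# so known classes cluster, then flag flows far from every class
# centroid. Architecture, distance, and threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowEncoder(nn.Module):
    def __init__(self, n_features=40, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # unit-norm embeddings

def is_zero_day(encoder, flow, centroids, threshold=0.5):
    """Flag a flow whose embedding is far from all known-class centroids."""
    with torch.no_grad():
        z = encoder(flow.unsqueeze(0))            # (1, dim)
        dists = torch.cdist(z, centroids)         # (1, n_known_classes)
    return bool(dists.min() > threshold)
```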
arXiv Detail & Related papers (2022-11-01T13:12:20Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
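A minimal sketch of the up-sampling idea, with the per-client scoring rule and softmax temperature as our assumptions rather than the paper's method:

```python
# Minimal sketch: bias per-round client selection toward clients whose
# updates help accuracy on a held-out target-population set. The
# softmax scoring and temperature are assumptions, not the paper's rule.
import numpy as np

def selection_probs(target_scores, temperature=0.5):
    """Softmax over per-client scores (e.g., target-set accuracy after
    applying each client's last update to the global model)."""
    z = np.asarray(target_scores, dtype=float) / temperature
    z -= z.max()                                  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def sample_round(client_ids, target_scores, m, rng=None):
    """Pick m clients for the round, up-sampling likely contributors."""
    rng = rng or np.random.default_rng()
    return rng.choice(client_ids, size=m, replace=False,
                      p=selection_probs(target_scores))
```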
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- An anomaly detection approach for backdoored neural networks: face recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86% of the time.
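The summary does not name the attack family; a one-step gradient-sign (FGSM-style) perturbation is a representative white-box sketch, with the model, loss, and epsilon as placeholders:

```python
# FGSM-style sketch of a white-box input perturbation; the model,
# loss, and epsilon are placeholders, not the paper's exact attack.
import torch

def perturb_input(model, x, loss_fn, epsilon=0.01):
    """One gradient-sign step on the NN input, e.g. pushing the
    predicted power allocation toward an infeasible region."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x)).backward()
    return (x + epsilon * x.grad.sign()).detach()
```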
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Dynamic backdoor attacks against federated learning [0.5482532589225553]
Federated Learning (FL) is a machine learning framework that enables millions of participants to collaboratively train a model without compromising data privacy or security.
In this paper, we focus on dynamic backdoor attacks in the FL setting, where the goal of the adversary is to reduce the performance of the model on targeted tasks.
To the best of our knowledge, this is the first paper to focus on dynamic backdoor attacks in the FL setting.
arXiv Detail & Related papers (2020-11-15T01:32:58Z)