DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning
- URL: http://arxiv.org/abs/2305.01267v1
- Date: Tue, 2 May 2023 09:04:34 GMT
- Title: DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning
- Authors: Wenqiang Sun, Sen Li, Yuchang Sun, Jun Zhang
- Abstract summary: Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
The existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack.
We propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the Server (DABS), where the server directly modifies the global model to backdoor an FL system.
- Score: 14.312593000209693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) attempts to train a global model by aggregating local
models from distributed devices under the coordination of a central server.
However, the existence of a large number of heterogeneous devices makes FL
vulnerable to various attacks, especially the stealthy backdoor attack.
A backdoor attack aims to trick a neural network into misclassifying data to a
target label by injecting specific triggers while keeping correct predictions on
the original training data. Existing works focus on client-side attacks, which try
to poison the global model by modifying the local datasets. In this work, we
propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the
Server (DABS), where the server directly modifies the global model to backdoor
an FL system. Extensive simulation results show that this attack scheme
achieves a higher attack success rate compared with baseline methods while
maintaining normal accuracy on the clean data.
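The abstract does not spell out how the server builds the backdoor without any data, so the following is only a minimal sketch of the threat model, assuming the server blends pre-built backdoored weights into the honestly aggregated global model before broadcasting it; the blending step, its coefficient, and the origin of the backdoored weights are hypothetical, not DABS's actual construction.

```python
# Minimal sketch of a server-side backdoor in FL, NOT the paper's method:
# a malicious server tampers with the aggregated global model before
# broadcast. `alpha` and `backdoored` are illustrative assumptions.
import numpy as np

def fedavg(client_models):
    """Standard FedAvg: element-wise mean of the clients' parameter dicts."""
    keys = client_models[0].keys()
    return {k: np.mean([m[k] for m in client_models], axis=0) for k in keys}

def server_backdoor(global_model, backdoored, alpha=0.2):
    """Hypothetical server-side edit: interpolate toward backdoored weights.

    Unlike client-side poisoning, this requires no control over any local
    dataset; the server simply edits the model it is trusted to aggregate.
    """
    return {k: (1 - alpha) * v + alpha * backdoored[k]
            for k, v in global_model.items()}

# One attacked round: honest aggregation, then the server's stealthy edit.
clients = [{"w": np.random.randn(4, 4)} for _ in range(5)]
backdoored = {"w": np.random.randn(4, 4)}   # stand-in for attacker weights
poisoned_global = server_backdoor(fedavg(clients), backdoored)
```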
Related papers
- BadSFL: Backdoor Attack against Scaffold Federated Learning [16.104941796138128]
Federated learning (FL) enables the training of deep learning models on distributed clients to preserve data privacy.
BadSFL is a novel backdoor attack method designed for the FL framework using the scaffold aggregation algorithm in non-IID settings.
BadSFL remains effective in the global model for over 60 rounds, up to 3 times longer than existing baseline attacks.
arXiv Detail & Related papers (2024-11-25T07:46:57Z)
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
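For intuition, here is a minimal sketch of trigger optimization in this spirit, assuming a frozen global model and an additive image trigger; the loss and hyperparameters are illustrative, not DPOT's exact objective.

```python
# Illustrative sketch: tune trigger pixels by gradient descent so a frozen
# copy of the global model predicts the attacker's target label.
import torch
import torch.nn.functional as F

def optimize_trigger(model, images, target_label, steps=100, lr=0.1):
    """Gradient-descend on trigger pixels; model weights are never updated."""
    trigger = torch.zeros(1, *images.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    targets = torch.full((images.size(0),), target_label, dtype=torch.long)
    for _ in range(steps):
        poisoned = (images + trigger).clamp(0, 1)  # stamp trigger, keep pixels valid
        loss = F.cross_entropy(model(poisoned), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()
```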
arXiv Detail & Related papers (2024-05-10T02:44:25Z)
- Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey [28.88186038735176]
Federated Learning (FL) has been increasingly considered for applications in wireless communication networks (WCNs).
In general, the non-independent and identically distributed (non-IID) data of WCNs raises concerns about robustness.
This survey provides a comprehensive review of the latest backdoor attacks and defense mechanisms.
arXiv Detail & Related papers (2023-12-14T05:52:29Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
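A minimal sketch of the frequency-domain idea, assuming flattened update vectors; a real FFT stands in for the paper's transform, and a simple robust-distance filter replaces its actual filtering mechanism.

```python
# Rough sketch: move each client update into the frequency domain, keep the
# low-frequency components that carry the dominant training signal, and drop
# updates whose spectra deviate from the majority before aggregating.
import numpy as np

def spectral_filter(updates, keep=64, thresh=2.0):
    """updates: list of equally sized 1-D arrays, each longer than `keep`."""
    specs = np.stack([np.abs(np.fft.rfft(u))[:keep] for u in updates])
    median_spec = np.median(specs, axis=0)
    dists = np.linalg.norm(specs - median_spec, axis=1)
    cutoff = thresh * np.median(dists)          # robust outlier threshold
    accepted = [u for u, d in zip(updates, dists) if d <= cutoff]
    return np.mean(accepted, axis=0)            # aggregate surviving updates
```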
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
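This observation can be sketched as an overlap score over critical-parameter index sets; the value of k and the scoring rule below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: score each client by how much its top-k and bottom-k update indices
# overlap with the other clients'. Benign clients should largely agree;
# poisoned clients get unusually low scores.
import numpy as np

def critical_sets(update, k):
    order = np.argsort(update)                  # ascending by update value
    return set(order[-k:]), set(order[:k])      # (top-k, bottom-k) indices

def agreement_scores(updates, k=100):
    sets = [critical_sets(u, k) for u in updates]
    n = len(updates)
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            scores[i] += len(sets[i][0] & sets[j][0]) + len(sets[i][1] & sets[j][1])
    return scores / (2 * k * (n - 1))           # 1.0 means perfect agreement

# Clients with unusually low scores would be down-weighted or excluded.
```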
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection [24.562359531692504]
We propose DifFense, an automated defense framework to protect an FL system from backdoor attacks.
Our detection method reduces the average backdoor accuracy of the global model to below 4% and achieves a false negative rate of zero.
arXiv Detail & Related papers (2022-02-21T17:13:03Z)
- Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis [49.38856542573576]
Edge devices in federated learning usually have much more limited computation and communication resources compared to servers in a data center.
In this work, we empirically demonstrate that Lottery Ticket models are equally vulnerable to backdoor attacks as the original dense models.
arXiv Detail & Related papers (2021-09-22T04:19:59Z)