FedDefender: Client-Side Attack-Tolerant Federated Learning
- URL: http://arxiv.org/abs/2307.09048v1
- Date: Tue, 18 Jul 2023 08:00:41 GMT
- Title: FedDefender: Client-Side Attack-Tolerant Federated Learning
- Authors: Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, and Meeyoung Cha
- Abstract summary: Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new client-side defense mechanism, called FedDefender, that helps benign clients train robust local models.
- Score: 60.576073964874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables learning from decentralized data sources without
compromising privacy, which makes it a crucial technique. However, it is
vulnerable to model poisoning attacks, where malicious clients interfere with
the training process. Previous defense mechanisms have focused on the server
side, using careful model aggregation, but this may not be effective when the
data is not identically distributed or when attackers can access the
information of benign clients. In this paper, we propose a new client-side
defense mechanism, called FedDefender, to help benign
clients train robust local models and avoid the adverse impact of malicious
model updates from attackers, even when a server-side defense cannot identify
or remove adversaries. Our method consists of two main components: (1)
attack-tolerant local meta update and (2) attack-tolerant global knowledge
distillation. These components are used to find noise-resilient model
parameters while accurately extracting knowledge from a potentially corrupted
global model. Our client-side defense strategy has a flexible structure and can
work in conjunction with any existing server-side strategies. Evaluations of
real-world scenarios across multiple datasets show that the proposed method
enhances the robustness of federated learning against model poisoning attacks.
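To make the two components concrete, here is a minimal PyTorch-style sketch of one client-side training step. The random-perturbation scheme, the confidence-and-agreement mask, and all hyperparameters (noise_std, T, conf) are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# A minimal client-side sketch in the spirit of FedDefender's two components.
# The perturbation scheme, masking rule, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def noise_resilient_step(model, x, y, optimizer, noise_std=0.01):
    """(1) Attack-tolerant local meta update: take the gradient at a randomly
    perturbed point in parameter space so updates favor noise-resilient minima."""
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = noise_std * torch.randn_like(p)
            p.add_(n)                      # temporarily perturb the weights
            noises.append(n)
    loss = F.cross_entropy(model(x), y)    # loss at the perturbed point
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)                      # restore weights before stepping
    optimizer.step()
    return loss.item()

def guarded_distillation_loss(student_logits, teacher_logits, y, T=2.0, conf=0.7):
    """(2) Attack-tolerant global knowledge distillation: distill from the
    (possibly corrupted) global model only on samples where it is both
    confident and consistent with the local label."""
    teacher_probs = F.softmax(teacher_logits / T, dim=1)
    confident = teacher_probs.max(dim=1).values >= conf
    agrees = teacher_logits.argmax(dim=1) == y
    mask = (confident & agrees).float()
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  teacher_probs, reduction="none").sum(dim=1)
    return (mask * kd).mean() * (T * T)
```

In a full client loop, the guarded distillation term would be added to the local task loss before taking the noise-resilient step.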
Related papers
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective against such attacks (for context, a classic Byzantine-robust aggregation baseline is sketched after this item).
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
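The summary above does not reproduce InferGuard's aggregation rule; for context only, here is a minimal NumPy sketch of the coordinate-wise median, a standard Byzantine-robust baseline that rules of this family build on. The function name and shapes are illustrative.

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Classic Byzantine-robust aggregation baseline: aggregate each model
    coordinate as the median across clients, so a minority of arbitrarily
    corrupted updates cannot drag the aggregate far from the benign values."""
    W = np.stack(client_updates)   # shape: (n_clients, n_params)
    return np.median(W, axis=0)
```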
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients are able to corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of each client's model weights (a hedged sketch of probability-weighted aggregation follows this item).
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
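FedBayes is summarized only at a high level above. The sketch below shows one plausible reading, probability-weighted aggregation with per-coordinate Gaussian likelihoods fit across clients; the scoring and softmax weighting are assumptions, not the paper's exact rule.

```python
import numpy as np

def likelihood_weighted_average(client_updates):
    """Score each client's update by its average log-likelihood under
    per-coordinate Gaussians fit to the client population, then aggregate
    with softmax weights so statistical outliers contribute almost nothing."""
    W = np.stack(client_updates)                       # (n_clients, n_params)
    mu = W.mean(axis=0)
    sigma = W.std(axis=0) + 1e-8                       # avoid division by zero
    loglik = -0.5 * (((W - mu) / sigma) ** 2
                     + np.log(2 * np.pi * sigma ** 2))
    scores = loglik.mean(axis=1)                       # one score per client
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ W                                 # weighted average update
```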
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from the client side (the underlying gradient-inversion loop is sketched after this item).
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
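The CGI variant is not detailed above, but the gradient-inversion loop it builds on (in the style of Deep Leakage from Gradients) is standard. Below is a minimal PyTorch sketch; the optimizer, step count, and soft-label relaxation are illustrative choices.

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, target_grads, x_shape, n_classes,
                     steps=300, lr=0.1):
    """Classic gradient-inversion loop: optimize a dummy input/label pair so
    that its gradients match the gradients observed from a victim client."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(x_shape[0], n_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Soft labels via softmax keep the dummy label differentiable.
        loss = F.cross_entropy(model(dummy_x), F.softmax(dummy_y, dim=1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - t) ** 2).sum()
                    for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return dummy_x.detach()  # approximate reconstruction of the training batch
```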
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated Learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not (one way to operationalize this overlap is sketched after this item).
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
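A minimal NumPy sketch of the stated observation follows: score two flattened updates by the overlap of their top-k and bottom-k magnitude index sets. The fraction k_frac and the averaging of the two overlaps are illustrative; the paper's exact similarity measure may differ.

```python
import numpy as np

def critical_overlap(update_a, update_b, k_frac=0.01):
    """Score the similarity of two local updates by the overlap of their
    top-k and bottom-k magnitude parameter index sets; benign models tend
    to share these critical sets, while poisoned models tend not to."""
    n = update_a.size
    k = max(1, int(k_frac * n))
    order_a = np.argsort(np.abs(update_a))
    order_b = np.argsort(np.abs(update_b))
    top = len(set(order_a[-k:]) & set(order_b[-k:])) / k
    bottom = len(set(order_a[:k]) & set(order_b[:k])) / k
    return 0.5 * (top + bottom)   # 1.0 means identical critical sets
```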
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL suffers from the cross-client generative adversarial network (GAN)-based attack, abbreviated as the C-GANs attack.
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or data properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone (a minimal probe sketch follows this item).
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
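A minimal sketch of such a probe follows, using scikit-learn's LogisticRegression on random placeholder data; in the actual attack setting, X would hold aggregated updates observed across shadow rounds and y the target client's property label in each round.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data only: random stand-ins for observed aggregate updates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))            # flattened aggregated updates
y = rng.integers(0, 2, size=200)           # binary property label per round

# Fit a simple linear probe and check held-out accuracy.
probe = LogisticRegression(max_iter=1000).fit(X[:160], y[:160])
print("held-out probe accuracy:", probe.score(X[160:], y[160:]))
```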
- CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% true-positive rate and true-negative rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting in which no local data needs to be shared and privacy can be well protected.
We propose a model update based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
- Learning to Detect Malicious Clients for Robust Federated Learning [20.5238037608738]
Federated learning systems are vulnerable to attacks from malicious clients.
We propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates.
arXiv Detail & Related papers (2020-02-01T14:09:48Z)