FedRecover: Recovering from Poisoning Attacks in Federated Learning
using Historical Information
- URL: http://arxiv.org/abs/2210.10936v1
- Date: Thu, 20 Oct 2022 00:12:34 GMT
- Title: FedRecover: Recovering from Poisoning Attacks in Federated Learning
using Historical Information
- Authors: Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang and Neil Zhenqiang Gong
- Abstract summary: Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model.
We propose FedRecover, which can recover an accurate global model from poisoning attacks at a small cost to the clients.
- Score: 67.8846134295194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is vulnerable to poisoning attacks in which malicious
clients poison the global model via sending malicious model updates to the
server. Existing defenses focus on preventing a small number of malicious
clients from poisoning the global model via robust federated learning methods
and detecting malicious clients when there are a large number of them. However,
it is still an open challenge how to recover the global model from poisoning
attacks after the malicious clients are detected. A naive solution is to remove
the detected malicious clients and train a new global model from scratch, which
incurs a large cost that may be intolerable for resource-constrained clients such
as smartphones and IoT devices.
In this work, we propose FedRecover, which can recover an accurate global
model from poisoning attacks at a small cost to the clients. Our key idea is
that the server estimates the clients' model updates instead of asking the
clients to compute and communicate them during the recovery process. In
particular, the server stores the global models and clients' model updates in
each round, when training the poisoned global model. During the recovery
process, the server estimates a client's model update in each round using its
stored historical information. Moreover, we further optimize FedRecover to
recover a more accurate global model using warm-up, periodic correction,
abnormality fixing, and final tuning strategies, in which the server asks the
clients to compute and communicate their exact model updates. Theoretically, we
show that the global model recovered by FedRecover is close to or the same as
that recovered by train-from-scratch under some assumptions. Empirically, our
evaluation on four datasets, three federated learning methods, as well as
untargeted and targeted poisoning attacks (e.g., backdoor attacks) shows that
FedRecover is both accurate and efficient.
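The abstract describes FedRecover's mechanism at a high level: the server replays training without the detected malicious clients, substituting estimated client updates derived from the stored history, and falls back to exact client computation during warm-up, periodic correction, abnormality fixing, and final tuning. The sketch below illustrates that control flow under stated assumptions; it is not the authors' implementation. In particular, exact_client_update is a hypothetical stand-in for local training, the hyperparameter names are illustrative, and the paper's L-BFGS-based integrated-Hessian estimator is replaced by a crude secant-style surrogate inside estimate_update.
```python
# Minimal sketch of a FedRecover-style recovery loop (illustrative, not the authors' code).
import numpy as np

WARMUP_ROUNDS = 5            # initial rounds with exact client updates (warm-up)
CORRECTION_PERIOD = 10       # every k-th round uses exact updates (periodic correction)
ABNORMALITY_THRESHOLD = 1.0  # large estimated coordinates trigger exact updates (abnormality fixing)
FINAL_TUNING_ROUNDS = 5      # last rounds with exact client updates (final tuning)
LEARNING_RATE = 0.1

def exact_client_update(client_id, global_model, round_idx):
    """Hypothetical stand-in for a client's real local-training step."""
    rng = np.random.default_rng(client_id * 100_003 + round_idx)
    return -0.1 * global_model + 0.01 * rng.standard_normal(global_model.shape)

def estimate_update(stored_update, stored_model, current_model):
    """Estimate a client's update from stored history.

    FedRecover's estimate has the form g_hat = g_stored + H_hat @ (w - w_stored),
    where H_hat approximates an integrated Hessian (via L-BFGS in the paper).
    Here a crude scalar secant surrogate replaces H_hat, purely for illustration.
    """
    direction = current_model - stored_model
    scale = np.linalg.norm(stored_update) / (np.linalg.norm(stored_model) + 1e-12)
    return stored_update + scale * direction

def recover(stored_models, stored_updates, benign_clients, total_rounds):
    """Re-run training without the detected malicious clients, mostly from estimates."""
    w = stored_models[0].copy()  # restart from the same initial global model
    for t in range(total_rounds):
        use_exact = (
            t < WARMUP_ROUNDS                           # warm-up
            or t % CORRECTION_PERIOD == 0               # periodic correction
            or t >= total_rounds - FINAL_TUNING_ROUNDS  # final tuning
        )
        updates = []
        for cid in benign_clients:
            if use_exact:
                g = exact_client_update(cid, w, t)
            else:
                g = estimate_update(stored_updates[t][cid], stored_models[t], w)
                if np.max(np.abs(g)) > ABNORMALITY_THRESHOLD:
                    # Abnormality fixing: distrust a large estimate, ask for the exact update.
                    g = exact_client_update(cid, w, t)
            updates.append(g)
        w = w + LEARNING_RATE * np.mean(updates, axis=0)  # FedAvg-style aggregation step
    return w

# Toy usage: fabricate a stored history (20 rounds, 4 benign clients, 10-dim model).
dim, rounds, clients = 10, 20, [0, 1, 2, 3]
stored_models = [np.random.default_rng(t).standard_normal(dim) for t in range(rounds)]
stored_updates = [{c: exact_client_update(c, stored_models[t], t) for c in clients}
                  for t in range(rounds)]
recovered_model = recover(stored_models, stored_updates, clients, rounds)
```
In this sketch, the exact-update rounds are the only rounds in which benign clients do extra work, so the warm-up/correction/tuning schedule is what controls the clients' recovery cost and limits how far the estimated trajectory drifts from train-from-scratch.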
Related papers
- Towards Efficient and Certified Recovery from Poisoning Attacks in
Federated Learning [17.971060689461883]
Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients manipulate their updates to affect the global model.
In this paper, we show that highly effective recovery can still be achieved based on selective historical information.
We introduce Crab, an efficient and certified recovery method, which relies on selective information storage and adaptive model rollback.
arXiv Detail & Related papers (2024-01-16T09:02:34Z) - FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against
Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients are able to corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights.
arXiv Detail & Related papers (2023-12-04T21:37:50Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from the client side.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
However, FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique to improve current popular FL schemes so that they can resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - FLDetector: Defending Federated Learning Against Model Poisoning Attacks
via Detecting Malicious Clients [39.88152764752553]
Federated learning (FL) is vulnerable to model poisoning attacks.
Malicious clients corrupt the global model via sending manipulated model updates to the server.
Our FLDetector aims to detect and remove the majority of the malicious clients.
arXiv Detail & Related papers (2022-07-19T11:44:24Z) - Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z) - MPAF: Model Poisoning Attacks to Federated Learning based on Fake
Clients [51.973224448076614]
We propose the first Model Poisoning Attack based on Fake clients, called MPAF.
MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted.
arXiv Detail & Related papers (2022-03-16T14:59:40Z)