Towards Efficient and Certified Recovery from Poisoning Attacks in Federated Learning
- URL: http://arxiv.org/abs/2401.08216v2
- Date: Fri, 19 Jan 2024 05:31:07 GMT
- Title: Towards Efficient and Certified Recovery from Poisoning Attacks in Federated Learning
- Authors: Yu Jiang, Jiyuan Shen, Ziyao Liu, Chee Wei Tan, Kwok-Yan Lam
- Abstract summary: Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients manipulate their updates to affect the global model.
In this paper, we show that highly effective recovery can still be achieved based on (i) selective historical information rather than all historical information and (ii) a historical model that has not been significantly affected by the malicious clients.
We introduce Crab, an efficient and certified recovery method, which relies on selective information storage and adaptive model rollback.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is vulnerable to poisoning attacks, where malicious
clients manipulate their updates to affect the global model. Although various
methods exist for detecting those clients in FL, identifying malicious clients
requires sufficient model updates, and hence by the time malicious clients are
detected, FL models have already been poisoned. Thus, a method is needed to
recover an accurate global model after malicious clients are identified.
Current recovery methods rely on (i) all historical information from
participating FL clients and (ii) the initial model unaffected by the malicious
clients, leading to a high demand for storage and computational resources. In
this paper, we show that highly effective recovery can still be achieved based
on (i) selective historical information rather than all historical information
and (ii) a historical model that has not been significantly affected by
malicious clients rather than the initial model. In this scenario, while
maintaining comparable recovery performance, we can accelerate recovery and
decrease memory consumption. Following this concept, we introduce
Crab, an efficient and certified recovery method, which relies on selective
information storage and adaptive model rollback. Theoretically, we demonstrate
that the difference between the global model recovered by Crab and the one
recovered by train-from-scratch can be bounded under certain assumptions. Our
empirical evaluation, conducted across three datasets with multiple machine
learning models and a variety of untargeted and targeted poisoning attacks,
reveals that Crab is both accurate and efficient, and consistently outperforms
previous approaches in terms of both recovery speed and memory consumption.
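The recovery loop implied by these two ideas can be made concrete. Below is a minimal Python sketch of selective storage plus rollback-and-replay; it is not the authors' Crab implementation, the rollback criterion and the names (recover, checkpoints, stored_updates) are illustrative assumptions, and the certified bound is not modeled.

```python
import numpy as np

def recover(checkpoints, stored_updates, malicious, lr=1.0):
    """Roll back to a stored checkpoint, then replay retained benign updates.

    checkpoints    : {round: model weights (np.ndarray)} for stored rounds
    stored_updates : {round: {client_id: update (np.ndarray)}}
    malicious      : set of client ids identified as malicious
    """
    # Rollback criterion (one simple choice): the latest stored round
    # that precedes the first round in which a malicious client appears.
    first_bad = min((r for r, ups in stored_updates.items()
                     if malicious & ups.keys()),
                    default=max(stored_updates) + 1)
    start = max((r for r in checkpoints if r < first_bad),
                default=min(checkpoints))
    model = checkpoints[start].copy()

    # Replay only the benign updates from the selectively stored rounds.
    for r in sorted(r for r in stored_updates if r >= start):
        benign = [u for cid, u in stored_updates[r].items()
                  if cid not in malicious]
        if benign:
            model = model + lr * np.mean(benign, axis=0)
    return model
```

Because only a subset of rounds is stored and the replay starts from a late checkpoint, this loop touches far less history than train-from-scratch, which is the source of the speed and memory savings described above.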
Related papers
- Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense [3.685395311534351]
Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their private local data.
FL systems are vulnerable to attacks mounted by malicious clients through data poisoning and model poisoning.
Existing defense methods typically focus on mitigating specific types of poisoning and are often ineffective against unseen attack types.
arXiv Detail & Related papers (2024-08-05T20:27:45Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
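As one way to picture the frequency-domain idea, here is a hedged Python sketch: it DCT-transforms flattened updates and drops clients whose low-frequency spectra are outliers. The actual FreqFed pipeline clusters the spectra rather than thresholding a distance, so the cutoff below is an assumption.

```python
import numpy as np
from scipy.fft import dct

def frequency_filtered_aggregate(updates, keep=64):
    """Filter client updates by their low-frequency DCT spectra, then average.

    updates : list of np.ndarray, one flattened-compatible update per client
    keep    : number of low-frequency coefficients to compare (assumes
              each update has at least `keep` parameters)
    """
    flat = np.stack([u.ravel() for u in updates])
    # Map each update to the frequency domain; keep the low frequencies.
    spectra = dct(flat, axis=1, norm="ortho")[:, :keep]

    # Drop clients whose spectra are far from the coordinate-wise median.
    center = np.median(spectra, axis=0)
    dists = np.linalg.norm(spectra - center, axis=1)
    cutoff = np.median(dists) + 2 * dists.std()  # simple robust cutoff
    kept = dists <= cutoff

    # Aggregate the surviving updates in parameter space.
    return flat[kept].mean(axis=0).reshape(updates[0].shape)
```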
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from the client side.
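For intuition, a classic gradient-matching reconstruction (DLG-style) can be sketched in a few lines of PyTorch. CGI's client-side poisoning variant differs, so this is background illustration of gradient inversion, not the paper's method.

```python
import torch
import torch.nn as nn

# Victim model and one real training example (toy setting).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
x_real = torch.randn(1, 1, 28, 28)
y_real = torch.tensor([3])

# The gradients a client would share in FL; the attacker observes these.
true_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_real), y_real), model.parameters())]

# Attacker optimizes dummy data and soft labels to match those gradients.
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = torch.sum(torch.softmax(y_dummy, -1)
                     * -torch.log_softmax(model(x_dummy), -1))
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((g - tg) ** 2).sum() for g, tg in zip(grads, true_grads))
    match.backward()
    return match

for _ in range(50):
    opt.step(closure)  # x_dummy converges toward x_real
```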
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection [29.491675102478798]
We introduce SureFED, a novel framework for robust federated learning.
SureFED establishes trust using the local information of benign clients.
We theoretically prove the robustness of our algorithm against data and model poisoning attacks.
arXiv Detail & Related papers (2023-08-04T23:51:05Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The resulting defense, MESAS, is the first to be robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
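The multi-metric idea in the title can be illustrated with a toy filter that only keeps updates that look unremarkable under every metric at once, so an adversary must evade all of them simultaneously. The metrics below are illustrative stand-ins, not the statistical tests MESAS actually uses.

```python
import numpy as np

def multi_metric_filter(updates, z_max=3.5):
    """Keep only client updates that pass every metric's outlier test.

    updates : list of np.ndarray, one update per client
    z_max   : robust z-score cutoff applied to each metric
    """
    X = np.stack([u.ravel() for u in updates])
    mean_dir = X.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)

    # Several metrics per update (illustrative choices).
    metrics = np.stack([
        np.linalg.norm(X, axis=1),  # update magnitude
        X @ mean_dir,               # alignment with the mean direction
        X.var(axis=1),              # per-update coordinate spread
    ], axis=1)

    # Median/MAD outlier test, applied jointly across all metrics.
    med = np.median(metrics, axis=0)
    mad = np.median(np.abs(metrics - med), axis=0) + 1e-12
    ok = (np.abs(metrics - med) / mad < z_max).all(axis=1)
    return [u for u, keep in zip(updates, ok) if keep]
```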
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- BayBFed: Bayesian Backdoor Defense for Federated Learning [17.433543798151746]
Federated learning (FL) allows participants to jointly train a machine learning model without sharing their private data with others.
BayBFed proposes to utilize probability distributions over client updates to detect malicious updates in FL.
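As a toy stand-in for scoring updates by their probability, one can fit a per-coordinate Gaussian across clients and drop low-likelihood updates. BayBFed's actual detector is a hierarchical Bayesian model over client updates, so the sketch below is only a rough analogy.

```python
import numpy as np

def likelihood_filter(updates, z_max=3.0):
    """Drop client updates that are improbable under a fitted Gaussian.

    updates : list of np.ndarray, one update per client
    z_max   : cutoff on the mean absolute z-score per client
    """
    X = np.stack([u.ravel() for u in updates])
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
    # Higher mean |z| means the update is less probable under the fit.
    z = np.abs((X - mu) / sigma).mean(axis=1)
    return [u for u, zi in zip(updates, z) if zi < z_max]
```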
arXiv Detail & Related papers (2023-01-23T16:01:30Z)
- FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information [67.8846134295194]
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model.
We propose FedRecover, which can recover an accurate global model from poisoning attacks at a small cost to the clients.
arXiv Detail & Related papers (2022-10-20T00:12:34Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large data quantities to amplify the impact of their model updates during aggregation.
Existing FL defenses, while handling malicious model updates, either treat all reported quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
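One simple realization of quantity-aware aggregation is to clip each claimed dataset size at a robust quantile before weighting, so no single claim can dominate the average. This is an illustrative sketch of the idea, not FedRA's algorithm.

```python
import numpy as np

def quantity_aware_aggregate(updates, claimed_sizes, cap_q=0.8):
    """Weight updates by reported data quantities, with robust clipping.

    updates       : list of np.ndarray, one update per client
    claimed_sizes : reported local dataset sizes (possibly inflated)
    cap_q         : quantile at which claims are capped
    """
    sizes = np.asarray(claimed_sizes, dtype=float)
    # Cap every claim so an inflated quantity cannot dominate.
    cap = np.quantile(sizes, cap_q)
    weights = np.minimum(sizes, cap)
    weights /= weights.sum()

    X = np.stack([u.ravel() for u in updates])
    return (weights[:, None] * X).sum(axis=0).reshape(updates[0].shape)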
arXiv Detail & Related papers (2022-05-22T15:13:23Z)