AFLGuard: Byzantine-robust Asynchronous Federated Learning
- URL: http://arxiv.org/abs/2212.06325v1
- Date: Tue, 13 Dec 2022 02:07:58 GMT
- Title: AFLGuard: Byzantine-robust Asynchronous Federated Learning
- Authors: Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley
- Abstract summary: Asynchronous FL aims to enable the server to update the model once any client's model update reaches it without waiting for other clients' model updates.
Asynchronous FL is also vulnerable to poisoning attacks, in which malicious clients manipulate the model via poisoning their local data and/or model updates sent to the server.
We propose AFLGuard, a Byzantine-robust asynchronous FL method.
- Score: 41.47838381772442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging machine learning paradigm, in which
clients jointly learn a model with the help of a cloud server. A fundamental
challenge of FL is that the clients are often heterogeneous, e.g., they have
different computing powers, and thus the clients may send model updates to the
server with substantially different delays. Asynchronous FL aims to address
this challenge by enabling the server to update the model once any client's
model update reaches it without waiting for other clients' model updates.
However, like synchronous FL, asynchronous FL is also vulnerable to poisoning
attacks, in which malicious clients manipulate the model via poisoning their
local data and/or model updates sent to the server. Byzantine-robust FL aims to
defend against poisoning attacks. In particular, Byzantine-robust FL can learn
an accurate model even if some clients are malicious and have Byzantine
behaviors. However, most existing studies on Byzantine-robust FL focused on
synchronous FL, leaving asynchronous FL largely unexplored. In this work, we
bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL
method. We show that, both theoretically and empirically, AFLGuard is robust
against various existing and adaptive poisoning attacks (both untargeted and
targeted). Moreover, AFLGuard outperforms existing Byzantine-robust
asynchronous FL methods.
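To make the asynchronous update rule in the abstract concrete, below is a minimal Python sketch of a server that applies each client's model update as soon as it arrives, without waiting for other clients, and discards suspicious updates with a generic distance-based acceptance rule. The filtering criterion, the trusted server-side reference update, and all names and parameters (server_side_update, is_acceptable, trust_threshold, the learning rate) are illustrative assumptions, not AFLGuard's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
global_model = np.zeros(dim)
lr = 0.1
trust_threshold = 2.0  # hypothetical tolerance factor, not from the paper

def server_side_update(model):
    """Reference update computed on a small trusted server-side dataset
    (simulated here with random noise around the negative model)."""
    return -model + rng.normal(0.0, 0.1, size=model.shape)

def is_acceptable(client_update, reference_update, factor=trust_threshold):
    """Generic distance-based filter: accept the client's update only if it
    is not too far from the server's reference update."""
    return (np.linalg.norm(client_update - reference_update)
            <= factor * np.linalg.norm(reference_update))

def on_client_update_arrival(client_update):
    """Asynchronous step: the server updates the model immediately when a
    single client's update arrives; rejected updates leave the model unchanged."""
    global global_model
    reference = server_side_update(global_model)
    if is_acceptable(client_update, reference):
        global_model = global_model + lr * client_update

# Example: one benign and one crudely malicious update arrive in turn.
benign = server_side_update(global_model) + rng.normal(0.0, 0.05, size=dim)
malicious = 100.0 * rng.normal(size=dim)
for update in (benign, malicious):
    on_client_update_arrival(update)
print(global_model)
```

In this sketch only the benign update moves the model; the malicious one fails the distance check and is dropped, which is the general behavior a Byzantine-robust asynchronous method aims for.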
Related papers
- Do We Really Need to Design New Byzantine-robust Aggregation Rules? [9.709243052112921]
Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server.
The decentralized aspect of FL makes it susceptible to poisoning attacks, where malicious clients can manipulate the global model.
We present FoundationFL, a novel defense mechanism against poisoning attacks.
arXiv Detail & Related papers (2025-01-29T02:28:03Z) - BlindFL: Segmented Federated Learning with Fully Homomorphic Encryption [0.0]
Federated learning (FL) is a privacy-preserving edge-to-cloud technique used for training and deploying AI models on edge devices.
BlindFL is a framework for global model aggregation in which clients encrypt and send a subset of their local model updates.
BlindFL significantly impedes client-side model poisoning attacks, a first for single-key, FHE-based FL schemes.
arXiv Detail & Related papers (2025-01-20T18:42:21Z) - Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective [65.65471972217814]
Federated recommendation (FR), built on federated learning (FL), keeps personal data on the local client while updating a model collaboratively.
FR has a unique sparse aggregation mechanism, where the embedding of each item is updated by only a subset of clients, rather than by all clients as in the dense aggregation of general FL.
In this paper, we reformulate Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit.
We propose a family of effective attack strategies, named Spattack, which exploit the vulnerability in sparse aggregation and are categorized along the adversary's knowledge and capability.
arXiv Detail & Related papers (2025-01-06T15:19:26Z) - Securing Federated Learning Against Novel and Classic Backdoor Threats During Foundation Model Integration [8.191214701984162]
Federated learning (FL) enables decentralized model training while preserving privacy.
Recently, integrating Foundation Models (FMs) into FL has boosted performance but also introduced a novel backdoor attack mechanism.
We propose a novel data-free defense strategy by constraining abnormal activations in the hidden feature space during model aggregation on the server.
arXiv Detail & Related papers (2024-10-23T05:54:41Z) - Asynchronous Byzantine Federated Learning [4.6792910030704515]
Federated learning (FL) enables a set of geographically distributed clients to collectively train a model through a server.
Our solution is one of the first Byzantine-resilient and asynchronous FL algorithms.
We compare the performance of our solution with state-of-the-art algorithms on both image and text datasets.
arXiv Detail & Related papers (2024-06-03T15:29:38Z) - FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models [2.7539214125526534]
Federated Learning (FL) trains a global model with numerous clients.
Recent research proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model.
We propose FLGuard, a novel Byzantine-robust FL method that detects malicious clients and discards malicious local updates.
arXiv Detail & Related papers (2024-03-05T10:36:27Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can use federated learning (FL) to jointly learn a globally shared model.
FL is vulnerable to the cross-client generative adversarial network (GAN)-based attack (C-GANs attack).
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large local data quantities to amplify the impact of their updates in model aggregation.
Existing FL defenses, while all handling malicious model updates, either treat all claimed quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z) - Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models [58.631918656336005]
We propose a novel attack that reveals private user text by deploying malicious parameter vectors.
Unlike previous attacks on FL, the attack exploits characteristics of both the Transformer architecture and the token embedding.
arXiv Detail & Related papers (2022-01-29T22:38:21Z)