FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive
Models
- URL: http://arxiv.org/abs/2403.02846v1
- Date: Tue, 5 Mar 2024 10:36:27 GMT
- Title: FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive
Models
- Authors: Younghan Lee, Yungi Cho, Woorim Han, Ho Bae, and Yunheung Paek
- Abstract summary: Federated Learning (FL) trains a global model with numerous clients.
Recent research has proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model.
We propose FLGuard, a novel Byzantine-robust FL method that detects malicious clients and discards their malicious local updates.
- Score: 2.7539214125526534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) trains a global model with numerous
clients that share only the parameters of local models trained on their
private datasets. Clients can thus obtain a high-performance deep learning
(DL) model without revealing their private data. However, recent research has
proposed poisoning attacks that cause a catastrophic loss in the accuracy of
the global model when adversaries posing as benign clients join the system.
Hence, recent studies have suggested Byzantine-robust FL methods that allow
the server to train an accurate global model even when adversaries are
present. However, many existing methods require knowledge of the number of
malicious clients or an auxiliary (clean) dataset, and their effectiveness
reportedly degrades sharply when the private data are not independently and
identically distributed (non-IID). In this work, we propose FLGuard, a novel
Byzantine-robust FL method that detects malicious clients and discards their
malicious local updates by utilizing contrastive learning, a self-supervised
technique that has shown remarkable improvements. With contrastive models, we
design FLGuard as an ensemble scheme to maximize the defensive capability. We
evaluate FLGuard extensively under various poisoning attacks and compare the
accuracy of the global model with that of existing Byzantine-robust FL methods.
FLGuard outperforms state-of-the-art defense methods in most cases and shows a
drastic improvement, especially in non-IID settings.
https://github.com/201younghanlee/FLGuard
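The abstract outlines the defense only at a high level, so the following is a minimal sketch of one plausible reading: the server flattens each local update, embeds it with an ensemble of contrastive encoders, splits the embeddings into two clusters, treats the larger cluster as benign, and averages only the updates accepted by every encoder. The encoder callables (random projections in the toy usage), the 2-means split, and the unanimity rule are illustrative assumptions rather than the authors' implementation; the repository above contains the real one.

    # Illustrative sketch only -- not the FLGuard implementation.
    import numpy as np
    from sklearn.cluster import KMeans

    def filter_updates(updates, encoders):
        """updates: (n_clients, dim) flattened local updates.
        encoders: callables mapping updates to embeddings, standing in for
        pretrained contrastive models."""
        accepted = np.ones(updates.shape[0], dtype=bool)
        for encode in encoders:
            z = encode(updates)                              # (n_clients, d) embeddings
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(z)
            benign = np.argmax(np.bincount(labels))          # majority cluster = benign
            accepted &= (labels == benign)                   # ensemble keeps unanimous picks
        return accepted

    def aggregate(updates, encoders):
        keep = filter_updates(updates, encoders)
        return updates[keep].mean(axis=0)                    # FedAvg over surviving clients

    # Toy usage: random projections stand in for contrastive encoders.
    rng = np.random.default_rng(0)
    updates = rng.normal(size=(20, 1000))
    updates[:4] += 5.0                                       # four crafted outlier updates
    encoders = [lambda x, p=rng.normal(size=(1000, 32)): x @ p for _ in range(2)]
    print(aggregate(updates, encoders).shape)                # (1000,)

In the paper the encoders are obtained with contrastive (self-supervised) training; the random projections here merely keep the toy self-contained.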
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model. A generic sketch of such frequency-domain filtering appears after this list.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z) - SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks [12.580891810557482]
Federated learning (FL) is attractive for privacy-preserving training on distributed data.
We propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of a locally purified model.
We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks.
arXiv Detail & Related papers (2023-09-19T13:31:33Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from the clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique to improve current popular FL schemes so that they resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z) - FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z) - Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large quantities to amplify the impact of their model updates in the model aggregation.
Existing FL defense methods, while all handling malicious model updates, either treat all reported quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Towards Understanding Quality Challenges of the Federated Learning: A
First Look from the Lens of Robustness [4.822471415125479]
Federated learning (FL) aims to preserve users' data privacy while leveraging the entire dataset of all participants for training.
FL still tends to suffer from quality issues such as attacks or Byzantine faults.
This paper investigates the effectiveness of state-of-the-art (SOTA) robust FL techniques in the presence of attacks and faults.
arXiv Detail & Related papers (2022-01-05T02:06:39Z)
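Several of the defenses above operate on transformed rather than raw updates; as referenced in the FreqFed entry, here is a generic sketch of frequency-domain filtering in that spirit. It assumes flattened updates are mapped to DCT coefficients, clients whose low-frequency spectra lie far from the coordinate-wise median are dropped, and the survivors are averaged. The keep_frac cutoff and the median-distance rule are illustrative assumptions and do not reproduce FreqFed's actual aggregation.

    # Generic frequency-domain filtering sketch (not the FreqFed algorithm).
    import numpy as np
    from scipy.fft import dct

    def freq_filter_aggregate(updates, keep_frac=0.1):
        """updates: (n_clients, dim) flattened local updates."""
        coeffs = dct(updates, axis=1, norm="ortho")        # per-client DCT spectrum
        k = max(1, int(keep_frac * updates.shape[1]))
        low = coeffs[:, :k]                                # low-frequency components
        center = np.median(low, axis=0)                    # robust reference spectrum
        dist = np.linalg.norm(low - center, axis=1)        # distance to that reference
        keep = dist <= np.median(dist)                     # drop the farther half
        return updates[keep].mean(axis=0)                  # average surviving updates

    # Toy usage: two scaled-up updates mimic a simple model-poisoning attempt.
    rng = np.random.default_rng(1)
    ups = rng.normal(size=(10, 512))
    ups[:2] *= 10.0
    print(freq_filter_aggregate(ups).shape)                # (512,)

The low-frequency slice gives a compact per-client signature, which keeps the comparison cheap even for large models.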