Mitigating Cross-client GANs-based Attack in Federated Learning
- URL: http://arxiv.org/abs/2307.13314v1
- Date: Tue, 25 Jul 2023 08:15:55 GMT
- Title: Mitigating Cross-client GANs-based Attack in Federated Learning
- Authors: Hong Huang and Xinyu Lei and Tao Xiang
- Abstract summary: Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique to improve current popular FL schemes so they resist the C-GANs attack.
- Score: 78.06700142712353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning makes multimedia data (e.g., images) more attractive; however, multimedia data is usually distributed and privacy-sensitive. Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model without having to share their private samples with any third-party entities. In this paper, we show that FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack, in which a malicious client (i.e., the adversary) can reconstruct samples with the same distribution as the training samples of other clients (i.e., the victims). Since a benign client's data can be leaked to the adversary, this attack poses a risk of local data leakage in many security-critical FL applications. We therefore propose the Fed-EDKD (Federated Ensemble Data-free Knowledge Distillation) technique to harden current popular FL schemes against the C-GANs attack. In Fed-EDKD, each client submits a local model to the server, which combines them into an ensemble global model. Then, to avoid model expansion, Fed-EDKD applies data-free knowledge distillation to transfer knowledge from the ensemble global model to a compressed model. In this way, Fed-EDKD reduces the adversary's control over the global model and thus effectively mitigates the C-GANs attack. Finally, experimental results demonstrate that Fed-EDKD significantly mitigates the C-GANs attack while incurring only a slight accuracy degradation in FL.
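The pipeline above has two server-side steps: build an ensemble from the submitted client models, then compress it with data-free knowledge distillation so the released model is not any single client's parameters. Below is a minimal PyTorch sketch of that two-step idea; the averaged-logit ensemble, the fixed pre-trained generator, and the KL-based student loss are illustrative assumptions, not the authors' exact Fed-EDKD design.

```python
# Hedged sketch of ensemble + data-free distillation (assumptions: logits are
# averaged across clients, and a fixed generator supplies synthetic inputs;
# real data-free KD often trains the generator adversarially as well).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnsembleGlobalModel(nn.Module):
    """Global model that averages the logits of all submitted client models."""
    def __init__(self, client_models):
        super().__init__()
        self.clients = nn.ModuleList(client_models)

    def forward(self, x):
        return torch.stack([m(x) for m in self.clients]).mean(dim=0)

def distill_data_free(ensemble, student, generator, steps=1000, zdim=100):
    """Transfer the ensemble's knowledge to a compressed student model
    without touching any real client data."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(64, zdim)
        x = generator(z).detach()              # synthetic transfer batch
        with torch.no_grad():
            teacher = F.softmax(ensemble(x), dim=1)
        loss = F.kl_div(F.log_softmax(student(x), dim=1),
                        teacher, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```

Because the released model is distilled from an ensemble on the server, a single adversarial client's parameters no longer map directly into the global model, which is the control the C-GANs attack exploits.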
Related papers
- Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling [6.260747047974035]
Federated Learning (FL) enables clients to train a joint model without disclosing their local data.
Recently, the source inference attack (SIA) has been proposed, in which an honest-but-curious central server tries to identify exactly which client owns a specific data record.
We propose a defense against SIAs using a trusted shuffler, without compromising the accuracy of the joint model (a minimal shuffling sketch follows this entry).
arXiv Detail & Related papers (2024-11-10T13:17:11Z)
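The defense above hinges on a mechanically simple step: the trusted shuffler forwards client messages in a random order, so the server cannot attribute an update to its sender. A minimal sketch of that step follows; treating updates as opaque byte strings is an assumption, and the paper's unary encoding is not reproduced here.

```python
# Illustrative trusted-shuffler step (assumption: updates arrive as opaque
# byte strings; the unary encoding from the paper is not shown).
import secrets

def shuffle_updates(updates: list) -> list:
    """Return client updates in cryptographically random order,
    severing the link between each update and the client that sent it."""
    out = list(updates)
    for i in range(len(out) - 1, 0, -1):   # Fisher-Yates with a CSPRNG
        j = secrets.randbelow(i + 1)
        out[i], out[j] = out[j], out[i]
    return out
```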
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to the gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients (a generic inversion sketch follows this entry).
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
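Gradient inversion attacks in this family typically recover training data by optimizing a dummy input until its gradient matches an observed one. The sketch below shows that generic objective only; it is not CGI's specific client-side poisoning procedure, which the summary does not detail, and the observed gradient `g_obs` is assumed to be available to the attacker.

```python
# Generic gradient-inversion objective (an assumption-level illustration;
# CGI's poisoning-specific steps are not reproduced here).
import torch
import torch.nn.functional as F

def invert_gradient(model, g_obs, label, x_shape, steps=300, lr=0.1):
    """Optimize a dummy input so its gradient matches the observed one."""
    x = torch.randn(1, *x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(x), label)
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - go) ** 2).sum() for g, go in zip(grads, g_obs))
        opt.zero_grad()
        match.backward()
        opt.step()
    return x.detach()
```

Under these assumptions, `invert_gradient(model, g_obs, torch.tensor([y]), (3, 32, 32))` would return a candidate reconstruction of a 32x32 RGB training sample.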
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class-prototype similarity distillation algorithm that aligns the local and global models in a federated framework (a minimal logit-alignment sketch follows this entry).
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
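The growing logit gap noted above is exactly what a distillation term can close. Below is a minimal sketch of a temperature-scaled logit-alignment loss; this is a generic knowledge-distillation formulation assumed for illustration, and FedCSD's class-prototype similarity weighting is not reproduced.

```python
# Generic logit-alignment distillation term (assumption: plain KL divergence
# at temperature T; FedCSD's class-prototype similarity weighting is omitted).
import torch.nn.functional as F

def logit_alignment_loss(local_logits, global_logits, T=2.0):
    """KL divergence pulling local-model logits toward the global model's
    soft predictions; added to the usual task loss during local training."""
    return F.kl_div(F.log_softmax(local_logits / T, dim=1),
                    F.softmax(global_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
```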
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- FedDefender: Backdoor Attack Defense in Federated Learning [0.0]
Federated Learning (FL) is a privacy-preserving distributed machine learning technique.
We propose FedDefender, a defense mechanism against targeted poisoning attacks in FL.
arXiv Detail & Related papers (2023-07-02T03:40:04Z)
- Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning [0.0]
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables models to be trained collaboratively across clients without sharing private data.
In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z)
- FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL; at the same time, the model must be protected from poisoning attacks by adversarial clients.
We present FedPerm, a new FL algorithm that addresses both problems by combining a novel intra-model parameter shuffling technique, which amplifies data privacy, with Private Information Retrieval (PIR)-based techniques that permit cryptographic aggregation of clients' model updates (a permutation sketch follows this entry).
arXiv Detail & Related papers (2022-08-16T19:40:28Z)
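FedPerm's privacy amplification comes from shuffling parameters inside a model update before upload. The sketch below permutes a flattened update using a permutation derived from a client-held seed; the seed-based keying is an assumption for illustration, and the PIR-based cryptographic aggregation is not shown.

```python
# Illustrative intra-model parameter shuffling (assumptions: the update is a
# flattened tensor and the permutation is keyed by a client-held seed;
# FedPerm's PIR-based aggregation is not reproduced).
import torch

def permute_update(flat_update: torch.Tensor, seed: int) -> torch.Tensor:
    """Shuffle a flattened update with a seed-derived permutation."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(flat_update.numel(), generator=g)
    return flat_update[perm]

def unpermute_update(flat_update: torch.Tensor, seed: int) -> torch.Tensor:
    """Invert the seed-derived permutation, restoring parameter order."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(flat_update.numel(), generator=g)
    out = torch.empty_like(flat_update)
    out[perm] = flat_update
    return out
```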
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large data quantities to amplify the impact of their updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose FedRA, a robust quantity-aware aggregation algorithm for federated learning that performs aggregation with awareness of local data quantities (a weighting sketch follows this entry).
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
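The weakness described above is that self-reported data quantities enter the aggregation weights unchecked. Below is a minimal sketch of quantity-aware aggregation with a robust cap on reported quantities; the percentile-clipping rule is an illustrative assumption, not FedRA's published algorithm.

```python
# Quantity-aware aggregation with capped reported quantities (the percentile
# clipping rule is an illustrative assumption, not FedRA's exact method).
import torch

def quantity_aware_aggregate(updates, quantities, clip_pct=0.8):
    """Weighted-average client updates, clipping reported quantities at a
    percentile so no client can claim an outsized aggregation weight."""
    q = torch.tensor(quantities, dtype=torch.float32)
    cap = torch.quantile(q, clip_pct)   # robust cap on claimed quantities
    w = torch.minimum(q, cap)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))
```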