Client-side Gradient Inversion Against Federated Learning from Poisoning
- URL: http://arxiv.org/abs/2309.07415v1
- Date: Thu, 14 Sep 2023 03:48:27 GMT
- Title: Client-side Gradient Inversion Against Federated Learning from Poisoning
- Authors: Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan,
Kok-Leong Ong, Jun Zhang and Yang Xiang
- Abstract summary: Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from the client side.
- Score: 59.74484221875662
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) enables distributed participants (e.g., mobile
devices) to train a global model without sharing data directly with a central
server. Recent studies have revealed that FL is vulnerable to the gradient
inversion attack (GIA), which aims to reconstruct the original training samples
and poses a high risk to the privacy of clients in FL. However, most existing
GIAs necessitate control over the server and rely on strong prior knowledge,
including batch normalization and data distribution information. In this work,
we propose Client-side poisoning Gradient Inversion (CGI), a novel attack
method that can be launched from the client side. For the first time, we show
that a client-side adversary with limited knowledge can recover training
samples from the aggregated global model. We take
a distinct approach in which the adversary utilizes a malicious model that
amplifies the loss of a specific targeted class of interest. When honest
clients employ the poisoned global model, the gradients of samples belonging to
the targeted class are magnified, making them the dominant factor in the
aggregated update. This enables the adversary to effectively reconstruct the
private input belonging to other clients using the aggregated update. In
addition, our CGI is also able to remain stealthy against
Byzantine-robust aggregation rules (AGRs). By optimizing malicious updates and
blending benign updates with a malicious replacement vector, our method remains
undetected by these defense mechanisms. To evaluate the performance of CGI, we
conduct experiments on various benchmark datasets, considering representative
Byzantine-robust AGRs, and exploring diverse FL settings with different levels
of adversary knowledge about the data. Our results demonstrate that CGI
consistently and successfully extracts training inputs in all tested scenarios.
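The abstract does not give the exact poisoning objective, but the mechanism it describes can be sketched. The snippet below is a minimal, illustrative sketch rather than the authors' implementation: it assumes a PyTorch classifier and simulates (1) how an honest client's gradient becomes dominated by a targeted class whose loss the poisoned global model amplifies, and (2) a standard gradient-matching (DLG-style) reconstruction of that class's input from the aggregated update. The amplification factor, optimizer settings, and helper names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def honest_client_grads(model, x, y, targeted_class, amplification=100.0):
    # Gradient an honest client computes on the (poisoned) global model.
    # A CGI-style poisoned model is assumed to incur a much larger loss on
    # samples of `targeted_class`; `amplification` only simulates that effect.
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    weights = 1.0 + (y == targeted_class).float() * (amplification - 1.0)
    loss = (weights * per_sample).mean()
    return torch.autograd.grad(loss, model.parameters())

def invert_aggregated_update(model, agg_grads, targeted_class, input_shape, steps=2000):
    # DLG-style gradient matching: optimize a dummy input so that its gradient
    # on the global model matches the targeted-class-dominated aggregated update.
    x_hat = torch.randn(1, *input_shape, requires_grad=True)
    y_hat = torch.tensor([targeted_class])
    opt = torch.optim.Adam([x_hat], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        dummy_loss = F.cross_entropy(model(x_hat), y_hat)
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
        match = sum(((dg - g.detach()) ** 2).sum() for dg, g in zip(dummy_grads, agg_grads))
        match.backward()
        opt.step()
    return x_hat.detach()
```

In this sketch, the larger the loss amplification on the targeted class, the closer the aggregated update is to the gradient of that class's samples alone, which is what makes the gradient-matching step feasible for a client-side adversary.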
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique (see the sketch after this list).
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Overcoming Catastrophic Forgetting in Federated Class-Incremental Learning via Federated Global Twin Generator [13.808765929040677]
Federated Global Twin Generator (FedGTG) is an FCIL framework that exploits privacy-preserving generative-model training on the global side without accessing client data.
We analyze the robustness of FedGTG on natural images, as well as its ability to converge to flat local minima and achieve better-calibrated predictive confidence (calibration).
Experimental results on CIFAR-10, CIFAR-100, and tiny-ImageNet demonstrate the improvements in accuracy and forgetting measures of FedGTG compared to previous frameworks.
arXiv Detail & Related papers (2024-07-13T08:23:21Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- FedBayes: A Zero-Trust Federated Learning Aggregation to Defend Against Adversarial Attacks [1.689369173057502]
Federated learning has created a decentralized method to train a machine learning model without needing direct access to client data.
However, malicious clients are able to corrupt the global model and degrade performance across all clients within a federation.
Our novel aggregation method, FedBayes, mitigates the effect of a malicious client by calculating the probabilities of a client's model weights.
arXiv Detail & Related papers (2023-12-04T21:37:50Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership or properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- A Framework for Evaluating Gradient Leakage Attacks in Federated Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
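As a side note on the double-masking technique mentioned in the ACCESS-FL entry above, the following toy sketch (a simplification, not SecAgg's or ACCESS-FL's actual protocol; key agreement, secret sharing, and dropout handling are omitted) illustrates why the server learns only the sum of client updates: each pair of clients applies a shared mask with opposite signs, so the pairwise masks cancel in the aggregate and only the per-client self-masks remain to be removed.

```python
import numpy as np

def masked_update(client_id, update, pairwise_seeds, self_seed):
    # Add a self-mask plus one pairwise mask per peer; the client with the
    # smaller id adds the shared mask and the other subtracts it, so pairwise
    # masks cancel when the server sums all masked updates.
    masked = update + np.random.default_rng(self_seed).normal(size=update.shape)
    for peer_id, seed in pairwise_seeds.items():
        mask = np.random.default_rng(seed).normal(size=update.shape)
        masked += mask if client_id < peer_id else -mask
    return masked

# Toy usage with three clients (seeds would come from pairwise key agreement).
rng = np.random.default_rng(0)
updates = {c: rng.normal(size=4) for c in range(3)}
pair_seed = {frozenset(p): int(rng.integers(1 << 30)) for p in [(0, 1), (0, 2), (1, 2)]}
self_seed = {c: 100 + c for c in range(3)}

masked = [
    masked_update(c, updates[c],
                  {p: pair_seed[frozenset((c, p))] for p in range(3) if p != c},
                  self_seed[c])
    for c in range(3)
]
# The server removes only the self-masks (recovered via secret sharing in the
# real protocol); the pairwise masks have already cancelled in the sum.
recovered_sum = sum(masked) - sum(
    np.random.default_rng(self_seed[c]).normal(size=4) for c in range(3)
)
assert np.allclose(recovered_sum, sum(updates.values()))
```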