Federated Learning Nodes Can Reconstruct Peers' Image Data
- URL: http://arxiv.org/abs/2410.04661v1
- Date: Mon, 7 Oct 2024 00:18:35 GMT
- Title: Federated Learning Nodes Can Reconstruct Peers' Image Data
- Authors: Ethan Wilson, Kai Yue, Chau-Wai Wong, Huaiyu Dai
- Abstract summary: Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data.
Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server.
We show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk.
- Score: 27.92271597111756
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data and periodically average weight updates to benefit from other nodes' training. Each node's goal is to collaborate with other nodes to improve the model's performance while keeping its training data private. However, this framework does not guarantee data privacy. Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server. In this work, we show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk. We demonstrate that a single client can silently reconstruct other clients' private images using diluted information available within consecutive updates. We leverage state-of-the-art diffusion models to enhance the perceptual quality and recognizability of the reconstructed images, further demonstrating the risk of information leakage at a semantic level. This highlights the need for more robust privacy-preserving mechanisms that protect against silent client-side attacks during federated training.
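To make the threat concrete, here is a minimal sketch of the kind of client-side attack the abstract describes, assuming plain FedAvg with a known learning rate and client count. The function names, shapes, and hyperparameters are illustrative, and the paper's diffusion-based refinement stage is omitted entirely:

```python
import torch
import torch.nn.functional as F

def estimate_peer_gradient(w_t, w_t1, own_update, num_clients, lr):
    """Recover the peers' aggregate gradient from two consecutive global
    models, subtracting out our own (known) contribution under FedAvg."""
    peer_grads = []
    for wt, wt1, mine in zip(w_t, w_t1, own_update):
        total = (wt - wt1) / lr  # average update implied by this round
        peer = (total * num_clients - mine) / (num_clients - 1)
        peer_grads.append(peer)
    return peer_grads

def invert_gradient(model, target_grads, image_shape, label, steps=2000):
    """Standard gradient-inversion loop: optimize a dummy image until its
    gradient matches the estimated peer gradient. `label` is a LongTensor
    of shape (1,)."""
    dummy = torch.randn(1, *image_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return dummy.detach()
```

In this sketch, a participating client differences two consecutive global models it legitimately receives, removes its own contribution, and runs a standard gradient-inversion loop against the remainder; the paper's actual attack must additionally cope with information diluted across many peers and rounds.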
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- No Vandalism: Privacy-Preserving and Byzantine-Robust Federated Learning [18.1129191782913]
Federated learning allows several clients to train one machine learning model jointly without sharing private data, providing privacy protection.
Traditional federated learning is vulnerable to poisoning attacks, which can not only degrade model performance but also implant malicious backdoors.
In this paper, we aim to build a privacy-preserving and Byzantine-robust federated learning scheme to provide an environment with no vandalism (NoV) against attacks from malicious participants.
arXiv Detail & Related papers (2024-06-03T07:59:10Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- PPIDSG: A Privacy-Preserving Image Distribution Sharing Scheme with GAN in Federated Learning [2.0507547735926424]
Federated learning (FL) has attracted growing attention since it allows for privacy-preserving collaborative training on decentralized clients.
Recent works have revealed that it still risks exposing private data to adversaries.
We propose a privacy-preserving image distribution sharing scheme with GAN (PPIDSG)
arXiv Detail & Related papers (2023-12-16T08:32:29Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL suffers from the cross-client generative adversarial networks (GANs)-based attack (the C-GANs attack).
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or data properties, or even to reconstruct participant data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
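As a rough illustration of that claim (a construction assumed here, not necessarily the paper's exact pipeline), a logistic-regression attacker can be fit on flattened aggregated updates, labeled by whether the target client's property was present in each round:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_property_inferrer(aggregated_updates, property_labels):
    """aggregated_updates: one flattened update vector per training round;
    property_labels: 1 if the target client's property was present that round."""
    X = np.stack([np.ravel(u) for u in aggregated_updates])
    y = np.asarray(property_labels)
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, y)  # clf.predict_proba on a new round scores the property
```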
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model update based federated averaging algorithm to defend against Byzantine attacks.
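The entry does not spell out the algorithm; as a generic stand-in for robust update aggregation (coordinate-wise median, explicitly not the paper's method), a server-side step might look like:

```python
import torch

def median_aggregate(client_updates):
    """client_updates: list of same-shaped update tensors, one per client.
    The coordinate-wise median discards the extreme values a Byzantine
    client can inject, unlike a plain mean."""
    return torch.stack(client_updates).median(dim=0).values
```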
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
- Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using two datasets, that our privacy-preserving proposal can reduce communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
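A minimal sketch of the general recipe this entry points at, with illustrative (uncalibrated) parameters: clip and noise the update for differential privacy, then transmit only the top-k coordinates:

```python
import torch

def compress_private_update(update, clip_norm=1.0, noise_std=0.1, k_frac=0.05):
    flat = update.flatten()
    scale = torch.clamp(clip_norm / (flat.norm() + 1e-12), max=1.0)
    flat = flat * scale                               # bound sensitivity
    flat = flat + noise_std * torch.randn_like(flat)  # Gaussian mechanism
    k = max(1, int(k_frac * flat.numel()))
    _, idx = flat.abs().topk(k)                       # keep largest coordinates
    return idx, flat[idx]                             # ~95% smaller at k_frac=0.05
```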
arXiv Detail & Related papers (2020-11-10T13:11:03Z)
- Inverting Gradients -- How easy is it to break privacy in federated learning? [13.632998588216523]
Federated learning is designed to collaboratively train a neural network on a server.
Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data.
Previous attacks have provided a false sense of security by succeeding only in contrived settings.
We show that it is actually possible to faithfully reconstruct images at high resolution from knowledge of their parameter gradients.
arXiv Detail & Related papers (2020-03-31T09:35:02Z)
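The key ingredient popularized by this line of work is a cosine-similarity matching objective with a smoothness prior; a hedged sketch (shapes and the TV weight are assumptions, and `dummy` is a (1, C, H, W) image being optimized):

```python
import torch
import torch.nn.functional as F

def cosine_match_loss(model, dummy, label, observed_grads, tv_weight=1e-4):
    """1 - cos(grad(dummy), observed) plus a total-variation prior, so the
    reconstruction matches gradient *direction* and stays image-like."""
    loss = F.cross_entropy(model(dummy), label)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    num = sum((g * o).sum() for g, o in zip(grads, observed_grads))
    den = (sum((g * g).sum() for g in grads).sqrt() *
           sum((o * o).sum() for o in observed_grads).sqrt())
    tv = ((dummy[..., 1:, :] - dummy[..., :-1, :]).abs().mean() +
          (dummy[..., :, 1:] - dummy[..., :, :-1]).abs().mean())
    return 1 - num / (den + 1e-12) + tv_weight * tv
```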
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.