Metric Privacy in Federated Learning for Medical Imaging: Improving Convergence and Preventing Client Inference Attacks
- URL: http://arxiv.org/abs/2502.01352v1
- Date: Mon, 03 Feb 2025 13:41:52 GMT
- Title: Metric Privacy in Federated Learning for Medical Imaging: Improving Convergence and Preventing Client Inference Attacks
- Authors: Judith Sáinz-Pardo Díaz, Andreas Athanasiou, Kangsoo Jung, Catuscia Palamidessi, Álvaro López García
- Abstract summary: Federated learning is a distributed learning technique that allows training a global model with the participation of different data owners.
Differential privacy (DP) can be used to privatize the global model by adding noise.
We introduce the notion of metric privacy to mitigate the impact of classical server-side global DP on the convergence of the aggregated model.
- Score: 5.341901833426653
- License:
- Abstract: Federated learning is a distributed learning technique that allows training a global model with the participation of different data owners, without the need to share raw data. This architecture is orchestrated by a central server that aggregates the local models from the clients. The server may be trusted, but not necessarily every node in the network; in that case, differential privacy (DP) can be used to privatize the global model by adding noise. However, this may affect convergence across the rounds of the federated architecture, depending also on the aggregation strategy employed. In this work, we introduce the notion of metric privacy to mitigate the impact of classical server-side global DP on the convergence of the aggregated model. Metric privacy is a relaxation of DP, suitable for domains equipped with a notion of distance. We apply it on the server side, using a distance computed on the differences between the local models. We compare our approach with standard DP by analyzing the impact on six classical aggregation strategies. The proposed methodology is applied to a medical imaging example, and different scenarios are simulated across homogeneous and non-i.i.d. clients. Finally, we introduce a novel client inference attack, in which a semi-honest client tries to determine whether another client participated in the training, and we study how it can be mitigated using DP and metric privacy. Our evaluation shows that metric privacy can increase the performance of the model compared to standard DP, while offering similar protection against client inference attacks.
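The abstract does not spell out the exact noise mechanism, so the following is only a minimal sketch of how server-side metric privacy is commonly realized under a Euclidean metric: the aggregated model vector is perturbed with multivariate Laplace noise whose density decays as exp(-epsilon * ||z||_2). The function names (`metric_laplace_noise`, `privatize_aggregate`), the plain FedAvg averaging, and the choice of the Euclidean metric are illustrative assumptions, not the paper's concrete method.

```python
import numpy as np


def metric_laplace_noise(dim, epsilon, rng):
    """Sample noise with density proportional to exp(-epsilon * ||z||_2), the
    standard additive mechanism for metric privacy under the Euclidean metric."""
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=dim, scale=1.0 / epsilon)  # radius ~ Gamma(dim, 1/epsilon)
    return radius * direction


def privatize_aggregate(client_models, epsilon, seed=None):
    """Server-side sketch: average the flattened client model vectors (plain FedAvg),
    then perturb the aggregate with metric-Laplace noise."""
    rng = np.random.default_rng(seed)
    stacked = np.stack(client_models)            # shape: (n_clients, dim)
    aggregate = stacked.mean(axis=0)
    return aggregate + metric_laplace_noise(stacked.shape[1], epsilon, rng)


# Example: 5 clients with 1000-parameter models, epsilon = 0.5 per unit of L2 distance.
noisy_global = privatize_aggregate([np.random.randn(1000) for _ in range(5)], epsilon=0.5)
```

Under this mechanism, two aggregates at Euclidean distance d are (epsilon * d)-indistinguishable, which is the graceful, distance-dependent guarantee that metric privacy provides; the paper compares this behaviour against standard global DP across six aggregation strategies.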
Related papers
- Optimal Strategies for Federated Learning Maintaining Client Privacy [8.518748080337838]
This paper studies the tradeoff between model performance and communication in a federated learning system.
We show that training for one local epoch per global round of training gives optimal performance while preserving the same privacy budget.
arXiv Detail & Related papers (2025-01-24T12:34:38Z)
- Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling [6.260747047974035]
Federated Learning (FL) enables clients to train a joint model without disclosing their local data.
Recently, the source inference attack (SIA) has been proposed, in which an honest-but-curious central server tries to identify exactly which client owns a specific data record.
We propose a defense against SIAs by using a trusted shuffler, without compromising the accuracy of the joint model.
arXiv Detail & Related papers (2024-11-10T13:17:11Z)
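The entry above names its two building blocks, unary encoding and a trusted shuffler, but not the full protocol. The sketch below only shows how those primitives typically fit together: each client reports a randomized unary encoding of a categorical value, and the shuffler permutes the reports before the server sees them. The function names and the perturbation probabilities `p`/`q` are illustrative assumptions.

```python
import numpy as np


def unary_encode(value, domain_size, p=0.75, q=0.25, rng=None):
    """Randomized unary encoding (a standard local-DP primitive): one-hot encode
    `value`, keep the 1-bit with probability p, and flip each 0-bit to 1 with
    probability q."""
    rng = np.random.default_rng() if rng is None else rng
    one_hot = np.zeros(domain_size, dtype=np.int8)
    one_hot[value] = 1
    flips = rng.random(domain_size)
    return np.where(one_hot == 1, flips < p, flips < q).astype(np.int8)


def shuffle_reports(reports, rng=None):
    """Trusted shuffler: randomly permute the reports so the server cannot link a
    report back to the client that produced it."""
    rng = np.random.default_rng() if rng is None else rng
    stacked = np.stack(reports)
    return stacked[rng.permutation(len(stacked))]


# Example: 10 clients each report one value from a domain of size 4.
reports = [unary_encode(v, domain_size=4) for v in np.random.randint(0, 4, size=10)]
server_view = shuffle_reports(reports)  # the server only sees shuffled, perturbed reports
```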
- Enhancing Federated Learning with Adaptive Differential Privacy and Priority-Based Aggregation [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
However, the model updates transferred between clients and the server can still be accessed, potentially revealing sensitive local information to adversaries.
Differential privacy (DP) offers a promising approach to addressing this issue by adding noise to the parameters.
We propose a personalized DP framework that injects noise based on clients' relative impact factors and aggregates parameters according to client priority.
arXiv Detail & Related papers (2024-06-26T16:55:07Z)
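The summary above says noise is injected according to clients' relative impact factors and that aggregation is priority-based, but not how the two are coupled. The sketch below assumes, purely for illustration, that higher-impact clients receive less noise and a larger aggregation weight; `adaptive_dp_aggregate` and its scaling rule are hypothetical.

```python
import numpy as np


def adaptive_dp_aggregate(updates, impact, base_sigma=1.0, seed=None):
    """Sketch of a personalized-DP aggregation: perturb each client's update with
    Gaussian noise whose scale depends on the client's relative impact factor, then
    take a priority-weighted average. The inverse relation between impact and noise
    scale is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(impact, dtype=float)
    weights = weights / weights.sum()                       # relative impact factors
    noised = [
        u + rng.normal(scale=base_sigma / (w * len(updates)), size=u.shape)
        for u, w in zip(updates, weights)
    ]
    return np.average(np.stack(noised), axis=0, weights=weights)


# Example: four clients with different impact factors.
updates = [np.random.randn(100) for _ in range(4)]
global_update = adaptive_dp_aggregate(updates, impact=[1.0, 2.0, 0.5, 1.5])
```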
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
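The client-side CGI attack itself is not described in enough detail here to reproduce, but the gradient-inversion primitive it builds on, optimizing dummy inputs so that the gradients they induce match observed gradients, is well known. A minimal PyTorch sketch of that generic primitive (not of CGI specifically) follows; `invert_gradients` and its arguments are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def invert_gradients(model, target_grads, input_shape, num_classes, steps=100):
    """Basic gradient-inversion loop (DLG-style): optimize a dummy input and a soft
    dummy label so that the gradients they induce on the model match the observed
    (target) gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # L2 distance between the induced gradients and the gradients the attacker saw.
        grad_diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        optimizer.step(closure)
    return dummy_x.detach()


# Example with a toy model: the attacker observes the gradients of one private sample.
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))
secret_x, secret_y = torch.randn(1, 4, 4), torch.tensor([1])
target_grads = torch.autograd.grad(F.cross_entropy(model(secret_x), secret_y), model.parameters())
reconstruction = invert_gradients(model, target_grads, input_shape=(4, 4), num_classes=3, steps=20)
```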
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
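FedGMM fits the mixture in a federated fashion, which the summary above does not detail. The sketch below only shows the local building blocks the summary does mention: fitting a Gaussian mixture to a client's input features and using the negative log-likelihood as an uncertainty / novel-sample score. The function names and the use of scikit-learn's `GaussianMixture` are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_client_gmm(features, n_components=3):
    """Fit a Gaussian mixture to one client's (featurized) input data."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(features)
    return gmm


def novelty_scores(gmm, samples):
    """Negative log-likelihood under the fitted mixture: higher means the sample looks
    more novel / out-of-distribution, one simple way to read off the uncertainty signal
    mentioned in the summary."""
    return -gmm.score_samples(samples)


# Example: a client fits a mixture on its own data and scores incoming samples.
client_data = np.random.randn(500, 16)
gmm = fit_client_gmm(client_data)
scores = novelty_scores(gmm, np.random.randn(10, 16))
```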
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
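The summary above claims that simple linear models can recover client-specific properties from aggregated updates alone. The sketch below shows that attack shape on synthetic data: per-round aggregated update vectors are the features, the presence of the property is the label, and a logistic regression is the attack model. The data generation, setup, and names are purely illustrative stand-ins, not the paper's experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical attacker setup (synthetic stand-in):
#   X[i] : the aggregated model update observed in simulated round i (flattened)
#   y[i] : whether the target client-specific property was present in that round
rng = np.random.default_rng(0)
n_rounds, dim = 200, 64
property_footprint = rng.normal(size=dim)              # stand-in signal left by the property
y = rng.integers(0, 2, size=n_rounds)
X = rng.normal(size=(n_rounds, dim)) + np.outer(y, 0.5 * property_footprint)

# A simple linear model over the aggregated updates, as the summary describes.
attack = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out attack accuracy:", attack.score(X[150:], y[150:]))
```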
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
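The summary above mentions online knowledge distillation with a contrastive loss over shared representations, without giving the loss. One plausible instantiation is an NT-Xent-style objective in which the shared (peer) representation of the same sample is the positive and other samples in the batch are negatives; the sketch below encodes that assumption, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def contrastive_distillation_loss(local_repr, shared_repr, temperature=0.1):
    """NT-Xent-style objective: for each sample, the shared representation of the same
    sample is the positive and the other shared representations in the batch are
    negatives, pulling the client's representations toward the shared ones."""
    local_repr = F.normalize(local_repr, dim=-1)
    shared_repr = F.normalize(shared_repr, dim=-1)
    logits = local_repr @ shared_repr.t() / temperature   # (batch, batch) cosine similarities
    targets = torch.arange(local_repr.size(0))            # positives sit on the diagonal
    return F.cross_entropy(logits, targets)


# Example: a batch of 32 samples with 128-dimensional representations.
loss = contrastive_distillation_loss(torch.randn(32, 128), torch.randn(32, 128))
```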
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model-update-based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
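The summary above does not say which rule makes the model-update-based averaging robust to Byzantine clients, so the sketch below substitutes a coordinate-wise median, a common Byzantine-robust aggregator, purely to illustrate the idea of replacing the plain mean.

```python
import numpy as np


def robust_aggregate(client_updates):
    """Coordinate-wise median of the clients' model updates: unlike a plain mean,
    a single extreme (Byzantine) update cannot move the result arbitrarily."""
    return np.median(np.stack(client_updates), axis=0)


# Example: one Byzantine client submits an extreme update; the median ignores it.
honest = [np.random.randn(100) for _ in range(9)]
byzantine = [np.full(100, 1e6)]
aggregated = robust_aggregate(honest + byzantine)
```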
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
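As a rough illustration of the two ingredients named in the summary above: a diagonal Laplace approximation turns a client's trained model into a Gaussian posterior, and the server can combine the clients' Gaussians with a precision-weighted, product-of-Gaussians rule. The paper's online variant and exact aggregation may differ; the functions below are a hedged sketch.

```python
import numpy as np


def diagonal_laplace_posterior(map_params, per_sample_grads, prior_precision=1.0):
    """Diagonal Laplace approximation of a client's posterior: a Gaussian centred at
    the MAP estimate whose precision is the prior precision plus the (scaled) diagonal
    empirical Fisher, i.e. the mean of squared per-sample gradients."""
    n_samples = per_sample_grads.shape[0]
    fisher_diag = (per_sample_grads ** 2).mean(axis=0)
    precision = prior_precision + n_samples * fisher_diag
    return map_params, precision


def aggregate_gaussians(means, precisions):
    """Server-side product-of-Gaussians rule: precisions add up, and the global mean is
    the precision-weighted average of the client means."""
    total_precision = np.sum(precisions, axis=0)
    global_mean = np.sum([p * m for m, p in zip(means, precisions)], axis=0) / total_precision
    return global_mean, total_precision


# Example: combine two clients' approximate posteriors over a 10-parameter model.
mean, precision = aggregate_gaussians(
    means=[np.zeros(10), np.ones(10)],
    precisions=[np.ones(10), 2 * np.ones(10)],
)
```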