Client-Level Differential Privacy via Adaptive Intermediary in Federated Medical Imaging
- URL: http://arxiv.org/abs/2307.12542v2
- Date: Mon, 15 Jan 2024 16:18:13 GMT
- Title: Client-Level Differential Privacy via Adaptive Intermediary in Federated Medical Imaging
- Authors: Meirui Jiang, Yuan Zhong, Anjie Le, Xiaoxiao Li, Qi Dou
- Abstract summary: The trade-off of differential privacy (DP) between privacy protection and performance is still underexplored for real-world medical scenarios.
We propose to optimize the trade-off in the context of client-level DP, which focuses on privacy during communications.
We propose an adaptive intermediary strategy to improve performance without harming privacy.
- Score: 33.494287036763716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent progress in enhancing the privacy of federated learning (FL)
via differential privacy (DP), the trade-off of DP between privacy protection
and performance is still underexplored for real-world medical scenarios. In this
paper, we propose to optimize the trade-off in the context of client-level
DP, which focuses on privacy during communications. However, FL for medical
imaging typically involves far fewer participants (hospitals) than other
domains (e.g., mobile devices), so ensuring that clients are differentially
private is much more challenging. To tackle this problem, we propose an adaptive
intermediary strategy to improve performance without harming privacy.
Specifically, we theoretically show that splitting clients into sub-clients,
which serve as intermediaries between hospitals and the server, can mitigate the
noise introduced by DP without harming privacy. Our proposed approach is
empirically evaluated on both classification and segmentation tasks using two
public datasets, and its effectiveness is demonstrated with significant
performance improvements and comprehensive analytical studies. Code is
available at: https://github.com/med-air/Client-DP-FL.
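To make the adaptive-intermediary idea concrete, below is a minimal numerical sketch (Python/NumPy) of client-level DP aggregation with and without sub-client splitting. The function names, the Gaussian noise calibration (noise_mult * clip_norm / number of updates, as in standard DP-FedAvg), and the simulated per-shard updates are our assumptions for illustration rather than the authors' implementation; the abstract's claim that splitting does not harm privacy is taken as given here, and the repository linked above holds the actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, clip_norm):
    """Scale an update so its L2 norm is at most clip_norm."""
    return update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))

def dp_average(updates, clip_norm, noise_mult):
    """Client-level DP aggregation in the DP-FedAvg style: clip each
    participant's update, average, and add Gaussian noise with scale
    noise_mult * clip_norm / (number of participants)."""
    clipped = np.stack([clip(u, clip_norm) for u in updates])
    noise = rng.normal(0.0, noise_mult * clip_norm / len(updates),
                       size=clipped.shape[1])
    return clipped.mean(axis=0) + noise

# Few participants (hospitals): the noise scale shrinks only with 1/n.
hospitals = [rng.normal(size=100) for _ in range(5)]
noisy_avg = dp_average(hospitals, clip_norm=1.0, noise_mult=1.0)

# Intermediary intuition: each hospital is split into k sub-clients,
# each submitting its own clipped update, so the server now averages
# n * k updates and the same noise multiplier perturbs the result k
# times less. Per-shard updates are simulated here; in the paper each
# sub-client would train on a shard of its hospital's data.
k = 4
sub_clients = [u + rng.normal(scale=0.1, size=100)
               for u in hospitals for _ in range(k)]
less_noisy_avg = dp_average(sub_clients, clip_norm=1.0, noise_mult=1.0)
```

With n hospitals, the noise standard deviation on the average shrinks as 1/n; splitting each hospital into k sub-clients averages n * k updates, so the same multiplier perturbs the model roughly k times less, which matches the intuition we read from the abstract.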
Related papers
- Federated Learning With Individualized Privacy Through Client Sampling [2.0432201743624456]
We propose an adapted method for enabling Individualized Differential Privacy (IDP) in Federated Learning (FL).
We calculate client-specific sampling rates based on their heterogeneous privacy budgets and integrate them into a modified IDP-FedAvg algorithm (a toy sketch of budget-scaled sampling appears after this list).
The experimental results demonstrate that our approach achieves clear improvements over uniform DP baselines, reducing the trade-off between privacy and utility.
arXiv Detail & Related papers (2025-01-29T13:11:21Z)
- Can large language models be privacy preserving and fair medical coders? [13.49769820767045]
Differential privacy (DP) is a common method for preserving privacy in such settings.
We examine two key trade-offs in applying DP to the NLP task of medical coding (ICD classification).
arXiv Detail & Related papers (2024-12-07T04:27:05Z)
- Towards Privacy-Preserving Medical Imaging: Federated Learning with Differential Privacy and Secure Aggregation Using a Modified ResNet Architecture [0.0]
This research introduces a federated learning framework that combines local differential privacy and secure aggregation.
We also propose DPResNet, a modified ResNet architecture optimized for differential privacy.
arXiv Detail & Related papers (2024-12-01T05:52:29Z)
- Federated Instruction Tuning of LLMs with Domain Coverage Augmentation [87.49293964617128]
Federated Domain-specific Instruction Tuning (FedDIT) utilizes limited cross-client private data together with various strategies of instruction augmentation.
We propose FedDCA, which optimizes domain coverage through greedy client center selection and retrieval-based augmentation.
For client-side computational efficiency and system scalability, FedDCA*, a variant of FedDCA, uses heterogeneous encoders with server-side feature alignment.
arXiv Detail & Related papers (2024-09-30T09:34:31Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Prompt-based Personalized Federated Learning for Medical Visual Question Answering [56.002377299811656]
We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns.
We regard medical datasets from different organs as clients and use pFL to train personalized transformer-based VQA models for each client.
arXiv Detail & Related papers (2024-02-15T03:09:54Z)
- Differential Privacy for Adaptive Weight Aggregation in Federated Tumor Segmentation [0.16746114653388383]
Federated Learning (FL) is a distributed machine learning approach that safeguards privacy by creating an impartial global model while respecting the privacy of individual client data.
We present a differential privacy (DP) federated deep learning framework in medical image segmentation.
We extend our similarity weight aggregation (SimAgg) method to the DP-SimAgg algorithm, a differentially private similarity-weighted aggregation algorithm for brain tumor segmentation.
arXiv Detail & Related papers (2023-08-01T21:59:22Z)
- "Am I Private and If So, how Many?" -- Using Risk Communication Formats for Making Differential Privacy Understandable [0.0]
We adapt risk communication formats in conjunction with a model for the privacy risks of Differential Privacy.
We evaluate these novel privacy communication formats in a crowdsourced study.
arXiv Detail & Related papers (2022-04-08T13:30:07Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
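Returning to the entry on individualized privacy through client sampling: below is the toy sketch referenced there. The linear budget-to-rate scaling and all names are hypothetical; the actual IDP-FedAvg method derives client-specific sampling rates from DP accounting over heterogeneous budgets, which this heuristic does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_rates(epsilons, base_rate):
    """Hypothetical heuristic: give each client a per-round sampling
    probability proportional to its privacy budget, so clients with a
    stricter (smaller) epsilon participate in fewer rounds."""
    eps = np.asarray(epsilons, dtype=float)
    return base_rate * eps / eps.max()

def sample_round(rates):
    """Poisson sampling: every client joins a round independently with
    its own probability."""
    return [i for i, q in enumerate(rates) if rng.random() < q]

epsilons = [1.0, 2.0, 8.0]                       # heterogeneous per-client budgets
rates = sampling_rates(epsilons, base_rate=0.5)  # -> [0.0625, 0.125, 0.5]
participants = sample_round(rates)               # indices of this round's clients
print(rates, participants)
```

The intent illustrated here is only directional: clients with smaller epsilon (stricter budgets) are sampled less often and so accumulate less privacy loss across rounds.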