Client-Level Differential Privacy via Adaptive Intermediary in Federated
Medical Imaging
- URL: http://arxiv.org/abs/2307.12542v2
- Date: Mon, 15 Jan 2024 16:18:13 GMT
- Title: Client-Level Differential Privacy via Adaptive Intermediary in Federated
Medical Imaging
- Authors: Meirui Jiang, Yuan Zhong, Anjie Le, Xiaoxiao Li, Qi Dou
- Abstract summary: The trade-off of differential privacy (DP) between privacy protection and performance remains underexplored for real-world medical scenarios.
We propose to optimize the trade-off under the context of client-level DP, which focuses on privacy during communications.
We propose an adaptive intermediary strategy to improve performance without harming privacy.
- Score: 33.494287036763716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent progress in enhancing the privacy of federated learning (FL)
via differential privacy (DP), the trade-off of DP between privacy protection
and performance is still underexplored for real-world medical scenarios. In this
paper, we propose to optimize the trade-off under the context of client-level
DP, which focuses on privacy during communications. However, FL for medical
imaging typically involves far fewer participants (hospitals) than other
domains (e.g., mobile devices), so ensuring that clients are differentially
private is much more challenging. To tackle this problem, we propose an adaptive
intermediary strategy to improve performance without harming privacy.
Specifically, we theoretically show that splitting clients into sub-clients,
which serve as intermediaries between hospitals and the server, can mitigate
the noise introduced by DP without harming privacy. Our proposed approach is
empirically evaluated on both classification and segmentation tasks using two
public datasets, and its effectiveness is demonstrated with significant
performance improvements and comprehensive analytical studies. Code is
available at: https://github.com/med-air/Client-DP-FL.
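As a rough illustration of the aggregation step this strategy modifies, the sketch below (plain Python; the function names and parameters are assumptions for illustration, not the authors' released implementation) clips each sub-client update, adds Gaussian noise calibrated to the clipping bound, and averages:

```python
import math
import random

def clip_to_norm(vec, clip_norm):
    """Scale vec so its L2 norm is at most clip_norm (standard DP clipping)."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm <= clip_norm or norm == 0.0:
        return list(vec)
    scale = clip_norm / norm
    return [x * scale for x in vec]

def dp_aggregate(sub_updates, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Client-level DP aggregation over sub-client (intermediary) updates.

    Every submitted update is clipped to clip_norm, summed, perturbed with
    Gaussian noise of std = noise_multiplier * clip_norm, then averaged.
    With m sub-clients, the per-coordinate noise in the average scales as
    sigma * C / m, which is the intuition for why splitting hospitals into
    more sub-clients mitigates the noise introduced by DP.
    """
    rng = random.Random(seed)
    m = len(sub_updates)
    dim = len(sub_updates[0])
    clipped = [clip_to_norm(u, clip_norm) for u in sub_updates]
    total = [sum(u[i] for u in clipped) for i in range(dim)]
    sigma = noise_multiplier * clip_norm
    return [(total[i] + rng.gauss(0.0, sigma)) / m for i in range(dim)]
```

Because the noise std depends only on the clipping bound, averaging over more intermediaries (a larger m) shrinks the noise in the aggregated update without changing the per-unit privacy guarantee.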
Related papers
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Prompt-based Personalized Federated Learning for Medical Visual Question Answering [56.002377299811656]
We present a novel prompt-based personalized federated learning (pFL) method to address data heterogeneity and privacy concerns.
We regard medical datasets from different organs as clients and use pFL to train personalized transformer-based VQA models for each client.
arXiv Detail & Related papers (2024-02-15T03:09:54Z)
- Differential Privacy for Adaptive Weight Aggregation in Federated Tumor Segmentation [0.16746114653388383]
Federated Learning (FL) is a distributed machine learning approach that safeguards privacy by creating an impartial global model while respecting the privacy of individual client data.
We present a differential privacy (DP) federated deep learning framework in medical image segmentation.
We extend our similarity weight aggregation (SimAgg) method to DP-SimAgg algorithm, a differentially private similarity-weighted aggregation algorithm for brain tumor segmentation.
arXiv Detail & Related papers (2023-08-01T21:59:22Z)
- A Client-server Deep Federated Learning for Cross-domain Surgical Image Segmentation [18.402074964118697]
This paper presents a solution to the cross-domain adaptation problem for 2D surgical image segmentation.
Deep learning architectures in medical image analysis necessitate extensive training data for better generalization.
We propose a Client-server deep federated architecture for cross-domain adaptation.
arXiv Detail & Related papers (2023-06-14T19:49:47Z)
- Balancing Privacy and Performance for Private Federated Learning Algorithms [4.681076651230371]
Federated learning (FL) is a distributed machine learning framework where multiple clients collaborate to train a model without exposing their private data.
FL algorithms frequently employ a differential privacy mechanism that introduces noise into each client's model updates before sharing.
We show that an optimal balance exists between the number of local steps and communication rounds, one that maximizes the convergence performance within a given privacy budget.
arXiv Detail & Related papers (2023-04-11T10:42:11Z)
- "Am I Private and If So, how Many?" -- Using Risk Communication Formats for Making Differential Privacy Understandable [0.0]
We adapt risk communication formats in conjunction with a model for the privacy risks of Differential Privacy.
We evaluate these novel privacy communication formats in a crowdsourced study.
arXiv Detail & Related papers (2022-04-08T13:30:07Z)
- Federated Learning with Adaptive Batchnorm for Personalized Healthcare [47.52430258876696]
We propose AdaFed to tackle domain shifts and obtain personalized models for local clients.
AdaFed learns the similarity between clients via the statistics of the batch normalization layers.
Experiments on five healthcare benchmarks demonstrate that AdaFed achieves better accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-01T11:36:56Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
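Several of the entries above reason about spending a fixed privacy budget across communication rounds. A minimal sketch of that bookkeeping, using the classic Gaussian-mechanism calibration and basic sequential composition (a deliberate simplification; these papers use tighter accountants such as moments/RDP accounting, which permit many more rounds for the same budget):

```python
import math

def gaussian_sigma(clip_norm, eps, delta):
    """Noise std for the Gaussian mechanism with L2 sensitivity clip_norm:
    sigma = C * sqrt(2 * ln(1.25 / delta)) / eps.
    This is the classic analytic bound, valid for eps <= 1."""
    return clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def rounds_within_budget(eps_total, eps_per_round, delta_total, delta_per_round):
    """Basic sequential composition: T rounds of an (eps, delta)-DP step
    yield (T * eps, T * delta)-DP overall, so the number of affordable
    rounds is bounded by both budget ratios."""
    return min(int(eps_total // eps_per_round),
               int(delta_total // delta_per_round))
```

Under this crude accounting, a per-round cost of eps = 3 against a total budget of eps = 9 permits only 3 rounds, which is why the trade-off between local steps per round and the number of communication rounds is worth optimizing.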
This list is automatically generated from the titles and abstracts of the papers on this site.