DPAUC: Differentially Private AUC Computation in Federated Learning
- URL: http://arxiv.org/abs/2208.12294v1
- Date: Thu, 25 Aug 2022 18:29:11 GMT
- Title: DPAUC: Differentially Private AUC Computation in Federated Learning
- Authors: Jiankai Sun and Xin Yang and Yuanshun Yao and Junyuan Xie and Di Wu and Chong Wang
- Abstract summary: Federated learning (FL) has gained significant attention recently as a privacy-enhancing tool that allows multiple participants to jointly train a machine learning model.
We propose an evaluation algorithm that can accurately compute the widely used AUC (area under the curve) metric when label differential privacy (DP) is used in FL.
- Score: 21.692648490368327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has gained significant attention recently as a
privacy-enhancing tool that allows multiple participants to jointly train a
machine learning model. Prior work on FL has mostly studied how to protect
label privacy during model training. However, model evaluation in FL might
also lead to potential leakage of private label information. In this work, we
propose an evaluation algorithm that can accurately compute the widely used
AUC (area under the curve) metric when label differential privacy (DP) is
used in FL. Through extensive experiments, we show that our algorithm
computes AUCs close to the ground truth.
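To make the setting concrete, here is a minimal sketch, assuming binary labels protected by randomized response (flip probability 1/(1 + e^eps)) and prediction scores in [0, 1]; the function names and the histogram-based debiasing below are an illustrative reconstruction, not the paper's exact estimator.

```python
import numpy as np

def randomized_response(labels, epsilon, rng):
    """Flip each binary label with probability p = 1 / (1 + e^eps),
    which satisfies eps-label-DP (standard randomized response)."""
    p = 1.0 / (1.0 + np.exp(epsilon))
    flips = rng.random(labels.shape) < p
    return np.where(flips, 1 - labels, labels)

def debiased_auc(scores, noisy_labels, epsilon, n_bins=1000):
    """Histogram-based AUC over noisy labels, with a first-moment
    debiasing of per-bin positive counts (illustrative; assumes
    scores lie in [0, 1])."""
    p = 1.0 / (1.0 + np.exp(epsilon))
    bins = np.clip((scores * n_bins).astype(int), 0, n_bins - 1)
    noisy_pos = np.bincount(bins, weights=noisy_labels, minlength=n_bins)
    total = np.bincount(bins, minlength=n_bins).astype(float)
    # E[noisy_pos] = (1 - p) * pos + p * (total - pos); invert per bin.
    pos = (noisy_pos - p * total) / (1.0 - 2.0 * p)
    neg = total - pos  # estimates may be slightly negative due to noise
    below = np.cumsum(neg) - neg  # negatives in strictly lower bins
    pairs = pos * (below + 0.5 * neg)  # ties within a bin count as 1/2
    return pairs.sum() / max(pos.sum() * neg.sum(), 1e-12)
```

The key point is that randomized response biases the per-threshold positive/negative counts in a known way, so the evaluator can invert the expectation and recover an AUC estimate that is accurate up to the added noise.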
Related papers
- Privacy-Preserving Federated Learning via Dataset Distillation [9.60829979241686]
Federated Learning (FL) allows users to share knowledge instead of raw data to train a model with high accuracy.
During training, users lose control over the shared knowledge, which raises serious data privacy issues.
This work proposes FLiP, which aims to bring the principle of least privilege (PoLP) to FL training.
arXiv Detail & Related papers (2024-10-25T13:20:40Z)
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
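As a rough illustration of distribution matching, the sketch below ranks public examples by their distance to privately released first and second moments of the private embeddings; the paper's actual sampler and its theoretical grounding are more involved, and all names here are hypothetical.

```python
import numpy as np

def match_public_to_private(pub_emb, priv_mean, priv_var, k):
    """Keep the k public examples whose embeddings are closest to the
    private distribution, using a diagonal Mahalanobis-style distance
    to (DP-released) mean and variance. Illustrative stand-in only."""
    d2 = ((pub_emb - priv_mean) ** 2 / (priv_var + 1e-8)).sum(axis=1)
    return np.argsort(d2)[:k]  # indices of the k best-matching examples
```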
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
- Differentially Private AUC Computation in Vertical Federated Learning [21.692648490368327]
We propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL.
Through extensive experiments, we show our algorithms can achieve more accurate AUCs compared to the baselines.
arXiv Detail & Related papers (2022-05-24T23:46:21Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
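For intuition on what such attacks attempt, below is a hedged sketch of a generic deep-leakage-style gradient inversion loop in PyTorch: optimize a dummy input until its gradient matches the observed update. This illustrates the general technique, not the specific baseline attack introduced in the paper.

```python
import torch

def invert_gradients(model, target_grads, x_shape, y, steps=200, lr=0.1):
    """Reconstruct an input whose gradient matches target_grads
    (generic gradient-inversion sketch; assumes the label y is known)."""
    x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(
            loss_fn(model(x), y), model.parameters(), create_graph=True)
        rec = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        rec.backward()  # gradient of the gradient-matching loss w.r.t. x
        opt.step()
    return x.detach()
```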
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- APPFL: Open-Source Software Framework for Privacy-Preserving Federated Learning [0.0]
Federated learning (FL) enables training models at different sites and aggregating the resulting weight updates, instead of transferring data to a central location for training as in classical machine learning.
We introduce APPFL, the Argonne Privacy-Preserving Federated Learning framework.
APPFL allows users to leverage implemented privacy-preserving algorithms, implement new algorithms, and simulate and deploy various FL algorithms with privacy-preserving techniques.
arXiv Detail & Related papers (2022-02-08T06:23:05Z)
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
arXiv Detail & Related papers (2021-07-06T12:57:32Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
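A minimal server-side sketch of the mechanism being analyzed, assuming client-level DP via per-update clipping plus Gaussian noise on the average; the calibration below is the standard DP-FedAvg recipe, not this paper's exact accounting.

```python
import numpy as np

def dp_fedavg_round(updates, clip_norm, noise_multiplier, rng):
    """Clip each client update to clip_norm, average, and add Gaussian
    noise with std noise_multiplier * clip_norm / n (client-level DP)."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

The clipping bias discussed in the paper comes from the min(1, C/||u||) factor: when client updates are heterogeneous, clipping distorts their average even before any noise is added.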
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
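A standard building block in this setting is K-ary randomized response over the label alphabet; a minimal sketch follows (the paper's multi-stage training procedure goes beyond this single step).

```python
import numpy as np

def k_ary_randomized_response(y, num_classes, epsilon, rng):
    """Keep the true class with probability e^eps / (e^eps + K - 1),
    otherwise draw one of the other K - 1 classes uniformly;
    this satisfies eps-label-DP."""
    keep_prob = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    keep = rng.random(y.shape) < keep_prob
    other = rng.integers(0, num_classes - 1, size=y.shape)
    other = other + (other >= y)  # skip over the true class
    return np.where(keep, y, other)
```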
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Hybrid Differentially Private Federated Learning on Vertically Partitioned Data [41.7896466307821]
We present HDP-VFL, the first hybrid differentially private (DP) framework for vertical federated learning (VFL).
We analyze how VFL's intermediate result (IR) can leak private information of the training data during communication.
We mathematically prove that our algorithm not only provides utility guarantees for VFL, but also offers multi-level privacy.
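To make the IR-leakage point concrete, here is a hedged sketch of releasing a clipped, noised intermediate representation via the Gaussian mechanism; HDP-VFL's actual mechanism and privacy accounting differ, and all names here are illustrative.

```python
import numpy as np

def release_intermediate(ir, clip_norm, epsilon, delta, rng):
    """Clip each example's intermediate representation to clip_norm and
    add Gaussian noise before sending it to the other party. The sigma
    below is the classic Gaussian-mechanism calibration (valid for
    epsilon <= 1); it is a stand-in, not HDP-VFL's mechanism."""
    norms = np.linalg.norm(ir, axis=1, keepdims=True)
    ir = ir * np.minimum(1.0, clip_norm / (norms + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return ir + rng.normal(0.0, sigma, size=ir.shape)
```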
arXiv Detail & Related papers (2020-09-06T16:06:04Z)