CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness
- URL: http://arxiv.org/abs/2409.13133v1
- Date: Fri, 20 Sep 2024 00:23:44 GMT
- Title: CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness
- Authors: Hojat Allah Salehi, Md Jueal Mia, S. Sandeep Pradhan, M. Hadi Amini, Farhad Shirani
- Abstract summary: Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, achieves user-level and sample-level central differential privacy guarantees.
- Score: 6.881974834597426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has emerged as a promising framework for distributed machine learning. It enables collaborative learning among multiple clients, utilizing distributed data and computing resources. However, FL faces challenges in balancing privacy guarantees, communication efficiency, and overall model accuracy. In this work, we introduce CorBin-FL, a privacy mechanism that uses correlated binary stochastic quantization to achieve differential privacy while maintaining overall model accuracy. The approach uses secure multi-party computation techniques to enable clients to perform correlated quantization of their local model updates without compromising individual privacy. We provide theoretical analysis showing that CorBin-FL achieves parameter-level local differential privacy (PLDP), and that it asymptotically optimizes the privacy-utility trade-off between the mean square error utility measure and the PLDP privacy measure. We further propose AugCorBin-FL, an extension that, in addition to PLDP, achieves user-level and sample-level central differential privacy guarantees. For both mechanisms, we derive bounds on privacy parameters and mean squared error performance measures. Extensive experiments on MNIST and CIFAR10 datasets demonstrate that our mechanisms outperform existing differentially private FL mechanisms, including Gaussian and Laplacian mechanisms, in terms of model accuracy under equal PLDP privacy budgets.
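As a rough illustration of the correlated binary stochastic quantization described in the abstract, the following Python sketch implements a randomized-response-style one-bit quantizer that is unbiased and satisfies an eps parameter-level LDP guarantee, with a paired client reusing shared antithetic randomness so that quantization errors partially cancel in the average. The function name, the antithetic pairing, and all parameter values are illustrative assumptions; the paper's actual SMPC-based protocol is more involved.

```python
import numpy as np

def corbin_style_quantize(x, u, eps, c=1.0):
    """One-bit quantization of parameters x in [-c, c].

    P(bit = +1 | x) is an affine function of x, flattened so the
    likelihood ratio between any two inputs is at most exp(eps),
    i.e. each released bit is eps-LDP at the parameter level.
    The scaling c / a makes the estimator unbiased: E[output] = x.
    """
    x = np.clip(x, -c, c)
    a = (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)
    p_plus = 1.0 / (np.exp(eps) + 1.0) + a * (1.0 + x / c) / 2.0
    bits = np.where(u < p_plus, 1.0, -1.0)
    return (c / a) * bits

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
u = rng.uniform(size=5)                            # common randomness shared by a client pair
q1 = corbin_style_quantize(x1, u, eps=2.0)
q2 = corbin_style_quantize(x2, 1.0 - u, eps=2.0)   # antithetic partner
avg = (q1 + q2) / 2                                # correlated errors partially cancel
```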
Related papers
- Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning [1.4226399196408985]
Federated learning (FL) has emerged as a method to preserve privacy in collaborative distributed learning.
We introduce a privacy-preserving FL framework that combines differential privacy and system immersion tools from control theory.
We demonstrate that the proposed privacy-preserving scheme can be tailored to offer any desired level of differential privacy for both local and global model parameters.
arXiv Detail & Related papers (2024-09-25T15:04:42Z)
- Mitigating Disparate Impact of Differential Privacy in Federated Learning through Robust Clustering [4.768272342753616]
Federated Learning (FL) is a decentralized machine learning (ML) approach that keeps data localized and often incorporates Differential Privacy (DP) to enhance privacy guarantees.
Recent work has attempted to address performance fairness in vanilla FL through clustering, but such clustering-based methods remain sensitive and error-prone.
We propose a novel clustered DPFL algorithm designed to effectively identify clients' clusters in highly heterogeneous settings.
arXiv Detail & Related papers (2024-05-29T17:03:31Z)
- QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution [1.565361244756411]
Federated learning (FL) is a framework which allows multiple users to jointly train a global machine learning (ML) model.
One key motivation of such distributed frameworks is to provide privacy guarantees to the users.
We present a novel quantization method, utilizing a mixed geometric distribution to introduce the randomness needed to provide DP.
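As a hedged sketch of the kind of randomness this summary describes, the sampler below draws noise from a two-sided truncated geometric distribution; the paper's exact mixed truncated construction may differ, and all names and defaults here are assumptions.

```python
import numpy as np

def truncated_geometric_noise(p=0.7, trunc=8, size=None, rng=None):
    """Sample integer noise k in [-trunc, trunc] with P(k) proportional to
    p**abs(k), a symmetric two-sided truncated geometric distribution of
    the kind used by geometric-mechanism-style DP quantizers."""
    rng = rng or np.random.default_rng()
    support = np.arange(-trunc, trunc + 1)
    pmf = p ** np.abs(support)
    pmf /= pmf.sum()
    return rng.choice(support, p=pmf, size=size)

noisy_level = 5 + truncated_geometric_noise()  # perturb a quantization level
```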
arXiv Detail & Related papers (2023-12-10T04:44:53Z)
- Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342]
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with differential privacy guarantee.
arXiv Detail & Related papers (2023-08-07T06:07:04Z)
- Differentially Private Wireless Federated Learning Using Orthogonal Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
arXiv Detail & Related papers (2023-06-14T06:35:10Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
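A minimal scikit-learn sketch of the GMM-based view this summary describes, on stand-in client data; FedGMM itself fits the mixture federatedly, so this only illustrates the density-modeling and novel-sample-detection idea, with all names and values assumed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-client features; FedGMM fits the mixture across clients,
# here we just fit locally to illustrate the density view of client data.
rng = np.random.default_rng(0)
client_data = rng.normal(loc=[0, 3], scale=1.0, size=(200, 2))

gmm = GaussianMixture(n_components=3, random_state=0).fit(client_data)

# Low log-likelihood under the fitted mixture flags out-of-distribution
# inputs, the "novel sample detection" use mentioned in the summary.
scores = gmm.score_samples(rng.normal(size=(5, 2)))
is_novel = scores < np.quantile(gmm.score_samples(client_data), 0.01)
```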
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Balancing Privacy and Performance for Private Federated Learning Algorithms [4.681076651230371]
Federated learning (FL) is a distributed machine learning framework where multiple clients collaborate to train a model without exposing their private data.
FL algorithms frequently employ a differential privacy mechanism that introduces noise into each client's model updates before sharing.
We show that an optimal balance exists between the number of local steps and communication rounds, one that maximizes the convergence performance within a given privacy budget.
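The noise-injection step this summary refers to is commonly the Gaussian mechanism applied to clipped client updates; a generic sketch of that step follows (not the paper's specific algorithm or analysis; names and defaults are assumptions).

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update to a bounded L2 norm, then add Gaussian
    noise scaled to the clipping bound -- the standard DP-FedAvg-style step
    performed before an update is shared with the server."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```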
arXiv Detail & Related papers (2023-04-11T10:42:11Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
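The smoothing named in the title denoises a perturbed update with the operator (I - sigma*Delta)^{-1}, applied cheaply via FFT; the sketch below shows only that operator under assumed names, not the paper's full DP training loop.

```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    """Apply 1-D circular Laplacian smoothing (I - sigma*Delta)^{-1} to a
    noisy gradient via FFT: the operator is diagonalized by the DFT with
    eigenvalues 1 + 2*sigma*(1 - cos(2*pi*k/d))."""
    d = grad.shape[0]
    k = np.arange(d)
    denom = 1.0 + 2.0 * sigma * (1.0 - np.cos(2.0 * np.pi * k / d))
    return np.real(np.fft.ifft(np.fft.fft(grad) / denom))
```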
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.