Clients Collaborate: Flexible Differentially Private Federated Learning
with Guaranteed Improvement of Utility-Privacy Trade-off
- URL: http://arxiv.org/abs/2402.07002v1
- Date: Sat, 10 Feb 2024 17:39:34 GMT
- Title: Clients Collaborate: Flexible Differentially Private Federated Learning
with Guaranteed Improvement of Utility-Privacy Trade-off
- Authors: Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang,
Zibin Zheng
- Abstract summary: We introduce a novel federated learning framework with rigorous privacy guarantees, named FedCEO, to strike a trade-off between model utility and user privacy.
We show that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space.
Experiments show significant performance improvements and strict privacy guarantees under different privacy settings.
- Score: 34.2117116062642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To defend against privacy leakage of user data, differential privacy is
widely used in federated learning, but it is not free. The addition of noise
randomly disrupts the semantic integrity of the model, and this disturbance
accumulates as communication rounds increase. In this paper, we introduce a
novel federated learning framework with rigorous privacy guarantees, named
FedCEO, designed to strike a trade-off between model utility and user privacy
by letting clients "Collaborate with Each Other". Specifically, we perform
efficient tensor low-rank proximal optimization on stacked local model
parameters at the server, demonstrating its capability to flexibly truncate
high-frequency components in spectral space. This implies that our FedCEO can
effectively recover the disrupted semantic information by smoothing the global
semantic space for different privacy settings and continuous training
processes. Moreover, we improve the SOTA utility-privacy trade-off bound by an
order of $\sqrt{d}$, where $d$ is the input dimension. We illustrate our
theoretical results with experiments on representative image datasets, which
show significant performance improvements and strict privacy guarantees
under different privacy settings.
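The server-side operation the abstract describes lends itself to a compact illustration. Below is a minimal, hypothetical sketch of the general idea, stacking client parameters and applying a low-rank proximal step, here realized as matrix singular-value soft-thresholding. The function name `lowrank_proximal_smoothing` and the threshold `tau` are illustrative assumptions; the paper's actual operator works on tensors with its own schedule for the truncation strength.

```python
import numpy as np

def lowrank_proximal_smoothing(client_params, tau):
    """Apply singular-value soft-thresholding to stacked client parameters.

    client_params: list of flattened local models (1-D arrays of length d).
    tau: threshold subtracted from every singular value; larger tau
         truncates more of the small (high-frequency) spectral components.
    """
    X = np.stack(client_params, axis=0)             # shape (n_clients, d)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)             # soft-threshold the spectrum
    X_smooth = (U * s_shrunk) @ Vt                  # low-rank reconstruction
    return [X_smooth[i] for i in range(len(client_params))]

# Toy usage: three noisy copies of a shared signal; the shared low-rank
# component survives thresholding while independent noise is damped.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
noisy = [base + 0.5 * rng.normal(size=1000) for _ in range(3)]
smoothed = lowrank_proximal_smoothing(noisy, tau=12.0)
print(np.linalg.norm(noisy[0] - base), np.linalg.norm(smoothed[0] - base))
```

Soft-thresholding shrinks small singular values to zero, removing the high-frequency directions that independent DP noise tends to occupy while preserving the low-rank structure shared across clients, which is the "smoothing the global semantic space" intuition above.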
Related papers
- Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning [47.042811490685324]
Mitigating the risk of information leakage using state-of-the-art differentially private algorithms also does not come for free.
In this paper, we consider a representation learning objective that various parties collaboratively refine on a federated model, with differential privacy guarantees.
We observe a significant performance improvement over the prior work under the same small privacy budget.
arXiv Detail & Related papers (2023-09-11T14:46:55Z)
- Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342]
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with differential privacy guarantee.
arXiv Detail & Related papers (2023-08-07T06:07:04Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
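To make the cancellation idea behind correlated perturbations concrete, here is a toy sketch, not the paper's scheme (which designs the correlation structure under channel and privacy constraints): each user adds a perturbation drawn so the perturbations sum to zero across users, hiding individual transmissions while leaving the over-the-air aggregate untouched. `correlated_zero_sum_noise` and `sigma` are hypothetical names.

```python
import numpy as np

def correlated_zero_sum_noise(n_users, d, sigma, rng):
    """Per-user perturbations that sum to zero across users, so they
    cancel when the server aggregates the superimposed signals."""
    z = rng.normal(scale=sigma, size=(n_users, d))
    return z - z.mean(axis=0, keepdims=True)        # project onto zero-sum subspace

rng = np.random.default_rng(1)
n_users, d = 5, 8
updates = rng.normal(size=(n_users, d))             # true local gradient updates
noise = correlated_zero_sum_noise(n_users, d, sigma=1.0, rng=rng)
sent = updates + noise                              # what each user transmits
aggregate = sent.sum(axis=0)                        # over-the-air superposition
print(np.allclose(aggregate, updates.sum(axis=0)))  # True: the noise cancels
```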
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
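For reference, the client-level DP-FedAvg pattern this entry analyzes is commonly sketched as follows: clip each client's update, average, and add Gaussian noise calibrated to the clipping bound. This is a generic illustration (function and parameter names such as `clip_norm` and `noise_mult` are my own), not the paper's exact algorithm or privacy accounting.

```python
import numpy as np

def dp_fedavg_round(global_model, client_deltas, clip_norm, noise_mult, rng):
    """One round of clipped FedAvg with client-level Gaussian noise."""
    # Scale each client's delta so its L2 norm is at most clip_norm.
    clipped = [d * min(1.0, clip_norm / (np.linalg.norm(d) + 1e-12))
               for d in client_deltas]
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound and cohort size.
    noise = rng.normal(scale=noise_mult * clip_norm / len(client_deltas),
                       size=avg.shape)
    return global_model + avg + noise

rng = np.random.default_rng(2)
w = np.zeros(10)
deltas = [rng.normal(size=10) for _ in range(4)]
w_next = dp_fedavg_round(w, deltas, clip_norm=1.0, noise_mult=1.1, rng=rng)
```

The clipping bias the paper studies arises from the `min(1.0, ...)` rescaling: when client updates are heterogeneous, clipping distorts their relative magnitudes before averaging.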
- Federated $f$-Differential Privacy [19.499120576896228]
Federated learning (FL) is a training paradigm where the clients collaboratively learn models by repeatedly sharing information.
We introduce federated $f$-differential privacy, a new notion specifically tailored to the federated setting.
We then propose a generic private federated learning framework PriFedSync that accommodates a large family of state-of-the-art FL algorithms.
arXiv Detail & Related papers (2021-02-22T16:28:21Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee for our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)