User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization
- URL: http://arxiv.org/abs/2003.00229v2
- Date: Fri, 29 Jan 2021 01:39:03 GMT
- Title: User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization
- Authors: Kang Wei, Jun Li, Ming Ding, Chuan Ma, Hang Su, Bo Zhang and H.
Vincent Poor
- Abstract summary: Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
- Score: 77.43075255745389
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL), as a type of collaborative machine learning
framework, is capable of preserving private data from mobile terminals (MTs)
while training the data into useful models. Nevertheless, from a viewpoint of
information theory, it is still possible for a curious server to infer private
information from the shared models uploaded by MTs. To address this problem, we
first make use of the concept of local differential privacy (LDP), and propose
a user-level differential privacy (UDP) algorithm by adding artificial noise to
the shared models before uploading them to servers. According to our analysis,
the UDP framework can realize $(\epsilon_{i}, \delta_{i})$-LDP for the $i$-th
MT with adjustable privacy protection levels by varying the variances of the
artificial noise processes. We then derive a theoretical convergence
upper-bound for the UDP algorithm. It reveals that there exists an optimal
number of communication rounds to achieve the best learning performance. More
importantly, we propose a communication rounds discounting (CRD) method.
Compared with the heuristic search method, the proposed CRD method can achieve
a much better trade-off between the computational complexity of searching and
the convergence performance. Extensive experiments indicate that our UDP
algorithm using the proposed CRD method can effectively improve both the
training efficiency and model quality for the given privacy protection levels.
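The core UDP step is clipping each MT's model update and perturbing it with Gaussian noise before upload. The sketch below is a minimal illustration using the standard Gaussian-mechanism calibration, not the paper's exact UDP noise-variance derivation; the function name and parameters are hypothetical.

```python
import numpy as np

def clip_and_perturb(update, clip_norm, epsilon, delta):
    # Bound the user's contribution so the L2 sensitivity is clip_norm.
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)
    # Standard Gaussian-mechanism scale for (epsilon, delta)-DP;
    # the paper derives its own per-MT variance for (eps_i, delta_i)-LDP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

# Each MT noises its shared model locally before the server sees it.
noisy = clip_and_perturb(np.ones(4), clip_norm=1.0, epsilon=1.0, delta=1e-5)
```

Varying `epsilon` and `delta` per client adjusts the noise variance and hence the protection level, as the abstract describes.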
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.

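To make "each client adjusts its learning rate" concrete, here is a single AMSGrad step where the client supplies its own `lr` and keeps its own optimizer state; this is a hypothetical illustration in the spirit of FedLALR, not the paper's exact update rule.

```python
import numpy as np

def amsgrad_step(w, grad, state, lr, beta1=0.9, beta2=0.99, eps=1e-8):
    # Each client carries its own (m, v, v_hat) state and its own lr.
    m, v, v_hat = state
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    v_hat = np.maximum(v_hat, v)  # AMSGrad: non-decreasing second moment
    w = w - lr * m / (np.sqrt(v_hat) + eps)
    return w, (m, v, v_hat)

w = np.zeros(3)
state = (np.zeros(3), np.zeros(3), np.zeros(3))
w, state = amsgrad_step(w, np.array([0.1, -0.2, 0.3]), state, lr=0.01)
```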
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Balancing Privacy and Performance for Private Federated Learning
Algorithms [4.681076651230371]
Federated learning (FL) is a distributed machine learning framework where multiple clients collaborate to train a model without exposing their private data.
FL algorithms frequently employ a differential privacy mechanism that introduces noise into each client's model updates before sharing.
We show that an optimal balance exists between the number of local steps and communication rounds, one that maximizes the convergence performance within a given privacy budget.
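The trade-off between rounds and noise can be seen from a budget-splitting argument: with a fixed total budget, more communication rounds leave a smaller per-round budget and thus noisier updates. The sketch below splits the budget by basic composition, which is deliberately looser than the accounting used in the paper; the function name is hypothetical.

```python
import numpy as np

def per_round_sigma(eps_total, delta_total, rounds, clip_norm=1.0):
    # Split the (eps, delta) budget evenly over rounds via basic
    # composition (illustration only; tighter accountants exist),
    # then calibrate Gaussian noise for the per-round budget.
    eps_r = eps_total / rounds
    delta_r = delta_total / rounds
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta_r)) / eps_r

# More rounds => smaller per-round budget => larger noise per update.
print(per_round_sigma(1.0, 1e-5, 10), per_round_sigma(1.0, 1e-5, 100))
```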
arXiv Detail & Related papers (2023-04-11T10:42:11Z)
- DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$^3$S).
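DPP-style selection favors sets of clients whose data profiles are mutually dissimilar. A common approximation is greedy MAP inference, picking clients that maximize the log-determinant of the selected kernel submatrix. The sketch below is a hypothetical stand-in for the paper's selector, assuming each client is summarized by a feature vector.

```python
import numpy as np

def greedy_diverse_select(features, k):
    # PSD similarity kernel over client data profiles; the small
    # diagonal term keeps the log-determinant well defined.
    L = features @ features.T + 1e-6 * np.eye(len(features))
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(features)):
            if i in chosen:
                continue
            idx = chosen + [i]
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if logdet > best_gain:  # larger log-det => more diverse set
                best, best_gain = i, logdet
        chosen.append(best)
    return chosen

clients = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
chosen = greedy_diverse_select(clients, 2)  # picks the two dissimilar clients
```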
arXiv Detail & Related papers (2023-03-30T13:14:54Z)
- Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k\log(d))$ at each iteration.
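The $O(k\log(d))$ communication cost comes from sending a sketch of size $k$ instead of the full $d$-dimensional gradient. Below is a minimal count-sketch compressor in this spirit; it is an assumed illustration of sketching, not SketchedAMSGrad's exact construction.

```python
import numpy as np

def count_sketch(grad, k, seed=0):
    # Hash each coordinate to one of k buckets with a random sign;
    # a shared seed lets sender and receiver agree on the hashes.
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, k, size=grad.size)
    signs = rng.choice([-1.0, 1.0], size=grad.size)
    sketch = np.zeros(k)
    np.add.at(sketch, buckets, signs * grad)  # unbuffered accumulation
    return sketch, buckets, signs

def unsketch(sketch, buckets, signs):
    # Unbiased (over the random signs) estimate of each coordinate.
    return signs * sketch[buckets]

g = np.array([1.0, -2.0, 0.5, 0.0])
s, b, sg = count_sketch(g, k=8)  # transmit only the k sketch values
est = unsketch(s, b, sg)
```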
arXiv Detail & Related papers (2022-10-14T01:42:05Z)
- OpBoost: A Vertical Federated Tree Boosting Framework Based on
Order-Preserving Desensitization [26.386265547513887]
Vertical Federated Learning (FL) is a new paradigm that enables users with non-overlapping attributes of the same data samples to jointly train a model without sharing the raw data.
Recent works show that this is still not sufficient to prevent privacy leakage from the training process or the trained model.
This paper focuses on studying the privacy-preserving tree boosting algorithms under the vertical FL.
arXiv Detail & Related papers (2022-10-04T02:21:18Z)
- Federated Stochastic Primal-dual Learning with Differential Privacy [15.310299472656533]
We propose a new federated primal-dual algorithm with differential privacy (FedSPDDP).
Our analysis shows that the data sampling strategy and PCP can enhance the data privacy whereas the larger number of local SGD steps could increase privacy leakage.
Experiment results are presented to evaluate the practical performance of the proposed algorithm.
arXiv Detail & Related papers (2022-04-26T13:10:37Z)
- Differentially Private Federated Learning via Inexact ADMM with Multiple
Local Updates [0.0]
We develop a DP inexact alternating direction method of multipliers algorithm with multiple local updates for federated learning.
We show that our algorithm provides $\bar{\epsilon}$-DP for every iteration, where $\bar{\epsilon}$ is a privacy budget controlled by the user.
We demonstrate that our algorithm reduces the testing error by at most $31\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2022-02-18T19:58:47Z)
- Differentially Private Federated Learning via Inexact ADMM [0.0]
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks.
We develop a DP inexact alternating direction method of multipliers algorithm that solves a sequence of trust-region subproblems.
Our algorithm reduces the testing error by at most $22\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2021-06-11T02:28:07Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.