Federated Stochastic Primal-dual Learning with Differential Privacy
- URL: http://arxiv.org/abs/2204.12284v1
- Date: Tue, 26 Apr 2022 13:10:37 GMT
- Title: Federated Stochastic Primal-dual Learning with Differential Privacy
- Authors: Yiwei Li, Shuai Wang, Tsung-Hui Chang, and Chong-Yung Chi
- Abstract summary: We propose a new federated stochastic primal-dual algorithm with differential privacy (FedSPD-DP).
Our analysis shows that the data sampling strategy and PCP can enhance data privacy, whereas a larger number of local SGD steps could increase privacy leakage.
Experiment results are presented to evaluate the practical performance of the proposed algorithm.
- Score: 15.310299472656533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a new paradigm that enables many clients to
jointly train a machine learning (ML) model under the orchestration of a
parameter server, without exposing local data to any third party. However,
FL training is an interactive process between local clients and the
parameter server. Such a process can cause privacy leakage
since adversaries may retrieve sensitive information by analyzing the overheard
messages. In this paper, we propose a new federated stochastic primal-dual
algorithm with differential privacy (FedSPD-DP). Compared to the existing
methods, the proposed FedSPD-DP incorporates local stochastic gradient descent
(local SGD) and partial client participation (PCP) for addressing the issues of
communication efficiency and straggler effects due to randomly accessed
clients. Our analysis shows that the data sampling strategy and PCP can
enhance data privacy, whereas a larger number of local SGD steps could
increase privacy leakage, revealing a non-trivial tradeoff between
communication efficiency and privacy protection. Specifically, we show that, by
guaranteeing $(\epsilon, \delta)$-DP for each client per communication round,
the proposed algorithm guarantees $(\mathcal{O}(q\epsilon \sqrt{p T}),
\delta)$-DP after $T$ communication rounds while maintaining an
$\mathcal{O}(1/\sqrt{pTQ})$ convergence rate for a convex and non-smooth
learning problem, where $Q$ is the number of local SGD steps, $p$ is the client
sampling probability, $q=\max_{i} q_i/\sqrt{1-q_i}$ and $q_i$ is the data
sampling probability of each client under PCP. Experimental results are
presented to evaluate the practical performance of the proposed algorithm
and to compare it with state-of-the-art methods.
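To make the per-round mechanism concrete, the following is a minimal, hypothetical sketch of one communication round combining partial client participation (PCP), data subsampling with rate $q_i$, per-sample clipping, and Gaussian noise. It is not the authors' exact primal-dual update; a toy least-squares gradient stands in for the real loss, and all hyperparameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and hyperparameters (not taken from the paper).
d, n_clients = 10, 5           # model dimension, number of clients
p, Q = 0.5, 4                  # client sampling probability, local SGD steps
C, sigma, eta = 1.0, 1.2, 0.1  # clipping norm, noise multiplier, step size

def local_dp_sgd(x, data, q_i):
    """Run Q clipped, Gaussian-perturbed local SGD steps on one client.

    Each step uses a random subsample of the client's data drawn with
    probability q_i (the data sampling step that amplifies privacy in
    the analysis).
    """
    x = x.copy()
    for _ in range(Q):
        batch = data[rng.random(len(data)) < q_i]
        if len(batch) == 0:
            continue
        # Toy least-squares gradient as a stand-in for the real loss.
        grads = np.stack([2 * (x @ z) * z for z in batch])
        # Per-sample clipping to norm C, then Gaussian noise: the
        # Gaussian mechanism that yields per-round (epsilon, delta)-DP.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / C)
        g = grads.mean(axis=0) + rng.normal(0.0, sigma * C / len(batch), d)
        x -= eta * g
    return x

# One communication round with partial client participation (PCP):
# each client joins the round independently with probability p.
x_global = np.zeros(d)
datasets = [rng.normal(size=(20, d)) for _ in range(n_clients)]
active = [i for i in range(n_clients) if rng.random() < p]
updates = [local_dp_sgd(x_global, datasets[i], q_i=0.3) for i in active]
if updates:
    x_global = np.mean(updates, axis=0)  # simple averaging stand-in
```

Under this kind of mechanism, more local steps Q means more noisy releases per round, which is the intuition behind the communication-privacy tradeoff stated above.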
Related papers
- Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis [14.98493572536424]
In federated learning, it is common to assume that clients are always available to participate in training, which may not be feasible with user devices in practice.
Recent works analyze federated learning under more realistic participation patterns, such as cyclic client availability or arbitrary participation.
arXiv Detail & Related papers (2024-10-30T15:41:35Z) - Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning [21.27813247914949]
We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates.
It improves utility and convergence speed while remaining robust to clients that may maliciously send falsified privacy parameters to the server.
arXiv Detail & Related papers (2024-06-05T17:41:42Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
$\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel uniform data sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into a finite number of clusters under the orchestration of a server.
We propose a novel FedC algorithm with differential privacy, referred to as DP-Fed, in which partial client participation and multiple local updates are also considered.
Various attributes of the proposed DP-Fed are obtained through theoretical analyses of privacy protection, especially for the case of non-independent and identically distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (\emph{i.e.}, SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - Differentially Private Federated Learning via Inexact ADMM with Multiple
Local Updates [0.0]
We develop a DP inexact alternating direction method of multipliers algorithm with multiple local updates for federated learning.
We show that our algorithm provides $\bar{\epsilon}$-DP for every iteration, where $\bar{\epsilon}$ is a privacy budget controlled by the user.
We demonstrate that our algorithm reduces the testing error by up to $31\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2022-02-18T19:58:47Z) - Differentially Private Federated Learning via Inexact ADMM [0.0]
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks.
We develop a DP inexact alternating direction method of multipliers algorithm that solves a sequence of trust-region subproblems.
Our algorithm reduces the testing error by up to $22\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2021-06-11T02:28:07Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From an information-theoretic viewpoint, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
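The noise-injection step described in the UDP entry above can be sketched as follows. The clipping norm and noise scale are illustrative placeholders, not values from the paper; in practice the noise scale would be calibrated to a target $(\epsilon, \delta)$ budget.

```python
import numpy as np

rng = np.random.default_rng(1)

def udp_upload(model_update, clip_norm=1.0, sigma=0.8):
    """Clip a client's whole model update and add Gaussian noise before
    upload, so the released message protects the user's entire
    contribution (user-level DP) rather than individual samples.
    """
    norm = np.linalg.norm(model_update)
    clipped = model_update / max(1.0, norm / clip_norm)
    # Noise scaled to the clipping bound (the update's L2 sensitivity).
    return clipped + rng.normal(0.0, sigma * clip_norm, model_update.shape)

noisy = udp_upload(rng.normal(size=8))
```

Clipping the whole update (rather than per-sample gradients) is what shifts the guarantee from sample-level to user-level privacy.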
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.