Characterizing the Accuracy-Communication-Privacy Trade-off in Distributed Stochastic Convex Optimization
- URL: http://arxiv.org/abs/2501.03222v1
- Date: Mon, 06 Jan 2025 18:57:05 GMT
- Title: Characterizing the Accuracy-Communication-Privacy Trade-off in Distributed Stochastic Convex Optimization
- Authors: Sudeep Salgia, Nikola Pavlovic, Yuejie Chi, Qing Zhao
- Abstract summary: We consider the problem of differentially private stochastic convex optimization (DP-SCO) in a distributed setting with $M$ clients.
The objective is to design an algorithm to minimize a convex population loss using a collaborative effort across $M$ clients.
We establish matching converse and achievability results using a novel lower bound and a new algorithm for distributed DP-SCO.
- Score: 30.45012237666837
- License:
- Abstract: We consider the problem of differentially private stochastic convex optimization (DP-SCO) in a distributed setting with $M$ clients, where each of them has a local dataset of $N$ i.i.d. data samples from an underlying data distribution. The objective is to design an algorithm to minimize a convex population loss using a collaborative effort across $M$ clients, while ensuring the privacy of the local datasets. In this work, we investigate the accuracy-communication-privacy trade-off for this problem. We establish matching converse and achievability results using a novel lower bound and a new algorithm for distributed DP-SCO based on Vaidya's cutting-plane method. Thus, our results provide a complete characterization of the accuracy-communication-privacy trade-off for DP-SCO in the distributed setting.
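For context, a minimal formalization of the distributed DP-SCO setting described above (the notation here is illustrative; the paper's precise privacy model may differ): each client $m \in \{1, \dots, M\}$ holds $N$ i.i.d. samples drawn from a distribution $P$, and the goal is to output a point $\hat{x}$ with small excess population risk
$$
F(\hat{x}) - \min_{x \in \mathcal{X}} F(x), \qquad F(x) = \mathbb{E}_{\xi \sim P}\big[ f(x; \xi) \big],
$$
over a convex domain $\mathcal{X} \subset \mathbb{R}^d$, while the transcript of messages sent by each client is $(\varepsilon, \delta)$-differentially private with respect to that client's local dataset. The accuracy-communication-privacy trade-off then asks how small the excess risk can be as a function of $M$, $N$, the privacy budget $(\varepsilon, \delta)$, and the total number of bits (or rounds) communicated.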
Related papers
- Smoothed Normalization for Efficient Distributed Private Optimization [54.197255548244705]
Federated learning enables training machine learning models while preserving the privacy of participants.
However, there is no differentially private distributed method for such training problems.
We introduce a new distributed algorithm, $\alpha$-NormEC, with provable convergence guarantees; a sketch of a smoothed-normalization operator is given below.
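As a rough illustration of what a "smoothed" normalization operator can look like (this specific form and its parameters are assumptions for the sketch, not taken from the paper), one common choice replaces hard clipping $g \mapsto g \cdot \min(1, c/\|g\|)$ with the everywhere-smooth map $g \mapsto g / (\alpha + \|g\|)$, whose output norm stays below 1 for any input:

```python
import numpy as np

def smoothed_normalize(g: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Smoothed normalization: scale g by 1/(alpha + ||g||).

    Hypothetical form used only for illustration; it keeps the output norm
    strictly below 1 for every input, similar in effect to gradient
    clipping but without the non-smooth kink at the clipping threshold.
    """
    return g / (alpha + np.linalg.norm(g))

def clip(g: np.ndarray, c: float = 1.0) -> np.ndarray:
    """Standard norm clipping, shown for comparison."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

# Toy comparison on one random "gradient" vector.
rng = np.random.default_rng(0)
g = rng.normal(size=5) * 10.0
print(np.linalg.norm(smoothed_normalize(g)))  # strictly less than 1
print(np.linalg.norm(clip(g)))                # exactly 1, since ||g|| > 1
```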
arXiv Detail & Related papers (2025-02-19T07:10:32Z) - A Federated Distributionally Robust Support Vector Machine with Mixture of Wasserstein Balls Ambiguity Set for Distributed Fault Diagnosis [3.662364375995991]
We study the problem of training a distributionally robust (DR) support vector machine (SVM) in a federated fashion over a network comprised of a central server and $G$ clients without sharing data.
We propose two distributed optimization algorithms for training the global FDR-SVM.
arXiv Detail & Related papers (2024-10-04T19:21:45Z) - Differentially Private Clustered Federated Learning [4.768272342753616]
Federated learning (FL) often incorporates differential privacy (DP) to provide rigorous data privacy guarantees.
Previous works attempted to address high levels of structured data heterogeneity in vanilla FL settings by clustering clients (a.k.a. clustered FL).
We propose an algorithm for differentially private clustered FL, which is robust to the DP noise in the system and identifies the underlying clients' clusters correctly.
arXiv Detail & Related papers (2024-05-29T17:03:31Z) - Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy (zCDP) notion to guarantee the confidentiality of the participants; the standard definition is recalled below.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
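Not part of the summary above, but for reference, the standard definition of zCDP (Bun and Steinke, 2016): a randomized mechanism $\mathcal{M}$ satisfies $\rho$-zCDP if, for every pair of neighboring datasets $D, D'$ and every $\alpha \in (1, \infty)$,
$$
D_\alpha\big( \mathcal{M}(D) \,\|\, \mathcal{M}(D') \big) \le \rho \alpha,
$$
where $D_\alpha$ denotes the $\alpha$-Rényi divergence. For example, the Gaussian mechanism with noise standard deviation $\sigma$ applied to a query of $\ell_2$-sensitivity $\Delta$ satisfies $\rho$-zCDP with $\rho = \Delta^2 / (2\sigma^2)$.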
arXiv Detail & Related papers (2023-06-24T16:13:28Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Differentially Private Distributed Convex Optimization [0.0]
In distributed optimization (DO), multiple agents cooperate to minimize a global objective function, expressed as a sum of local objectives.
Although locally stored data are not shared with other agents, the information exchanged during the optimization could still leak sensitive information, which could limit the practical usage of DO in applications with sensitive data.
We propose a privacy-preserving DO algorithm for constrained convex optimization models.
arXiv Detail & Related papers (2023-02-28T12:07:27Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into a finite number of clusters under the orchestration of a server.
We propose a novel FedC algorithm with differential privacy, referred to as DP-Fed, in which partial client participation and multiple local updates are also considered.
Various properties of the proposed DP-Fed are obtained through theoretical analyses of privacy protection and convergence, especially for the case of non-independently and identically distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - Federated Stochastic Primal-dual Learning with Differential Privacy [15.310299472656533]
We propose a new federated primal-dual algorithm with differential privacy (FedSPDDP).
Our analysis shows that the data sampling strategy and partial client participation (PCP) can enhance data privacy, whereas a larger number of local SGD steps could increase privacy leakage.
Experiment results are presented to evaluate the practical performance of the proposed algorithm.
arXiv Detail & Related papers (2022-04-26T13:10:37Z) - Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning [58.79085525115987]
Local methods are one of the promising approaches to reduce communication time.
We show that the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss.
arXiv Detail & Related papers (2022-02-12T15:12:17Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate, on two popular deep learning tasks, the empirical advantages of the proposed algorithms over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)