Differentially Private Decentralized Optimization with Relay Communication
- URL: http://arxiv.org/abs/2212.10859v2
- Date: Fri, 13 Dec 2024 07:13:07 GMT
- Title: Differentially Private Decentralized Optimization with Relay Communication
- Authors: Luqing Wang, Luyao Guo, Shaofu Yang, Xinli Shi,
- Abstract summary: We introduce a new measure: Privacy Leakage Frequency (PLF), which reveals the relationship between communication and privacy leakage of algorithms.
A novel differentially private decentralized primal-dual algorithm named DP-RECAL is proposed, which takes advantage of the operator splitting method and a relay communication mechanism to achieve a lower PLF.
- Score: 1.2695958417031445
- License:
- Abstract: Security concerns in large-scale networked environments are becoming increasingly critical. To further improve algorithm security from the design perspective of decentralized optimization, we introduce a new measure, Privacy Leakage Frequency (PLF), which reveals the relationship between communication and the privacy leakage of algorithms and shows that a lower PLF corresponds to a lower privacy budget. Based on this observation, a novel differentially private decentralized primal-dual algorithm named DP-RECAL is proposed, which takes advantage of the operator splitting method and a relay communication mechanism to attain a lower PLF and thus reduce the overall privacy budget. To the best of our knowledge, compared with existing differentially private algorithms, DP-RECAL presents superior privacy performance and communication complexity. In addition, with uncoordinated, network-independent stepsizes, we prove the convergence of DP-RECAL for general convex problems and establish a linear convergence rate under metric subregularity. An evaluation analysis on the least squares problem and numerical experiments on real-world datasets verify our theoretical results and demonstrate that DP-RECAL can defend against some classical gradient leakage attacks.
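As a minimal, hedged sketch of the generic idea only (the abstract does not give the DP-RECAL update rule), the Python below shows Gaussian-mechanism perturbation of a shared update, plus a toy round structure in which a single agent transmits per round, loosely mimicking why a relay-style scheme can keep the leakage frequency low. All names and parameters are hypothetical.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a local update and add Gaussian noise before it is communicated.

    Generic Gaussian-mechanism sketch, not the actual DP-RECAL update rule;
    clip_norm and noise_std are illustrative parameters.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

# Toy round structure: only one agent transmits (and hence leaks) per round,
# loosely illustrating the intuition behind a low Privacy Leakage Frequency.
states = [np.random.randn(5) for _ in range(4)]
for t in range(8):
    sender = t % len(states)                 # a single active sender per round
    msg = privatize_update(states[sender])
    for j, x in enumerate(states):
        if j != sender:
            states[j] = 0.5 * x + 0.5 * msg  # simple gossip-style mixing
```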
Related papers
- Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications.
Current methods, such as those based on differentially private gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility.
We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
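A minimal sketch of the robust statistics mentioned above (median and trimmed mean), assuming nothing about the paper's actual privatization step:

```python
import numpy as np

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after discarding the smallest and largest `trim_fraction` of values.

    Illustrative robust-statistics aggregator; how the paper privatizes it is
    not described in the summary above.
    """
    values = np.sort(np.asarray(values, dtype=float))
    k = int(len(values) * trim_fraction)
    kept = values[k:len(values) - k] if len(values) > 2 * k else values
    return float(kept.mean())

user_stats = [0.9, 1.1, 1.0, 1.2, 25.0]   # one heavily outlying user statistic
print(np.median(user_stats), trimmed_mean(user_stats, trim_fraction=0.2))
```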
arXiv Detail & Related papers (2025-02-13T02:05:45Z) - Federated PCA and Estimation for Spiked Covariance Matrices: Optimal Rates and Efficient Algorithm [19.673557166734977]
Federated Learning (FL) has gained significant recent attention in machine learning for its enhanced privacy and data security.
This paper investigates federated PCA and estimation for spiked covariance matrices under distributed differential privacy constraints.
We establish minimax rates of convergence, with a key finding that the central server's optimal rate is the harmonic mean of the local clients' minimax rates.
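The harmonic-mean statement can be written out explicitly; the LaTeX below is only a literal transcription of the sentence above, with assumed symbols (local minimax rates r_i and central rate r_c), not notation from the paper:

```latex
% r_i: minimax rate of local client i (i = 1, ..., m); r_c: central server's rate.
\[
  r_c \;=\; \left( \frac{1}{m} \sum_{i=1}^{m} \frac{1}{r_i} \right)^{-1}
\]
```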
arXiv Detail & Related papers (2024-11-23T21:57:50Z) - TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
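A minimal sketch of the two ingredients named above, a ternary compressor and a majority vote; the stochastic quantization rule here and the omitted f-DP noise calibration are assumptions, not the paper's construction:

```python
import numpy as np

def ternary_compress(grad, rng=None):
    """Map each coordinate to {-1, 0, +1}, keeping it with probability
    proportional to its magnitude. Illustrative stochastic ternary quantizer;
    not the actual TernaryVote compressor."""
    rng = np.random.default_rng() if rng is None else rng
    scale = np.max(np.abs(grad)) + 1e-12
    keep = rng.random(grad.shape) < np.abs(grad) / scale
    return np.where(keep, np.sign(grad), 0.0)

def majority_vote(messages):
    """Coordinate-wise sign of the summed ternary votes across workers."""
    return np.sign(np.sum(messages, axis=0))

worker_grads = [np.random.randn(6) for _ in range(5)]
votes = [ternary_compress(g) for g in worker_grads]
print(majority_vote(votes))   # aggregated update direction
```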
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications [9.045332526072828]
This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
The proposed solution is further analyzed in terms of errors in two relevant applications.
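For context, a minimal sketch of a standard two-point zeroth-order gradient estimator, the kind of gradient-free oracle such methods rely on; it is not the paper's specific scheme:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x along a random direction.

    A standard estimator shown only to illustrate the zeroth-order setting."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

f = lambda x: float(np.sum(x ** 2))   # toy objective, true gradient is 2x
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(f, x))              # noisy estimate of [2.0, -4.0, 1.0]
```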
arXiv Detail & Related papers (2023-06-09T03:51:45Z) - Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
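A minimal sketch of the parameter-distortion idea, assuming a Gaussian perturbation with a per-coordinate noise level; the framework's actual choice of distortion per parameter, client, and round is not reproduced here:

```python
import numpy as np

def distort_parameters(params, noise_std, rng=None):
    """Add zero-mean Gaussian noise to model parameters before sharing them.

    Purely illustrative: the Gaussian choice and the per-coordinate noise_std
    used here are assumptions, not the paper's mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    return params + rng.normal(0.0, noise_std, size=params.shape)

params = np.random.randn(4)
per_coord_std = np.array([0.01, 0.05, 0.01, 0.2])  # stronger protection where needed
shared = distort_parameters(params, per_coord_std)
```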
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Over-the-Air Federated Averaging with Limited Power and Privacy Budgets [49.04036552090802]
This paper studies a private over-the-air federated averaging (DP-OTA-FedAvg) system with a limited sum power budget.
We formulate an analytical problem that minimizes the optimality gap of DP-OTA-FedAvg by designing the aggregation coefficient subject to the privacy constraints.
arXiv Detail & Related papers (2023-05-05T13:56:40Z) - Balancing Privacy and Performance for Private Federated Learning Algorithms [4.681076651230371]
Federated learning (FL) is a distributed machine learning framework where multiple clients collaborate to train a model without exposing their private data.
FL algorithms frequently employ a differential privacy mechanism that introduces noise into each client's model updates before sharing.
We show that an optimal balance exists between the number of local steps and communication rounds, one that maximizes the convergence performance within a given privacy budget.
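A toy calculation of that trade-off under simple linear privacy composition; every number below is a hypothetical placeholder, not a value from the paper:

```python
# With a fixed privacy budget, more local steps per round means fewer
# (noised) communication rounds, and vice versa.
total_epsilon = 8.0                  # overall privacy budget (illustrative)
epsilon_per_round = 0.5              # privacy cost charged per communication round
rounds = int(total_epsilon / epsilon_per_round)       # 16 rounds affordable

total_local_steps = 3200             # fixed computation budget across training
local_steps_per_round = total_local_steps // rounds   # 200 local steps per round
print(rounds, local_steps_per_round)
```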
arXiv Detail & Related papers (2023-04-11T10:42:11Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
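For context, a minimal sketch of the standard DP-SGD recipe (per-example clipping, averaging, Gaussian noise); hyperparameters are illustrative and the paper's low-noise analysis is not reproduced here:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_std=1.1,
                lr=0.1, rng=None):
    """One standard DP-SGD step: clip per-example gradients, average, add
    Gaussian noise scaled to the clipping norm, then update."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.mean(clipped, axis=0)
                  + rng.normal(0.0, noise_std * clip_norm / len(clipped),
                               size=params.shape))
    return params - lr * noisy_mean

params = np.zeros(3)
batch_grads = [np.random.randn(3) for _ in range(8)]
params = dp_sgd_step(params, batch_grads)
```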
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
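A minimal sketch of random sparsification combined with gradient perturbation, as described above; the calibration of the sparsity level and noise scale to a privacy budget is assumed, not taken from the paper:

```python
import numpy as np

def sparsify_and_perturb(grad, keep_fraction=0.1, noise_std=0.05, rng=None):
    """Randomly keep a fraction of coordinates, zero out the rest, and add
    Gaussian noise to the kept ones. Illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(grad.shape) < keep_fraction
    noise = rng.normal(0.0, noise_std, size=grad.shape)
    return np.where(mask, grad + noise, 0.0)

g = np.random.randn(10)
print(sparsify_and_perturb(g))   # sparse, perturbed update to share
```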
arXiv Detail & Related papers (2020-08-01T20:22:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.