Decentralized Stochastic Optimization with Inherent Privacy Protection
- URL: http://arxiv.org/abs/2205.03884v1
- Date: Sun, 8 May 2022 14:38:23 GMT
- Title: Decentralized Stochastic Optimization with Inherent Privacy Protection
- Authors: Yongqiang Wang and H. Vincent Poor
- Abstract summary: Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data usually contain sensitive information, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
- Score: 103.62463469366557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized stochastic optimization is the basic building block of modern
collaborative machine learning, distributed estimation and control, and
large-scale sensing. Since the involved data usually contain sensitive information
like user locations, healthcare records and financial transactions, privacy
protection has become an increasingly pressing need in the implementation of
decentralized stochastic optimization algorithms. In this paper, we propose a
decentralized stochastic gradient descent algorithm which is embedded with
inherent privacy protection for every participating agent against other
participating agents and external eavesdroppers. This proposed algorithm builds
in a dynamics based gradient-obfuscation mechanism to enable privacy protection
without compromising optimization accuracy, in marked contrast to
differential-privacy based solutions for decentralized optimization, which must
trade optimization accuracy for privacy. The dynamics-based privacy approach is
encryption-free, and hence avoids incurring heavy
communication or computation overhead, which is a common problem with
encryption based privacy solutions for decentralized stochastic optimization.
Besides rigorously characterizing the convergence performance of the proposed
decentralized stochastic gradient descent algorithm under both convex objective
functions and non-convex objective functions, we also provide rigorous
information-theoretic analysis of its strength of privacy protection.
Simulation results for a distributed estimation problem as well as numerical
experiments for decentralized learning on a benchmark machine learning dataset
confirm the effectiveness of the proposed approach.
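As an illustration of the decentralized stochastic gradient descent setting described above, here is a minimal Python sketch of vanilla decentralized SGD over a three-agent network; the mixing matrix, quadratic objectives, and all numerical values are illustrative assumptions, and the paper's dynamics-based gradient-obfuscation mechanism is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully connected three-agent network with a doubly stochastic mixing matrix.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

targets = np.array([1.0, 2.0, 3.0])  # each agent's private data point a_i
x = np.zeros(3)                      # agents' local estimates of the optimum

# Each agent i minimizes f_i(x) = 0.5 * (x - a_i)^2, so the network-wide
# optimum of sum_i f_i is the mean of the private targets.
for t in range(2000):
    lr = 1.0 / (t + 10)                                    # diminishing stepsize
    grads = (x - targets) + 0.01 * rng.standard_normal(3)  # stochastic gradients
    x = W @ x - lr * grads                                 # mix with neighbors, then step

print(np.round(x, 2))  # all agents converge near mean(targets) = 2.0
```

Privacy-wise, plain decentralized SGD of this form leaks gradient information through the exchanged states; the paper's contribution is an obfuscation of those dynamics that blocks such leakage without the accuracy loss incurred by differential-privacy noise injection.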
Related papers
- Differentially private and decentralized randomized power method [15.955127242261808]
We propose a strategy to reduce the variance of the noise introduced to achieve Differential Privacy (DP).
We adapt the method to a decentralized framework with a low computational and communication overhead, while preserving the accuracy.
We show that it is possible to use a noise scale in the decentralized setting that is similar to the one in the centralized setting.
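As a hedged sketch of the noisy power iteration that underlies differentially private power methods (the matrix, noise scale, and iteration count below are illustrative assumptions, not the paper's calibrated mechanism):

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric data matrix whose dominant eigenvector we want to estimate.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

v = rng.standard_normal(3)
v /= np.linalg.norm(v)

sigma = 0.01  # stand-in for a DP-calibrated Gaussian noise scale
for _ in range(100):
    v = A @ v + sigma * rng.standard_normal(3)  # noisy power step
    v /= np.linalg.norm(v)

# Alignment with the true dominant eigenvector (1.0 means perfectly aligned).
true_v = np.linalg.eigh(A)[1][:, -1]
print(abs(v @ true_v))
```

Reducing the variance of the injected noise, as the paper pursues, directly tightens the alignment the iteration can reach.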
arXiv Detail & Related papers (2024-11-04T09:53:03Z)
- Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems [8.419845742978985]
This paper addresses the challenge of preserving privacy in Federated Learning (FL) within centralized systems.
We devise methods that ensure Differential Privacy (DP) while maintaining optimal convergence rates for homogeneous and heterogeneous data distributions.
arXiv Detail & Related papers (2024-07-17T08:19:58Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Decentralized Nonconvex Optimization with Guaranteed Privacy and Accuracy [34.24521534464185]
Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data.
We propose an algorithm that achieves both privacy protection and guaranteed convergence accuracy.
The algorithm is efficient in both communication and computation.
arXiv Detail & Related papers (2022-12-14T22:36:13Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Quantization enabled Privacy Protection in Decentralized Stochastic Optimization [34.24521534464185]
Decentralized optimization can be used in areas as diverse as machine learning, control, and sensor networks.
Privacy protection has emerged as a crucial need in the implementation of decentralized optimization.
We propose an algorithm that is able to guarantee provable convergence accuracy even in the presence of aggressive quantization errors.
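A standard building block behind such quantization-based schemes is an unbiased (dithered) quantizer, whose errors average out under diminishing stepsizes; a minimal sketch with an illustrative step size (not the paper's actual design):

```python
import numpy as np

rng = np.random.default_rng(2)

def dithered_quantize(x, step=0.5):
    # Stochastic rounding to the grid {k * step}: unbiased, error bounded by `step`.
    return step * np.floor(np.asarray(x) / step + rng.random(np.shape(x)))

# Quantizing the same value many times recovers it in expectation.
samples = dithered_quantize(np.full(100000, 1.23))
print(samples.mean())  # close to 1.23 even though each sample is 1.0 or 1.5
```

Unbiasedness is what lets a decentralized algorithm retain provable convergence accuracy even when each transmitted message is aggressively quantized.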
arXiv Detail & Related papers (2022-08-07T15:17:23Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) is proposed to maximize the cache hit rates of devices in the MEC networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows inference of the private data on which they are based.
Existing schemes rely on perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible.
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
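The nullspace condition in the last entry can be illustrated in a few lines: perturbations that sum to zero across agents lie in the nullspace of the network-averaging map, so they can mask individual estimates without shifting the network average (the agent count and values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

estimates = np.array([1.0, 2.0, 3.0, 4.0])  # agents' private local estimates

noise = rng.standard_normal(4)
noise -= noise.mean()  # project onto the nullspace of the averaging map

perturbed = estimates + noise  # individual values are masked ...
print(perturbed.mean() - estimates.mean())  # ... but the average is untouched
```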
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.