Quantization enabled Privacy Protection in Decentralized Stochastic
Optimization
- URL: http://arxiv.org/abs/2208.04845v1
- Date: Sun, 7 Aug 2022 15:17:23 GMT
- Title: Quantization enabled Privacy Protection in Decentralized Stochastic
Optimization
- Authors: Yongqiang Wang, Tamer Basar
- Abstract summary: Decentralized optimization can be used in areas as diverse as machine learning, control, and sensor networks.
Privacy protection has emerged as a crucial need in the implementation of decentralized optimization.
We propose an algorithm that is able to guarantee provable convergence accuracy even in the presence of aggressive quantization errors.
- Score: 34.24521534464185
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: By enabling multiple agents to cooperatively solve a global optimization
problem in the absence of a central coordinator, decentralized stochastic
optimization is gaining increasing attention in areas as diverse as machine
learning, control, and sensor networks. Since the associated data usually
contain sensitive information, such as user locations and personal identities,
privacy protection has emerged as a crucial need in the implementation of
decentralized stochastic optimization. In this paper, we propose a
decentralized stochastic optimization algorithm that is able to guarantee
provable convergence accuracy even in the presence of aggressive quantization
errors that are proportional to the amplitude of quantization inputs. The
result applies to both convex and non-convex objective functions, and enables
us to exploit aggressive quantization schemes to obfuscate shared information,
and hence enables privacy protection without losing provable optimization
accuracy. In fact, by using a stochastic ternary quantization scheme, which
quantizes any value to three numerical levels, we achieve quantization-based
rigorous differential privacy in decentralized stochastic optimization, which
has not been reported before. In combination with the presented quantization
scheme, the proposed algorithm ensures, for the first time, rigorous
differential privacy in decentralized stochastic optimization without losing
provable convergence accuracy. Simulation results for a distributed estimation
problem as well as numerical experiments for decentralized learning on a
benchmark machine learning dataset confirm the effectiveness of the proposed
approach.
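To make the abstract's key mechanism concrete, below is a minimal sketch of an unbiased stochastic ternary quantizer, assuming levels {-delta, 0, +delta} with delta set to the largest input magnitude; the function name and the scaling rule are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Minimal sketch of a stochastic ternary quantizer (illustrative; the
# paper's exact construction may differ). Each entry of x is mapped to
# one of three levels {-delta, 0, +delta}, the output is unbiased, and
# the quantization error is proportional to the amplitude of the input.
import numpy as np

def stochastic_ternary_quantize(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Map each entry of x to {-delta, 0, +delta} so that E[Q(x)] = x."""
    delta = np.max(np.abs(x))  # assumed scaling rule: largest input magnitude
    if delta == 0.0:
        return np.zeros_like(x)
    # Entry x_i becomes delta * sign(x_i) with probability |x_i| / delta
    # and 0 otherwise; hence E[Q(x_i)] = x_i (unbiased), while the error
    # |Q(x_i) - x_i| can be as large as delta, i.e., proportional to the
    # amplitude of the quantizer input.
    fire = rng.random(x.shape) < np.abs(x) / delta
    return np.where(fire, delta * np.sign(x), 0.0)

rng = np.random.default_rng(0)
x = np.array([0.3, -1.2, 0.05, 0.9])
print(stochastic_ternary_quantize(x, rng))  # entries drawn from {-1.2, 0.0, 1.2}
# Averaging many draws recovers x, illustrating unbiasedness:
print(np.mean([stochastic_ternary_quantize(x, rng) for _ in range(20000)], axis=0))
```

Because every transmitted value reveals only one of three coarse levels, the shared information is heavily obfuscated; the paper's contribution is showing that a suitably designed algorithm still retains provable convergence accuracy under such aggressive, amplitude-proportional quantization errors, and that the quantizer can be made rigorously differentially private.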
Related papers
- Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems [8.419845742978985]
This paper addresses the challenge of preserving privacy in Federated Learning (FL) within centralized systems.
We devise methods that ensure Differential Privacy (DP) while maintaining optimal convergence rates for homogeneous and heterogeneous data distributions.
arXiv Detail & Related papers (2024-07-17T08:19:58Z) - Quantization Avoids Saddle Points in Distributed Optimization [1.579622195923387]
Distributed nonconvex optimization underpins key functionalities of numerous distributed systems.
The aim of this paper is to prove that quantization can enable distributed optimization to effectively escape saddle points and converge to a second-order stationary point.
With an easily adjustable quantization granularity, the approach allows a user to aggressively reduce the communication overhead.
arXiv Detail & Related papers (2024-03-15T15:58:20Z) - Adaptive Differentially Quantized Subspace Perturbation (ADQSP): A Unified Framework for Privacy-Preserving Distributed Average Consensus [6.364764301218972]
We propose a general approach named adaptive differentially quantized subspace perturbation (ADQSP).
We show that by varying a single quantization parameter, the proposed method can range between SMPC-type performance and DP-type performance.
Our results show the potential of exploiting traditional distributed signal processing tools for providing cryptographic guarantees.
arXiv Detail & Related papers (2023-12-13T07:52:16Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Decentralized Nonconvex Optimization with Guaranteed Privacy and
Accuracy [34.24521534464185]
Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data.
We propose an algorithm that achieves both rigorous privacy protection and guaranteed convergence accuracy.
The algorithm is efficient in both communication and computation.
arXiv Detail & Related papers (2022-12-14T22:36:13Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data often contain sensitive information, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to a stationary point yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows the inference of the private data on which they were computed.
Perturbations chosen independently at every agent obscure these estimates but result in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the network's aggregate learning dynamics (see the zero-sum sketch after this list).
arXiv Detail & Related papers (2020-10-23T10:35:35Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
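As flagged in the Graph-Homomorphic Perturbations entry above, here is a zero-sum toy sketch of the nullspace idea, assuming perturbations are drawn so that they cancel across agents and leave the network average untouched; the function name and the mean-subtraction projection are illustrative assumptions, not the paper's exact graph-homomorphic construction.

```python
# Toy sketch of nullspace-style perturbations (illustrative): per-agent
# noise is projected so that it sums to zero over the network, masking
# individual shared estimates while leaving the network average intact.
import numpy as np

def zero_sum_perturbations(num_agents: int, dim: int, scale: float,
                           rng: np.random.Generator) -> np.ndarray:
    """Return a (num_agents, dim) noise matrix whose per-dimension sums are zero."""
    noise = scale * rng.standard_normal((num_agents, dim))
    # Subtracting the per-dimension mean projects the noise onto the
    # zero-sum subspace, the "nullspace" of the averaging operation.
    return noise - noise.mean(axis=0)

rng = np.random.default_rng(1)
estimates = rng.standard_normal((5, 3))        # one local estimate per agent
perturbed = estimates + zero_sum_perturbations(5, 3, scale=10.0, rng=rng)
print(np.linalg.norm(perturbed - estimates))                        # individual estimates heavily masked
print(np.allclose(perturbed.mean(axis=0), estimates.mean(axis=0)))  # True: the network average is preserved
```

The design choice this illustrates: because the perturbations cancel in the aggregate, privacy-motivated noise need not degrade the quantity the network actually learns from, unlike noise chosen independently at every agent.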