Decentralized Nonconvex Optimization with Guaranteed Privacy and
Accuracy
- URL: http://arxiv.org/abs/2212.07534v1
- Date: Wed, 14 Dec 2022 22:36:13 GMT
- Title: Decentralized Nonconvex Optimization with Guaranteed Privacy and
Accuracy
- Authors: Yongqiang Wang, Tamer Basar
- Abstract summary: Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data.
We propose an algorithm that enables both privacy protection and saddle/maximum avoidance.
The algorithm is efficient in both communication and computation.
- Score: 34.24521534464185
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Privacy protection and nonconvexity are two challenging problems in
decentralized optimization and learning involving sensitive data. Despite some
recent advances addressing each of the two problems separately, no results have
been reported that have theoretical guarantees on both privacy protection and
saddle/maximum avoidance in decentralized nonconvex optimization. We propose a
new algorithm for decentralized nonconvex optimization that can enable both
rigorous differential privacy and saddle/maximum avoiding performance. The new
algorithm allows the incorporation of persistent additive noise to enable
rigorous differential privacy for data samples, gradients, and intermediate
optimization variables without losing provable convergence, and thus
circumventing the dilemma of trading accuracy for privacy in differential
privacy design. More interestingly, the algorithm is theoretically proven to be
able to efficiently guarantee accuracy by avoiding convergence to local
maxima and saddle points, which has not been reported before in the literature
on decentralized nonconvex optimization. The algorithm is efficient in both
communication (it only shares one variable in each iteration) and computation
(it is encryption-free), and hence is promising for large-scale nonconvex
optimization and learning involving high-dimensional optimization parameters.
Numerical experiments for both a decentralized estimation problem and an
Independent Component Analysis (ICA) problem confirm the effectiveness of the
proposed approach.
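The abstract does not spell out the update rule, but the idea it describes is pairing persistent additive noise with carefully diminishing stepsizes and inter-agent coupling weights, so the injected noise averages out rather than destroying convergence. A minimal Python sketch of one such iteration, where the schedules, the Laplace noise scale, and all names are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

def decentralized_dp_step(x, grads, W, k, alpha0=0.1, gamma0=1.0, noise_scale=0.5):
    """One iteration of noise-injected decentralized gradient descent (a sketch).

    x:     (n_agents, dim) local iterates
    grads: (n_agents, dim) local gradients
    W:     (n_agents, n_agents) doubly stochastic mixing matrix
    """
    alpha_k = alpha0 / (k + 1)          # diminishing stepsize
    gamma_k = gamma0 / (k + 1) ** 0.6   # diminishing inter-agent coupling weight
    noise = np.random.laplace(scale=noise_scale, size=x.shape)  # persistent DP noise
    x_shared = x + noise                # each agent shares a single noisy variable
    # consensus on noisy neighbor values, then a local gradient step
    return x + gamma_k * (W @ x_shared - x) - alpha_k * grads

# Toy run: 4 agents minimizing f(x) = (||x||^2 - 1)^2, which has a local maximum at 0.
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])
x = 0.1 * np.random.randn(4, 2)
for k in range(200):
    grads = 4 * (np.sum(x**2, axis=1, keepdims=True) - 1.0) * x
    x = decentralized_dp_step(x, grads, W, k)
```

Because the coupling weight shrinks over iterations, the persistent noise is progressively attenuated in the consensus term, which is one plausible way to avoid trading accuracy for privacy.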
Related papers
- Quantization Avoids Saddle Points in Distributed Optimization [1.579622195923387]
Distributed nonconvex optimization underpins key functionalities of numerous distributed systems.
The aim of this paper is to prove that quantization can enable effective escape from saddle points and guarantee convergence to a second-order stationary point.
With an easily adjustable quantization granularity, the approach lets the user aggressively reduce the communication overhead.
arXiv Detail & Related papers (2024-03-15T15:58:20Z)
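The entry above does not specify the quantizer, so here is a minimal sketch of a standard unbiased stochastic quantizer; the grid step is the user-adjustable knob trading communication bits against quantization error, and nothing below is specific to the paper's scheme:

```python
import numpy as np

def stochastic_quantize(v, step=0.05):
    """Round each coordinate of v up or down to a grid of spacing `step`,
    choosing up with probability proportional to the remainder so that
    E[quantized] = v (unbiasedness)."""
    low = np.floor(v / step) * step          # nearest grid point below
    p_up = (v - low) / step                  # in [0, 1): chance of rounding up
    up = np.random.random(v.shape) < p_up
    return low + step * up
```

A coarser `step` means fewer distinct levels to transmit but larger zero-mean rounding noise, the kind of randomness that saddle-point-escape arguments typically exploit.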
- Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
arXiv Detail & Related papers (2023-06-24T16:13:28Z)
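For context on the zCDP notion cited above, the standard Gaussian mechanism gives rho-zCDP when the noise is calibrated as sigma = Delta / sqrt(2 * rho) for sensitivity Delta. A hedged per-round sketch (the clipping threshold and the accounting are illustrative assumptions, not the paper's analysis):

```python
import numpy as np

def zcdp_gaussian_grad(grad, clip=1.0, rho=0.5):
    """Clip a (sub)gradient to L2 norm <= clip, then add Gaussian noise so
    that releasing it satisfies rho-zCDP for a single query: the Gaussian
    mechanism with sensitivity Delta and scale sigma gives
    rho = Delta**2 / (2 * sigma**2)."""
    g = grad * min(1.0, clip / max(np.linalg.norm(grad), 1e-12))
    sigma = clip / np.sqrt(2.0 * rho)
    return g + np.random.normal(0.0, sigma, size=g.shape)
```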
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
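The entry above turns on distorting model parameters with a trade-off that can differ per parameter, per client, and per round. A toy coordinate-wise sketch (the Laplace scaling rule and the `budgets` vector are assumptions for illustration, not the framework's optimized mechanism):

```python
import numpy as np

def distort_parameters(theta, sensitivity, budgets):
    """Add coordinate-wise Laplace noise: a smaller budget for a coordinate
    yields proportionally more distortion there, giving each parameter its
    own utility-privacy trade-off."""
    scale = sensitivity / np.maximum(budgets, 1e-12)
    return theta + np.random.laplace(scale=scale, size=theta.shape)
```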
- Can Decentralized Stochastic Minimax Optimization Algorithms Converge Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its applications in a wide range of machine learning problems.
This paper develops two novel decentralized minimax optimization algorithms for the finite-sum nonconvex-nonconcave problem.
arXiv Detail & Related papers (2023-04-24T02:19:39Z)
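As a generic template for the setting above (not the paper's two algorithms), one decentralized gradient descent-ascent step combines neighbor averaging with a descent step on the minimization variable and an ascent step on the maximization variable:

```python
import numpy as np

def decentralized_gda_step(x, y, gx, gy, W, eta_x=0.05, eta_y=0.05):
    """x, y: (n_agents, dim) primal/dual iterates; gx, gy: their local
    gradients; W: doubly stochastic mixing matrix. Each agent mixes with
    neighbors, then descends in x and ascends in y."""
    x_next = W @ x - eta_x * gx   # consensus + descent
    y_next = W @ y + eta_y * gy   # consensus + ascent
    return x_next, y_next
```

Linear convergence for finite-sum nonconvex-nonconcave problems, the question the title poses, requires substantially more structure than this plain template provides.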
- Quantization enabled Privacy Protection in Decentralized Stochastic Optimization [34.24521534464185]
Decentralized optimization can be used in areas as diverse as machine learning, control, and sensor networks.
Privacy protection has emerged as a crucial need in the implementation of decentralized optimization.
We propose an algorithm that is able to guarantee provable convergence accuracy even in the presence of aggressive quantization errors.
arXiv Detail & Related papers (2022-08-07T15:17:23Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z)
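The entry above replaces the GP sample path with a kernel approximation plus random perturbations. A loose sketch of that idea using ridge regression over finite features (the feature map, the noise level `sigma_dp`, and its calibration to a formal privacy budget are all assumptions; the paper's construction and regret analysis differ):

```python
import numpy as np

def private_feature_weights(Phi, rewards, lam=1.0, sigma_dp=1.0):
    """Phi: (n_obs, n_features) feature map of queried points; rewards:
    (n_obs,) observations. Solve ridge regression for the surrogate
    posterior mean, then perturb the weights so only the noisy surrogate,
    never the raw sample path, drives future acquisitions."""
    d = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(d)
    w = np.linalg.solve(A, Phi.T @ rewards)   # ridge-regression weights
    return w + np.random.normal(0.0, sigma_dp, size=d)
```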
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by the recent increased interest in the analysis of optimization algorithms for non-convex problems arising in deep networks and other data-science applications, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- New Oracle-Efficient Algorithms for Private Synthetic Data Release [52.33506193761153]
We present three new algorithms for constructing differentially private synthetic data.
The algorithms satisfy differential privacy even in the worst case.
Compared to the state-of-the-art High-Dimensional Matrix Mechanism (McKenna et al., 2018), our algorithms provide better accuracy in the large-workload regime.
arXiv Detail & Related papers (2020-07-10T15:46:05Z)
- Bilevel Optimization for Differentially Private Optimization in Energy Systems [53.806512366696275]
This paper studies how to apply differential privacy to constrained optimization problems whose inputs are sensitive.
The paper shows that, under a natural assumption, a bilevel model can be solved efficiently for large-scale nonlinear optimization problems.
arXiv Detail & Related papers (2020-01-26T20:15:28Z)
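One common way to realize the bilevel idea above is input perturbation followed by a lower-level "restoration" step that recovers feasibility, so the released solution is both private (by post-processing) and operationally valid. A crude sketch, where the penalty loop stands in for a real constrained solver and all names are illustrative:

```python
import numpy as np

def restore_feasibility(noisy_input, A, b, lr=0.05, iters=200):
    """Given an already-noised vector, find a nearby point satisfying the
    linear operating constraints A @ x <= b by minimizing
    0.5*||x - noisy_input||**2 + 0.5*||max(A @ x - b, 0)||**2."""
    x = noisy_input.copy()
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)            # constraint violations
        x -= lr * ((x - noisy_input) + A.T @ viol)   # pull to input + penalty
    return x
```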