Towards Plausible Differentially Private ADMM Based Distributed Machine
Learning
- URL: http://arxiv.org/abs/2008.04500v1
- Date: Tue, 11 Aug 2020 03:40:55 GMT
- Title: Towards Plausible Differentially Private ADMM Based Distributed Machine
Learning
- Authors: Jiahao Ding and Jingyi Wang and Guannan Liang and Jinbo Bi and Miao
Pan
- Abstract summary: We propose a novel Plausible differentially Private ADMM algorithm (PP-ADMM) and an improved variant (IPP-ADMM).
Under the same privacy guarantee, the proposed algorithms are superior to the state of the art in terms of model accuracy and convergence rate.
- Score: 27.730535587906168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Alternating Direction Method of Multipliers (ADMM) and its distributed
version have been widely used in machine learning. In the iterations of ADMM,
model updates using local private data and model exchanges among agents impose
critical privacy concerns. Despite some pioneering works to relieve such
concerns, differentially private ADMM still confronts many research challenges.
For example, the guarantee of differential privacy (DP) relies on the premise
that the optimality of each local problem can be perfectly attained in each
ADMM iteration, which may never happen in practice. As a result, the model
trained by DP ADMM may have low prediction accuracy. In this paper, we address
these concerns by proposing two novel algorithms: Plausible differentially
Private ADMM (PP-ADMM) and an improved variant, IPP-ADMM. In PP-ADMM, each agent approximately
solves a perturbed optimization problem that is formulated from its local
private data in an iteration, and then perturbs the approximate solution with
Gaussian noise to provide the DP guarantee. To further improve the model
accuracy and convergence, the improved version IPP-ADMM adopts the sparse vector
technique (SVT) to determine if an agent should update its neighbors with the
current perturbed solution. The agent computes the difference between the
current solution and that of the previous iteration; if the difference exceeds
a threshold, it passes the solution to its neighbors, and otherwise the
solution is discarded. Moreover, we propose to track the total privacy loss under
the zero-concentrated DP (zCDP) and provide a generalization performance
analysis. Experiments on real-world datasets demonstrate that under the same
privacy guarantee, the proposed algorithms are superior to the state of the art
in terms of model accuracy and convergence rate.
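
To make one PP-ADMM iteration concrete, below is a minimal Python sketch, assuming a logistic-regression local subproblem, an objective perturbed by a random linear term, and a simple averaging-based consensus term. The names `sigma_obj`, `sigma_out`, `grad_steps`, and `lr` are illustrative placeholders, not the paper's calibrated values, and the exact form of the perturbed subproblem here is a hypothetical stand-in for the one in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_pp_admm_step(X, y, x_prev, neighbor_avg, dual, rho=1.0,
                       sigma_obj=0.1, sigma_out=0.1, grad_steps=20, lr=0.1):
    """One PP-ADMM iteration for a single agent (illustrative sketch).

    X, y         : this agent's private features/labels (y in {-1, +1})
    x_prev       : agent's model from the previous iteration
    neighbor_avg : average of perturbed models received from neighbors
    dual         : running dual variable of the consensus constraint
    """
    d = X.shape[1]
    # Perturb the local objective with a random linear term (a hypothetical
    # stand-in for the paper's perturbed subproblem).
    b = rng.normal(0.0, sigma_obj, size=d)

    def grad(x):
        # Gradient of the logistic loss plus the augmented-Lagrangian
        # consensus terms and the linear perturbation.
        z = y * (X @ x)
        g_loss = -(X.T @ (y / (1.0 + np.exp(z)))) / len(y)
        g_admm = dual + rho * (x - neighbor_avg)
        return g_loss + g_admm + b

    # Approximately solve the perturbed subproblem with a few gradient
    # steps; PP-ADMM explicitly tolerates this inexactness.
    x = x_prev.copy()
    for _ in range(grad_steps):
        x -= lr * grad(x)

    # Output perturbation: Gaussian noise on the approximate solution is
    # what carries the DP guarantee before the model is shared.
    return x + rng.normal(0.0, sigma_out, size=d)

# Toy usage with random "private" data (illustrative only).
X = rng.normal(size=(100, 5))
y = rng.choice([-1.0, 1.0], size=100)
x0 = np.zeros(5)
x1 = local_pp_admm_step(X, y, x0, neighbor_avg=x0, dual=np.zeros(5))
```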
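The SVT-based broadcast decision in IPP-ADMM can be sketched in the same spirit. The query (here, the norm of the model change), its sensitivity, and the budget split between threshold noise and query noise are assumptions for illustration, following a common parameterization of AboveThreshold rather than the paper's exact instantiation.

```python
import numpy as np

rng = np.random.default_rng(1)

def svt_should_broadcast(x_curr, x_last, threshold, eps1=0.5, eps2=0.5,
                         sensitivity=1.0):
    """SVT-style check used by IPP-ADMM to decide whether an agent shares
    its current perturbed solution (illustrative sketch)."""
    # Noisy threshold; in full SVT this is drawn once and reused across
    # queries until the budget is refreshed.
    noisy_threshold = threshold + rng.laplace(0.0, 2.0 * sensitivity / eps1)

    # Query: how much the solution moved since the last broadcast.
    query = np.linalg.norm(x_curr - x_last)
    noisy_query = query + rng.laplace(0.0, 4.0 * sensitivity / eps2)

    # Broadcast only if the (noisy) change is large enough; otherwise the
    # solution is discarded, saving privacy budget and communication.
    return noisy_query > noisy_threshold
```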
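For the zCDP accounting mentioned in the abstract, the standard facts suffice for a rough sketch: the Gaussian mechanism with L2-sensitivity Delta and noise sigma satisfies (Delta^2 / 2 sigma^2)-zCDP, zCDP composes additively, and rho-zCDP implies (rho + 2 sqrt(rho ln(1/delta)), delta)-DP (Bun and Steinke, 2016). The sketch below assumes each iteration's release is such a Gaussian mechanism; the paper's accountant may be tighter.

```python
import math

def gaussian_zcdp(l2_sensitivity, sigma):
    """rho-zCDP of the Gaussian mechanism: rho = Delta^2 / (2 sigma^2)."""
    return l2_sensitivity ** 2 / (2.0 * sigma ** 2)

def compose(rhos):
    """zCDP composes additively across iterations."""
    return sum(rhos)

def zcdp_to_dp(rho, delta):
    """rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

# Example: 30 ADMM iterations, each releasing a Gaussian-perturbed model.
rho_total = compose([gaussian_zcdp(1.0, 5.0)] * 30)
print(zcdp_to_dp(rho_total, delta=1e-5))  # total epsilon at delta = 1e-5
```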
Related papers
- AAA: an Adaptive Mechanism for Locally Differential Private Mean Estimation [42.95927712062214]
Local differential privacy (LDP) is a strong privacy standard that has been adopted by popular software systems.
We propose the advanced adaptive additive (AAA) mechanism, a distribution-aware approach that targets average utility.
We provide rigorous privacy proofs, utility analyses, and extensive experiments comparing AAA with state-of-the-art mechanisms.
arXiv Detail & Related papers (2024-04-02T04:22:07Z)
- Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
arXiv Detail & Related papers (2023-06-24T16:13:28Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Multi-Message Shuffled Privacy in Federated Learning [2.6778110563115542]
We study differentially private distributed optimization under communication constraints.
A server running SGD for optimization aggregates the client-side local gradients for model updates using distributed mean estimation (DME).
We develop a communication-efficient private DME, using the recently developed multi-message shuffled (MMS) privacy framework.
arXiv Detail & Related papers (2023-02-22T05:23:52Z)
- Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of noise and achieves an $\tilde{O}(d\sqrt{\sum_{k=1}^{K}\sigma_k^2} + d)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Differentially Private ADMM Algorithms for Machine Learning [38.648113004535155]
We study efficient differentially private alternating direction methods of multipliers (ADMM) via gradient perturbation.
We propose the first differentially private ADMM (DP-ADMM) algorithm with a performance guarantee of $(\epsilon,\delta)$-differential privacy.
arXiv Detail & Related papers (2020-10-31T01:37:24Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Differentially Private ADMM for Convex Distributed Learning: Improved Accuracy via Multi-Step Approximation [10.742065340992525]
Alternating Direction Method of Multipliers (ADMM) is a popular computation for distributed learning.
When the training data is sensitive, the exchanged iterates raise serious privacy concerns.
We propose a new differentially private distributed ADMM with improved accuracy for a wide range of convex learning problems.
arXiv Detail & Related papers (2020-05-16T07:17:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.