Differentially Private ADMM Algorithms for Machine Learning
- URL: http://arxiv.org/abs/2011.00164v1
- Date: Sat, 31 Oct 2020 01:37:24 GMT
- Title: Differentially Private ADMM Algorithms for Machine Learning
- Authors: Tao Xu, Fanhua Shang, Yuanyuan Liu, Hongying Liu, Longjie Shen, Maoguo Gong
- Abstract summary: We study efficient differentially private alternating direction methods of multipliers (ADMM) via gradient perturbation.
We propose the first differentially private ADMM (DP-ADMM) algorithm with a performance guarantee of $(\epsilon,\delta)$-differential privacy.
- Score: 38.648113004535155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study efficient differentially private alternating
direction methods of multipliers (ADMM) via gradient perturbation for many
machine learning problems. For smooth convex loss functions with (non)-smooth
regularization, we propose the first differentially private ADMM (DP-ADMM)
algorithm with performance guarantee of $(\epsilon,\delta)$-differential
privacy ($(\epsilon,\delta)$-DP). From the viewpoint of theoretical analysis,
we use the Gaussian mechanism and the conversion relationship between R\'enyi
Differential Privacy (RDP) and DP to perform a comprehensive privacy analysis
for our algorithm. Then we establish a new criterion to prove the convergence
of the proposed algorithms including DP-ADMM. We also give the utility analysis
of our DP-ADMM. Moreover, we propose an accelerated DP-ADMM (DP-AccADMM) with
Nesterov's acceleration technique. Finally, we conduct numerical
experiments on many real-world datasets to show the privacy-utility tradeoff of
the two proposed algorithms, and the comparative analysis shows that DP-AccADMM
converges faster and attains better utility than DP-ADMM when the privacy budget
$\epsilon$ is larger than a threshold.
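As background for the analysis route named in the abstract (the Gaussian mechanism combined with an RDP-to-DP conversion), the commonly used form of that conversion and of the Gaussian-mechanism RDP bound is recalled below; the paper may use a refined variant.

```latex
% Gaussian mechanism: adding N(0, sigma^2 I) noise to a query with L2-sensitivity
% Delta gives (alpha, alpha*Delta^2/(2*sigma^2))-RDP for every alpha > 1.
% RDP composes additively over T iterations and converts to (epsilon, delta)-DP:
\epsilon_\alpha = \frac{\alpha \Delta^2}{2\sigma^2}, \qquad
\epsilon = T\,\epsilon_\alpha + \frac{\log(1/\delta)}{\alpha - 1}
\quad \text{for any } \alpha > 1,\ \delta \in (0,1).
```

A minimal sketch of gradient-perturbed ADMM in the spirit of the abstract is given below for a lasso-type problem $\min_{w,z} \frac{1}{2n}\|Xw-y\|^2 + \lambda\|z\|_1$ subject to $w - z = 0$. The per-example clipping bound `clip`, noise multiplier `sigma`, step size `eta`, and the linearized x-update are illustrative assumptions, not the paper's exact DP-ADMM, and the formal $(\epsilon,\delta)$ accounting (e.g., via the RDP composition above) is omitted.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (closed-form z-update for the l1 term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dp_admm_lasso(X, y, lam=0.1, rho=1.0, eta=0.1, T=200,
                  clip=1.0, sigma=4.0, seed=0):
    """Illustrative gradient-perturbed (DP-style) ADMM sketch; see lead-in."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d); z = np.zeros(d); u = np.zeros(d)   # u: scaled dual variable
    for _ in range(T):
        # Per-example gradients of the smooth loss 0.5*(x_i^T w - y_i)^2, clipped.
        grads = (X @ w - y)[:, None] * X                 # shape (n, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grad = (grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))).mean(axis=0)
        # Gradient perturbation with Gaussian noise (sensitivity ~ clip/n per example).
        grad = grad + rng.normal(0.0, sigma * clip / n, size=d)
        # Linearized, proximal x-update of the augmented Lagrangian.
        w = (w / eta + rho * (z - u) - grad) / (rho + 1.0 / eta)
        # Exact z-update (prox of the l1 regularizer) and dual ascent; no raw data here.
        z = soft_threshold(w + u, lam / rho)
        u = u + w - z
    return z
```

In this sketch only the noisy gradient in the w-update touches the data, so it is the only step that consumes privacy budget, which is why gradient perturbation pairs naturally with the ADMM splitting.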
Related papers
- Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via a simple observation.
We obtain several improved algorithms for private optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z)
- Improved Communication-Privacy Trade-offs in $L_2$ Mean Estimation under Streaming Differential Privacy [47.997934291881414]
Existing mean estimation schemes are usually optimized for $L_\infty$ geometry and rely on random rotation or Kashin's representation to adapt to $L_2$ geometry.
We introduce a novel privacy accounting method for the sparsified Gaussian mechanism that incorporates the randomness inherent in sparsification into the DP analysis.
Unlike previous approaches, our accounting algorithm directly operates in $L_2$ geometry, yielding MSEs that quickly converge to those of the Gaussian mechanism.
arXiv Detail & Related papers (2024-05-02T03:48:47Z)
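The entry above describes a sparsified Gaussian mechanism for $L_2$ mean estimation under streaming DP. A generic sketch of such a mechanism is shown below: each client keeps a random subset of coordinates (rescaled to stay unbiased) and adds Gaussian noise only to the kept entries, so each report carries $k$ rather than $d$ values. The keep-$k$ rule, the $d/k$ rescaling, and the noise level are illustrative assumptions; the paper's contribution, a tighter privacy accounting for this family of mechanisms, is not reproduced here.

```python
import numpy as np

def sparsified_gaussian(x, k, sigma, rng):
    """Illustrative sparsified Gaussian report for L2 mean estimation.

    Keeps k of the d coordinates uniformly at random, rescaled by d/k so the
    report stays unbiased, and adds Gaussian noise only to the kept entries.
    The rescaling and noise level sigma are generic choices, not the paper's
    calibration.
    """
    d = x.shape[0]
    keep = rng.choice(d, size=k, replace=False)
    report = np.zeros(d)
    report[keep] = (d / k) * x[keep] + rng.normal(0.0, sigma, size=k)
    return report

# Toy usage: estimate the mean of n unit-norm vectors from k-sparse noisy reports.
rng = np.random.default_rng(0)
n, d, k = 1000, 64, 8
data = rng.normal(size=(n, d))
data /= np.linalg.norm(data, axis=1, keepdims=True)          # unit L2-ball inputs
reports = np.stack([sparsified_gaussian(v, k, sigma=2.0, rng=rng) for v in data])
print(np.linalg.norm(reports.mean(axis=0) - data.mean(axis=0)))  # estimation error
```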
- Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy [23.12198546384976]
Posterior sampling provides $\varepsilon$-pure differential privacy guarantees.
It does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP.
In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo.
arXiv Detail & Related papers (2023-10-23T07:54:39Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
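The DP-SGD / DP-NSGD entry above contrasts per-example clipping with per-example normalization before noise injection. A side-by-side sketch of the two update rules is given below; the regularizer `r` inside the normalization, the noise calibration, and the plain SGD step are illustrative assumptions rather than the paper's exact algorithms, and the formal privacy accounting is omitted.

```python
import numpy as np

def private_step(params, per_example_grads, mode="clip", C=1.0, r=1e-2,
                 noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD / DP-NSGD style update (illustrative sketch only).

    mode="clip":      rescale each per-example gradient g to norm at most C (DP-SGD).
    mode="normalize": rescale each g by C / (||g|| + r), where the small
                      regularizer r is an assumed hyperparameter (DP-NSGD style).
    Gaussian noise with std noise_multiplier*C/n is added to the averaged gradient;
    mapping this to a formal (epsilon, delta) budget needs a separate accountant.
    """
    rng = np.random.default_rng(rng)
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    if mode == "clip":
        scale = np.minimum(1.0, C / np.maximum(norms, 1e-12))
    else:  # "normalize"
        scale = C / (norms + r)
    avg = (per_example_grads * scale).mean(axis=0)
    noisy = avg + rng.normal(0.0, noise_multiplier * C / n, size=d)
    return params - lr * noisy
```

Because normalization makes every example contribute a gradient of roughly the same norm regardless of its scale, the noise-to-signal ratio depends less delicately on the threshold, which is consistent with the summary's claim that DP-NSGD is comparatively easier to tune.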
- On Private Online Convex Optimization: Optimal Algorithms in $\ell_p$-Geometry and High Dimensional Contextual Bandits [9.798304879986604]
We study the differentially private (DP) convex optimization problem with streaming data that are sampled from a distribution and arrive sequentially.
We also consider the continual release model, where parameters related to private information are updated and released upon each new data point, often known as online algorithms.
Our algorithm achieves in linear time the optimal excess risk when $1 < p \leq 2$ and the state-of-the-art excess risk, matching the non-private lower bounds, when $2 < p \leq \infty$.
arXiv Detail & Related papers (2022-06-16T12:09:47Z)
- Differentially Private Federated Learning via Inexact ADMM [0.0]
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks.
We develop a DP inexact alternating direction method of multipliers algorithm that solves a sequence of trust-region subproblems.
Our algorithm reduces the testing error by at most 22% compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2021-06-11T02:28:07Z)
- Towards Plausible Differentially Private ADMM Based Distributed Machine Learning [27.730535587906168]
We propose novel (Improved) Plausible differentially Private ADMM algorithms, called PP-ADMM and IPP-ADMM.
Under the same privacy guarantee, the proposed algorithms are superior to the state of the art in terms of model accuracy and convergence rate.
arXiv Detail & Related papers (2020-08-11T03:40:55Z)
- Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization [110.52708815647613]
In this paper, we propose a faster stochastic alternating direction method of multipliers (ADMM) for nonconvex optimization by using a new stochastic path-integrated differential estimator, called SPIDER-ADMM.
We prove that SPIDER-ADMM achieves a record-breaking incremental first-order oracle (IFO) complexity for finding an $\epsilon$-approximate stationary point.
Our theoretical analysis further shows that the online SPIDER-ADMM improves the IFO complexity of the best existing results.
arXiv Detail & Related papers (2020-08-04T02:59:42Z)
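The SPIDER-ADMM entry above builds on a stochastic path-integrated differential estimator (SPIDER). The sketch below shows that estimator in isolation, driving a plain gradient step rather than the paper's ADMM updates; the refresh period `q`, mini-batch size, and step size are illustrative assumptions.

```python
import numpy as np

def spider_descent(grad_i, x0, n, steps=200, q=32, batch=8, lr=0.05, seed=0):
    """Minimal SPIDER-style loop (the estimator only; no ADMM splitting here).

    grad_i(i, x) returns the gradient of the i-th component function at x.
    Every q steps the estimate v is refreshed with the full gradient; otherwise
    v_t = mean over a small batch of [grad_i(x_t) - grad_i(x_{t-1})] + v_{t-1}.
    The refresh period q, batch size, and step size lr are assumed settings.
    """
    rng = np.random.default_rng(seed)
    x, x_prev, v = x0.astype(float).copy(), x0.astype(float).copy(), None
    for t in range(steps):
        if t % q == 0:
            v = np.mean([grad_i(i, x) for i in range(n)], axis=0)   # full refresh
        else:
            idx = rng.choice(n, size=batch, replace=False)
            v = v + np.mean([grad_i(i, x) - grad_i(i, x_prev) for i in idx], axis=0)
        x_prev = x
        x = x - lr * v   # plain step; SPIDER-ADMM feeds v into the ADMM x-update instead
    return x
```

The correction term only needs a small batch per step, while its variance is controlled by how far consecutive iterates move; that trade-off is the source of the improved IFO complexity mentioned above.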
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of the proposed methods over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.