On the Differentially Private Nature of Perturbed Gradient Descent
- URL: http://arxiv.org/abs/2101.06847v1
- Date: Mon, 18 Jan 2021 02:29:37 GMT
- Title: On the Differentially Private Nature of Perturbed Gradient Descent
- Authors: Thulasi Tholeti, Sheetal Kalyani
- Abstract summary: We consider the problem of empirical risk minimization given a database, using the gradient descent algorithm.
A perturbed gradient descent algorithm is typically employed to escape the saddle points.
- Score: 15.554148012395457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of empirical risk minimization given a
database, using the gradient descent algorithm. We note that the function to be
optimized may be non-convex, containing saddle points that impede the
convergence of the algorithm. A perturbed gradient descent algorithm is
typically employed to escape these saddle points. We show that this algorithm,
which perturbs the gradient, inherently preserves the privacy of the data. We
then employ the differential privacy framework to quantify the privacy thus
achieved. We also analyze how the privacy guarantee varies with parameters such
as the problem dimension and the distance between the databases.
Related papers
- Improved Differentially Private Regression via Gradient Boosting [38.09948758699131]
We give a new algorithm for private linear regression based on gradient boosting.
Alongside a comprehensive set of experiments, we give theoretical insights to explain the algorithm's behavior.
arXiv Detail & Related papers (2023-03-06T19:20:52Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical challenge of developing machine learning algorithms that deliver good predictive performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Private Domain Adaptation from a Public Source [48.83724068578305]
We design differentially private discrepancy-based algorithms for adaptation from a source domain with public labeled data to a target domain with unlabeled private data.
Our solutions are based on private variants of Frank-Wolfe and Mirror-Descent algorithms.
arXiv Detail & Related papers (2022-08-12T06:52:55Z) - Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z) - Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning [74.73901662374921]
The utility of a differentially private model degrades drastically when the model comprises a large number of trainable parameters.
We propose Gradient Embedding Perturbation (GEP) for training differentially private deep models with decent accuracy.
arXiv Detail & Related papers (2021-02-25T04:29:58Z) - Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent [10.409652277630132]
We model the dynamics of privacy loss in Langevin diffusion and extend it to the noisy gradient descent algorithm.
We prove that the privacy loss converges exponentially fast.
arXiv Detail & Related papers (2021-02-11T05:49:37Z) - Privacy Analysis of Online Learning Algorithms via Contraction Coefficients [5.333582981327498]
We show that differential privacy guarantees can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences.
We also show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
arXiv Detail & Related papers (2020-12-20T22:02:15Z) - Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
arXiv Detail & Related papers (2020-08-21T18:43:37Z) - Carathéodory Sampling for Stochastic Gradient Descent [79.55586575988292]
We present an approach that is inspired by classical results of Tchakaloff and Carathéodory about measure reduction.
We adaptively select the descent steps where the measure reduction is carried out.
We combine this with Block Coordinate Descent so that measure reduction can be done very cheaply.
arXiv Detail & Related papers (2020-06-02T17:52:59Z) - Privacy Amplification of Iterative Algorithms via Contraction Coefficients [3.5270468102327004]
We investigate the framework of privacy amplification by iteration, recently proposed by Feldman et al., through an information-theoretic lens.
We demonstrate that the differential privacy guarantees of iterative mappings can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences (the underlying definition is sketched after this list).
arXiv Detail & Related papers (2020-01-17T22:06:48Z)
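For reference, the contraction coefficients invoked by the two entries above are a standard information-theoretic notion; the following is a textbook-style statement of that definition, not taken from either paper:

```latex
% For a Markov kernel (channel) K and an f-divergence D_f, the
% contraction coefficient is the tightest constant in the strong
% data processing inequality:
\[
  \eta_f(K) \;=\; \sup_{P \ne Q} \frac{D_f(PK \,\|\, QK)}{D_f(P \,\|\, Q)}
  \qquad\Longrightarrow\qquad
  D_f(PK \,\|\, QK) \;\le\; \eta_f(K)\, D_f(P \,\|\, Q).
\]
% Chaining T noisy updates K_1, ..., K_T multiplies the contraction
% factors, which is what bounds the privacy loss of iterative algorithms:
\[
  D_f\!\big(P K_1 \cdots K_T \,\|\, Q K_1 \cdots K_T\big)
  \;\le\; \Big(\textstyle\prod_{t=1}^{T} \eta_f(K_t)\Big)\, D_f(P \,\|\, Q).
\]
```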
This list is automatically generated from the titles and abstracts of the papers in this site.