Adaptive Differentially Private Empirical Risk Minimization
- URL: http://arxiv.org/abs/2110.07435v1
- Date: Thu, 14 Oct 2021 15:02:20 GMT
- Title: Adaptive Differentially Private Empirical Risk Minimization
- Authors: Xiaoxia Wu and Lingxiao Wang and Irina Cristali and Quanquan Gu and
Rebecca Willett
- Abstract summary: We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
- Score: 95.04948014513226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an adaptive (stochastic) gradient perturbation method for
differentially private empirical risk minimization. At each iteration, the
random noise added to the gradient is optimally adapted to the stepsize; we
name this process adaptive differentially private (ADP) learning. Given the
same privacy budget, we prove that the ADP method considerably improves the
utility guarantee compared to the standard differentially private method in
which vanilla random noise is added. Our method is particularly useful for
gradient-based algorithms with time-varying learning rates, including variants
of AdaGrad (Duchi et al., 2011). We provide extensive numerical experiments to
demonstrate the effectiveness of the proposed adaptive differentially private
algorithm.
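The abstract's core idea, per-iteration noise whose magnitude tracks the stepsize, can be sketched as follows. This is a minimal illustration only: the `sqrt(eta)` noise schedule, the clipping step, and all parameter names are assumptions for the sketch, not the paper's exact calibration.

```python
import numpy as np

def adp_sgd(grad_fn, w0, stepsizes, clip_norm, base_sigma, rng=None):
    # Sketch of adaptive gradient perturbation: at each iteration the
    # Gaussian noise added to the clipped gradient is scaled with the
    # current stepsize. The schedule sigma_t = base_sigma * sqrt(eta_t)
    # is an illustrative assumption, not the paper's derived choice.
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float)
    for eta in stepsizes:
        g = np.asarray(grad_fn(w), dtype=float)
        norm = np.linalg.norm(g)
        if norm > clip_norm:               # bound the gradient's sensitivity
            g = g * (clip_norm / norm)
        sigma = base_sigma * np.sqrt(eta)  # noise adapted to the stepsize
        g = g + rng.normal(0.0, sigma * clip_norm, size=g.shape)
        w = w - eta * g
    return w
```

With a time-varying `stepsizes` sequence (e.g. an AdaGrad-style decaying schedule), the injected noise shrinks along with the steps instead of staying at a fixed worst-case level.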
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - DP-SGD with weight clipping [1.0878040851638]
We present a novel approach that mitigates the bias arising from traditional gradient clipping.
By leveraging a public upper bound of the Lipschitz value of the current model and its current location within the search domain, we can achieve refined noise level adjustments.
arXiv Detail & Related papers (2023-10-27T09:17:15Z) - Differentially Private Learning with Per-Sample Adaptive Clipping [8.401653565794353]
We propose a Differentially Private Per-Sample Adaptive Clipping (DP-PSAC) algorithm based on a non-monotonic adaptive weight function.
We show that DP-PSAC outperforms or matches state-of-the-art methods on multiple mainstream vision and language tasks.
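For context, the standard per-sample clipping step of DP-SGD, which adaptive-clipping methods such as DP-PSAC refine, can be sketched as below. The helper name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng=None):
    # Baseline (non-adaptive) DP-SGD step: clip each sample's gradient to
    # norm at most clip_norm, sum, add Gaussian noise calibrated to the
    # clipping threshold, and average over the batch.
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_sample_grads:
        g = np.asarray(g, dtype=float)
        scale = min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        clipped.append(scale * g)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

Adaptive variants replace the hard `min(1, C/||g||)` factor with a smoother, data-dependent weight so that small gradients are not drowned out by the fixed threshold.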
arXiv Detail & Related papers (2022-12-01T07:26:49Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Private Adaptive Gradient Methods for Convex Optimization [32.3523019355048]
We propose and analyze differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes.
We provide upper bounds on the regret of both algorithms and show that the bounds are (worst-case) optimal.
arXiv Detail & Related papers (2021-06-25T16:46:45Z) - Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement learning based ZO algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization, instead of using random sampling.
Our results show that ZO-RL can effectively reduce the variance of the ZO gradient estimates by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
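The zeroth-order gradient estimator that ZO-RL builds on can be sketched as a symmetric finite-difference average over perturbation directions; this version draws the directions at random, which is exactly the sampling step ZO-RL proposes to learn instead. Function and parameter names are illustrative.

```python
import numpy as np

def zo_gradient(f, x, n_dirs=2000, mu=1e-4, rng=None):
    # Two-point zeroth-order gradient estimator: average symmetric finite
    # differences of f along Gaussian perturbation directions u. Learning
    # the distribution the u's are drawn from (rather than sampling them
    # blindly, as here) is what reduces the estimator's variance in ZO-RL.
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    est = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        est += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return est / n_dirs
```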
arXiv Detail & Related papers (2021-04-09T14:50:59Z) - Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z) - Stochastic Adaptive Line Search for Differentially Private Optimization [6.281099620056346]
The performance of private gradient-based optimization algorithms is highly dependent on the choice of step size (or learning rate).
We introduce a variant of the classic line search algorithm that adjusts the privacy budget according to the reliability of the noisy gradients.
We show that the adaptively chosen step sizes allow the proposed algorithm to use the privacy budget efficiently and achieve competitive performance against existing private gradient-based methods.
arXiv Detail & Related papers (2020-08-18T15:18:47Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our algorithms over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.