Stochastic Adaptive Line Search for Differentially Private Optimization
- URL: http://arxiv.org/abs/2008.07978v2
- Date: Thu, 27 Aug 2020 05:49:54 GMT
- Title: Stochastic Adaptive Line Search for Differentially Private Optimization
- Authors: Chen Chen, Jaewoo Lee
- Abstract summary: The performance of private gradient-based optimization algorithms is highly dependent on the choice of step size (or learning rate), which often requires a non-trivial amount of tuning.
We introduce a stochastic variant of the classic backtracking line search algorithm that adjusts the per-iteration privacy budget according to the reliability of the noisy gradient.
We show that the adaptively chosen step sizes allow the proposed algorithm to use the privacy budget efficiently and achieve competitive performance against existing private optimizers.
- Score: 6.281099620056346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of private gradient-based optimization algorithms is highly
dependent on the choice of step size (or learning rate), which often requires a
non-trivial amount of tuning. In this paper, we introduce a stochastic variant
of the classic backtracking line search algorithm that satisfies R\'enyi
differential privacy. Specifically, the proposed algorithm adaptively chooses
the step size satisfying the Armijo condition (with high probability)
using noisy gradients and function estimates. Furthermore, to improve the
probability with which the chosen step size satisfies the condition, it adjusts
per-iteration privacy budget during runtime according to the reliability of
noisy gradient. A naive implementation of the backtracking search algorithm may
end up using an unacceptably large privacy budget, as the ability to adaptively
select step sizes comes at the cost of extra function evaluations. The proposed
algorithm avoids this problem by using the sparse vector technique combined
with the recent privacy amplification lemma. We also introduce a privacy budget
adaptation strategy in which the algorithm adaptively increases the budget when
it detects that the directions of consecutive gradients are drastically
different. Extensive experiments on both convex and non-convex problems show
that the adaptively chosen step sizes allow the proposed algorithm to
efficiently use the privacy budget and show competitive performance against
existing private optimizers.
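To make the mechanism concrete, the following is a minimal, hypothetical Python sketch of a noisy backtracking (Armijo) line search together with a simple noise-scale adaptation rule keyed to the agreement between consecutive gradients. It is an illustration under simplifying assumptions, not the authors' algorithm: plain Gaussian noise stands in for the Rényi-DP mechanism, the sparse vector technique and privacy amplification are omitted, and all function names and constants are invented for the example.

```python
import numpy as np

def noisy_backtracking_step(x, loss, grad, sigma_f, sigma_g, rng,
                            alpha0=1.0, beta=0.5, c=1e-4, max_trials=20):
    """Illustrative noisy Armijo backtracking step (not the paper's exact method).

    Gaussian noise with scales sigma_g / sigma_f stands in for a privacy
    mechanism applied to the gradient and to function-value estimates.
    """
    g = grad(x) + rng.normal(0.0, sigma_g, size=x.shape)   # noisy gradient
    f0 = loss(x) + rng.normal(0.0, sigma_f)                 # noisy value at x
    alpha = alpha0
    for _ in range(max_trials):
        x_trial = x - alpha * g
        f_trial = loss(x_trial) + rng.normal(0.0, sigma_f)  # noisy trial value
        # Armijo (sufficient decrease) condition checked with noisy estimates
        if f_trial <= f0 - c * alpha * float(g @ g):
            break
        alpha *= beta                                        # backtrack: shrink the step
    return x - alpha * g, g

def adapt_noise_scale(g_prev, g_curr, sigma_g, shrink=0.8, threshold=0.0):
    """Spend more budget (less noise) when consecutive gradient directions disagree."""
    denom = np.linalg.norm(g_prev) * np.linalg.norm(g_curr) + 1e-12
    cosine = float(g_prev @ g_curr) / denom
    return sigma_g * shrink if cosine < threshold else sigma_g
```

A driver loop would alternate these two calls, shrinking sigma_g (i.e., spending more of the per-iteration budget) whenever the cosine similarity between consecutive noisy gradients falls below the threshold; the paper's actual accounting via Rényi DP, the sparse vector technique, and the privacy amplification lemma is not reproduced here.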
Related papers
- On Constructing Algorithm Portfolios in Algorithm Selection for Computationally Expensive Black-box Optimization in the Fixed-budget Setting [0.0]
This paper argues for the importance of considering the number of function evaluations used in the sampling phase when constructing algorithm portfolios.
The results show that algorithm portfolios constructed by our approach perform significantly better than those by the previous approach.
arXiv Detail & Related papers (2024-05-13T03:31:13Z) - Dynamic Privacy Allocation for Locally Differentially Private Federated
Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
arXiv Detail & Related papers (2023-08-02T13:30:33Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit the connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Bring Your Own Algorithm for Optimal Differentially Private Stochastic
Minimax Optimization [44.52870407321633]
The holy grail in these settings is to guarantee the optimal trade-off between privacy and excess population loss.
We provide a general framework for solving differentially private minimax optimization (DP-SMO) problems.
Our framework is inspired by the recently proposed Phased-ERM method [20] for nonsmooth differentially private stochastic convex optimization (DP-SCO).
arXiv Detail & Related papers (2022-06-01T10:03:20Z) - Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added (a sketch of this vanilla-noise baseline appears after this list).
arXiv Detail & Related papers (2021-10-14T15:02:20Z) - No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z) - Sequential Quadratic Optimization for Nonlinear Equality Constrained
Stochastic Optimization [10.017195276758454]
It is assumed in this setting that it is intractable to compute objective function and derivative values explicitly.
An algorithm is proposed for the deterministic setting that is modeled after a state-of-the-art line-search SQP algorithm.
The results of numerical experiments demonstrate the practical performance of our proposed techniques.
arXiv Detail & Related papers (2020-07-20T23:04:26Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^{2}/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
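Several of the private-optimization entries above (e.g., the DP-SGD and adaptive gradient perturbation papers) start from the same vanilla baseline: clip per-example gradients, average them, and add isotropic Gaussian noise. Below is a minimal, hypothetical Python sketch of that baseline step; the constants (lr, clip, sigma) are illustrative and not taken from any of the listed papers.

```python
import numpy as np

def dp_gradient_step(x, per_example_grads, lr=0.1, clip=1.0, sigma=1.0,
                     rng=np.random.default_rng(0)):
    """One vanilla (non-adaptive) DP gradient step: clip each per-example
    gradient to norm `clip`, average, and add isotropic Gaussian noise."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads), size=x.shape)
    return x - lr * (np.mean(clipped, axis=0) + noise)
```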