Bring Your Own Algorithm for Optimal Differentially Private Stochastic
Minimax Optimization
- URL: http://arxiv.org/abs/2206.00363v1
- Date: Wed, 1 Jun 2022 10:03:20 GMT
- Title: Bring Your Own Algorithm for Optimal Differentially Private Stochastic
Minimax Optimization
- Authors: Liang Zhang, Kiran Koshy Thekumparampil, Sewoong Oh, Niao He
- Abstract summary: holy grail of these settings is to guarantee the optimal trade-off between the privacy and the excess population loss.
We provide a general framework for solving differentially private minimax optimization (DP-SMO) problems.
Our framework is inspired by the recently proposed Phased-ERM method [20] for nonsmooth differentially private stochastic convex optimization (DP-SCO).
- Score: 44.52870407321633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study differentially private (DP) algorithms for smooth stochastic minimax
optimization, with stochastic minimization as a byproduct. The holy grail of
these settings is to guarantee the optimal trade-off between privacy and
the excess population loss, using an algorithm with linear time complexity in
the number of training samples. We provide a general framework for solving
differentially private stochastic minimax optimization (DP-SMO) problems, which
enables the practitioners to bring their own base optimization algorithm and
use it as a black-box to obtain the near-optimal privacy-loss trade-off. Our
framework is inspired by the recently proposed Phased-ERM method [20] for
nonsmooth differentially private stochastic convex optimization (DP-SCO), which
exploits the stability of the empirical risk minimization (ERM) for the privacy
guarantee. The flexibility of our approach enables us to sidestep the
requirement that the base algorithm have bounded sensitivity, and
allows the use of sophisticated variance-reduced accelerated methods to achieve
near-linear time complexity. To the best of our knowledge, these are the first
linear-time optimal algorithms, up to logarithmic factors, for smooth DP-SMO
when the objective is (strongly-)convex-(strongly-)concave. Additionally, based
on our flexible framework, we derive a new family of near-linear time
algorithms for smooth DP-SCO with optimal privacy-loss trade-offs for a wider
range of smoothness parameters compared to previous algorithms.
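The abstract describes a phased, ERM-based framework in which an arbitrary base optimizer is plugged in as a black box and the privacy guarantee follows from the stability of regularized ERM. The sketch below is only a rough illustration of that structure under several assumptions, not the paper's actual algorithm: the phase schedule, regularization strengths, and noise calibration are placeholders, the loss is assumed 1-Lipschitz, and the per-phase Gaussian noise ignores privacy composition across phases.

```python
# Illustrative sketch of a phased "bring your own solver" scheme with Gaussian
# output perturbation. Placeholders throughout; not the paper's algorithm.
import numpy as np

def phased_dp_erm(samples, base_solver, dim, epsilon, delta, num_phases=5, seed=0):
    """samples: array-like of n training points; base_solver(data, center, reg)
    returns an approximate minimizer of the regularized empirical risk
    (1/|data|) * sum(loss) + (reg/2) * ||w - center||^2 (hypothetical interface)."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    w = np.zeros(dim)                              # current released iterate
    start = 0
    for k in range(num_phases):
        # geometrically growing phases; every sample is used in exactly one phase
        n_k = min(n - start, max(1, n // 2 ** (num_phases - k)))
        if n_k == 0:                               # out of samples
            break
        data_k = samples[start:start + n_k]
        start += n_k
        reg_k = 1.0 / np.sqrt(n_k)                 # placeholder regularization strength
        w_hat = base_solver(data_k, center=w, reg=reg_k)   # black-box ERM solve
        # Stability of regularized ERM: for a 1-Lipschitz loss and a reg_k-strongly
        # convex objective, swapping one sample moves the minimizer by <= 2/(reg_k*n_k).
        sensitivity_k = 2.0 / (reg_k * n_k)
        # Gaussian-mechanism noise for this phase (budget splitting across phases omitted)
        sigma_k = sensitivity_k * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        w = w_hat + rng.normal(0.0, sigma_k, size=dim)
    return w
```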
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions, which leads to sparse approximations of the feasible set.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving differentially private convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Private Stochastic Convex Optimization: Efficient Algorithms for Non-smooth Objectives [28.99826590351627]
We propose an algorithm based on noisy mirror descent, which achieves a first-order oracle complexity that scales inversely with the privacy parameter in the regime when the privacy parameter is proportional to the number of samples.
arXiv Detail & Related papers (2020-02-22T03:03:43Z)
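The last related entry mentions noisy mirror descent for non-smooth private optimization. For orientation only, here is a generic sketch of noisy mirror descent with an entropic mirror map over the probability simplex; it is not the cited paper's algorithm, and the clipping threshold, step size, and noise scale are placeholder values rather than calibrated privacy parameters.

```python
# Generic noisy mirror descent sketch (illustrative only): a clipped average
# gradient is perturbed with Gaussian noise, then an entropic mirror-descent
# step keeps the iterate on the probability simplex.
import numpy as np

def noisy_mirror_descent(grad_fn, data, dim, steps=100, lr=0.1,
                         clip=1.0, noise_std=0.5, seed=0):
    """grad_fn(w, data) returns a (sub)gradient of the empirical risk at w."""
    rng = np.random.default_rng(seed)
    w = np.full(dim, 1.0 / dim)                 # start at the simplex center
    for _ in range(steps):
        g = grad_fn(w, data)                    # (sub)gradient of empirical risk
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # bound sensitivity
        g_noisy = g + rng.normal(0.0, noise_std, size=dim)     # Gaussian noise
        w = w * np.exp(-lr * g_noisy)           # entropic mirror-descent step
        w /= w.sum()                            # renormalize onto the simplex
    return w
```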
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.