User-level Differentially Private Stochastic Convex Optimization:
Efficient Algorithms with Optimal Rates
- URL: http://arxiv.org/abs/2311.03797v1
- Date: Tue, 7 Nov 2023 08:26:51 GMT
- Title: User-level Differentially Private Stochastic Convex Optimization:
Efficient Algorithms with Optimal Rates
- Authors: Hilal Asi, Daogao Liu
- Abstract summary: We develop new algorithms for user-level DP-SCO that obtain optimal rates for both convex and strongly convex functions in polynomial time.
Our algorithms are the first to obtain optimal rates for non-smooth functions in polynomial time.
- Score: 16.958088684785668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study differentially private stochastic convex optimization (DP-SCO) under
user-level privacy, where each user may hold multiple data items. Existing work
for user-level DP-SCO either requires super-polynomial runtime [Ghazi et al.
(2023)] or requires the number of users to grow polynomially with the
dimensionality of the problem with additional strict assumptions [Bassily et
al. (2023)]. We develop new algorithms for user-level DP-SCO that obtain
optimal rates for both convex and strongly convex functions in polynomial time
and require the number of users to grow only logarithmically in the dimension.
Moreover, our algorithms are the first to obtain optimal rates for non-smooth
functions in polynomial time. These algorithms are based on multiple-pass
DP-SGD, combined with a novel private mean estimation procedure for
concentrated data, which applies an outlier removal step before estimating the
mean of the gradients.
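The abstract's high-level recipe (multiple-pass DP-SGD whose per-step update comes from a private mean estimator with an outlier removal step) can be illustrated with a minimal Python sketch. The sketch below is an assumption-laden illustration, not the authors' algorithm: `grad_fn`, `tau`, `clip_norm`, `noise_std`, and the median-based center are hypothetical placeholders, the filtering rule is simplified, and no privacy accounting is performed.

```python
import numpy as np

def private_mean_of_user_gradients(user_grads, tau, clip_norm, noise_std, rng):
    """Illustrative 'remove outliers, then privately average' step.

    user_grads: (m, d) array with one averaged gradient per user.
    The median center, threshold tau, and Gaussian noise scale are
    placeholders; they do not reproduce the paper's estimator or its
    privacy guarantees.
    """
    center = np.median(user_grads, axis=0)            # rough (non-private) center, illustration only
    dists = np.linalg.norm(user_grads - center, axis=1)
    kept = user_grads[dists <= tau]                   # outlier removal step
    if len(kept) == 0:                                # fall back instead of failing
        kept = user_grads
    norms = np.linalg.norm(kept, axis=1, keepdims=True)
    clipped = kept * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return clipped.mean(axis=0) + rng.normal(0.0, noise_std, size=clipped.shape[1])

def multipass_dp_sgd(users, grad_fn, w0, steps, lr, tau, clip_norm, noise_std, seed=0):
    """Multiple passes of SGD driven by the noisy per-step mean above."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    for _ in range(steps):
        # one averaged gradient per user (user-level granularity)
        user_grads = np.stack([
            np.mean([grad_fn(w, x) for x in user_data], axis=0)
            for user_data in users
        ])
        w = w - lr * private_mean_of_user_gradients(user_grads, tau, clip_norm, noise_std, rng)
    return w
```

In the concentrated-gradient setting the abstract describes, most per-user averages fall close together, so a filter of this flavor mostly discards outliers before the noisy averaging; the actual thresholding and noise calibration in the paper differ from this sketch.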
Related papers
- Faster Algorithms for User-Level Private Stochastic Convex Optimization [16.59551503680919]
We study private stochastic convex optimization (SCO) under user-level differential privacy constraints.
Existing algorithms for user-level DP SCO are impractical in many large-scale machine learning scenarios.
We provide novel user-level DP algorithms with state-of-the-art excess risk and runtime guarantees.
arXiv Detail & Related papers (2024-10-24T03:02:33Z) - Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private combinatorial optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - User-Level Differential Privacy With Few Examples Per User [73.81862394073308]
We consider the example-scarce regime, where each user has only a few examples, and obtain the following results.
For approximate-DP, we give a generic transformation of any item-level DP algorithm to a user-level DP algorithm.
We present a simple technique for adapting the exponential mechanism [McSherry, Talwar FOCS 2007] to the user-level setting.
arXiv Detail & Related papers (2023-09-21T21:51:55Z) - Bring Your Own Algorithm for Optimal Differentially Private Stochastic
Minimax Optimization [44.52870407321633]
The holy grail of these settings is to guarantee the optimal trade-off between privacy and the excess population loss.
We provide a general framework for solving differentially private minimax optimization (DP-SMO) problems.
Our framework is inspired by the recently proposed Phased-ERM method [20] for non-smooth differentially private stochastic convex optimization (DP-SCO).
arXiv Detail & Related papers (2022-06-01T10:03:20Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our methods over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - SGD with shuffling: optimal rates without component convexity and large
epoch requirements [60.65928290219793]
We consider RandomShuffle (shuffle at the beginning of each epoch) and SingleShuffle (shuffle only once) variants of SGD.
We establish minimax optimal convergence rates of these algorithms up to poly-log factor gaps.
We further sharpen the tight convergence results for RandomShuffle by removing the drawbacks common to all prior work.
arXiv Detail & Related papers (2020-06-12T05:00:44Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving private convex optimization algorithms that both achieve the optimal bound on excess loss and use $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Private Stochastic Convex Optimization: Efficient Algorithms for
Non-smooth Objectives [28.99826590351627]
We propose a first-order algorithm based on noisy mirror descent for the regime where the privacy parameter is inversely proportional to the number of samples (a generic sketch of this style of update appears below).
arXiv Detail & Related papers (2020-02-22T03:03:43Z)
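For the noisy mirror descent entry above, here is a generic, textbook-style sketch, instantiated with the negative-entropy mirror map over the probability simplex (the Euclidean mirror map would instead give noisy gradient descent). The gradient oracle `grad_fn`, step size `lr`, and noise scale `noise_std` are assumed placeholders; this is not the cited paper's algorithm or its privacy calibration.

```python
import numpy as np

def noisy_mirror_descent(grad_fn, d, steps, lr, noise_std, seed=0):
    """Generic noisy mirror descent over the probability simplex
    (negative-entropy mirror map => exponentiated-gradient update).

    grad_fn(w) returns a stochastic gradient estimate; Gaussian noise of
    scale noise_std is added for illustration only.
    """
    rng = np.random.default_rng(seed)
    w = np.full(d, 1.0 / d)                                 # start at the uniform distribution
    avg = np.zeros(d)
    for t in range(steps):
        g = grad_fn(w) + rng.normal(0.0, noise_std, size=d) # noisy gradient
        w = w * np.exp(-lr * g)                             # mirror (multiplicative) step
        w /= w.sum()                                        # project back onto the simplex
        avg += (w - avg) / (t + 1)                          # running average of iterates
    return avg
```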