Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry
- URL: http://arxiv.org/abs/2103.01516v1
- Date: Tue, 2 Mar 2021 06:53:44 GMT
- Title: Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry
- Authors: Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
- Abstract summary: Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/\varepsilon n$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/\varepsilon n)^{2/3}$.
- Score: 69.24618367447101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous
in machine learning applications such as LASSO but remains poorly understood
when learning with differential privacy. We show that, up to logarithmic
factors, the optimal excess population loss of any
$(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} +
\sqrt{d}/\varepsilon n.$ The upper bound is based on a new algorithm that
combines the iterative localization approach of~\citet{FeldmanKoTa20} with a
new analysis of private regularized mirror descent. It applies to $\ell_p$
bounded domains for $p\in [1,2]$ and queries at most $n^{3/2}$ gradients,
improving over the best previously known algorithm for the $\ell_2$ case, which
needs $n^2$ gradients. Further, we show that when the loss functions satisfy
additional smoothness assumptions, the excess loss is upper bounded (up to
logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/\varepsilon n)^{2/3}.$
This bound is achieved by a new variance-reduced version of the Frank-Wolfe
algorithm that requires just a single pass over the data. We also show that the
lower bound in this case is the minimum of the two rates mentioned above.
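To make the two constructions concrete, below are two minimal Python sketches. Neither is the paper's exact algorithm: the domains, noise calibrations, step sizes, and helper names (`grad_fn`, `noisy_entropic_mirror_descent`, `private_frank_wolfe_l1`) are illustrative assumptions of this summary.

The first sketch is a simplified stand-in for the private mirror descent component: entropic (exponentiated-gradient) mirror descent over the probability simplex with Gaussian gradient perturbation. The paper's method additionally wraps such a solver in iterative localization with a regularizer toward the previous phase's output, which is omitted here.

```python
import numpy as np

def noisy_entropic_mirror_descent(grad_fn, data, d, epsilon=1.0, delta=1e-6,
                                  eta=0.1, seed=0):
    """Illustrative noisy mirror descent over the probability simplex.

    grad_fn(w, z) returns the loss gradient at point w on example z, assumed
    to have bounded l2 norm (taken as 1 here for the noise calibration).
    """
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon   # assumed per-step Gaussian scale
    w = np.full(d, 1.0 / d)                                  # uniform start on the simplex
    for z in data:                                           # one gradient query per example
        g = grad_fn(w, z) + rng.normal(scale=sigma, size=d)  # privatized gradient
        w = w * np.exp(-eta * g)                             # entropic (multiplicative) update
        w /= w.sum()                                         # Bregman projection onto the simplex
    return w
```

The second sketch corresponds to the smooth case: a single-pass noisy Frank-Wolfe over the $\ell_1$ ball, where the linear optimization oracle is privatized by adding Laplace noise before selecting a vertex. The variance reduction used in the paper is replaced here by plain minibatch gradients.

```python
import numpy as np

def private_frank_wolfe_l1(grad_fn, data, d, radius=1.0, epsilon=1.0, seed=0):
    """Illustrative single-pass noisy Frank-Wolfe over {x : ||x||_1 <= radius}.

    grad_fn(x, batch) returns the average gradient of the loss on `batch`.
    The Laplace scale below is an assumption, not the paper's calibration.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    T = max(int(np.sqrt(n)), 1)                     # number of iterations = number of batches
    batches = np.array_split(np.arange(n), T)
    x = np.zeros(d)
    for t, idx in enumerate(batches):
        g = grad_fn(x, [data[i] for i in idx])      # minibatch gradient estimate
        scale = 2.0 * radius / (epsilon * len(idx)) # assumed per-round Laplace scale
        noisy = np.abs(g) + rng.laplace(scale=scale, size=d)
        j = int(np.argmax(noisy))                   # noisy selection of the best coordinate
        v = np.zeros(d)
        v[j] = -radius * np.sign(g[j])              # vertex of the l1 ball minimizing <g, v>
        gamma = 2.0 / (t + 2.0)                     # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * v
    return x
```

For a LASSO-style instance one could pass, e.g., `grad_fn = lambda x, batch: np.mean([a * (a @ x - b) for a, b in batch], axis=0)` with `(a, b)` feature/label pairs (a hypothetical data format).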
Related papers
- Online Newton Method for Bandit Convex Optimisation [28.66596225688161]
We introduce a computationally efficient algorithm for zeroth-order bandit convex optimisation.
We prove that in the adversarial setting its regret is at most $d^{3.5}\sqrt{n}\,\mathrm{polylog}(n, d)$ with high probability, where $d$ is the dimension and $n$ is the time horizon.
In the stochastic setting the bound improves to $M d^{2}\sqrt{n}\,\mathrm{polylog}(n, d)$, where $M \in [d^{-1/2}, d^{-1/4}]$ is
arXiv Detail & Related papers (2024-06-10T17:44:11Z) - Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity [22.615156512223763]
We propose the stochastic variance-reduced optimistic gradient sliding (SVOGS) method, which takes advantage of the finite-sum structure of the objective.
We prove communication rounds of $\mathcal{O}(\delta D^2/\varepsilon)$, communication complexity of $\mathcal{O}(n+\sqrt{n}\delta D^2/\varepsilon)$, and local gradient calls of $\tilde{\mathcal{O}}(n+(\sqrt{n}\delta+L)D^2/\varepsilon)$.
arXiv Detail & Related papers (2024-05-25T08:34:49Z) - A Whole New Ball Game: A Primal Accelerated Method for Matrix Games and
Minimizing the Maximum of Smooth Functions [44.655316553524855]
We design algorithms for minimizing $\max_{i\in[n]} f_i(x)$ over a $d$-dimensional Euclidean or simplex domain.
When each $f_i$ is $1$-Lipschitz and $1$-smooth, our method computes an $\epsilon$-approximate solution.
arXiv Detail & Related papers (2023-11-17T22:07:18Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Differentially Private Stochastic Optimization: New Results in Convex
and Non-Convex Settings [15.122617358656539]
We present differentially private algorithms for stochastic optimization in convex and non-convex settings.
For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs).
arXiv Detail & Related papers (2021-07-12T17:06:08Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - $Q$-learning with Logarithmic Regret [60.24952657636464]
We prove that an optimistic $Q$-learning algorithm enjoys a $\mathcal{O}\left(\frac{SA\cdot \mathrm{poly}(H)}{\Delta_{\min}}\log(SAT)\right)$ cumulative regret bound, where $S$ is the number of states, $A$ is the number of actions, $H$ is the planning horizon, $T$ is the total number of steps, and $\Delta_{\min}$ is the minimum sub-optimality gap.
arXiv Detail & Related papers (2020-06-16T13:01:33Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)