Mirror Descent Algorithms with Nearly Dimension-Independent Rates for
Differentially-Private Stochastic Saddle-Point Problems
- URL: http://arxiv.org/abs/2403.02912v1
- Date: Tue, 5 Mar 2024 12:28:00 GMT
- Title: Mirror Descent Algorithms with Nearly Dimension-Independent Rates for
Differentially-Private Stochastic Saddle-Point Problems
- Authors: Tomás González and Cristóbal Guzmán and Courtney Paquette
- Abstract summary: We propose $(\varepsilon, \delta)$-DP mirror descent algorithms for stochastic saddle-point problems in the polyhedral setting that attain a rate of $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{2/5}$ on the expected duality gap.
We also show a bound of $\log(d)/\sqrt{n} + \log(d)/[n\varepsilon]^{1/2}$ on the duality gap that holds with constant success probability.
- Score: 6.431793114484429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of differentially-private (DP) stochastic
(convex-concave) saddle-points in the polyhedral setting. We propose
$(\varepsilon, \delta)$-DP algorithms based on stochastic mirror descent that
attain nearly dimension-independent convergence rates for the expected duality
gap, a type of guarantee that was known before only for bilinear objectives.
For convex-concave and first-order-smooth stochastic objectives, our algorithms
attain a rate of $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{1/3}$,
where $d$ is the dimension of the problem and $n$ the dataset size. Under an
additional second-order-smoothness assumption, we improve the rate on the
expected gap to $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{2/5}$.
Under this additional assumption, we also show, by using bias-reduced gradient
estimators, that the duality gap is bounded by $\log(d)/\sqrt{n} +
\log(d)/[n\varepsilon]^{1/2}$ with constant success probability. This result
provides evidence of the near-optimality of the approach. Finally, we show that
combining our methods with acceleration techniques from online learning leads
to the first algorithm for DP Stochastic Convex Optimization in the polyhedral
setting that is not based on Frank-Wolfe methods. For convex and
first-order-smooth stochastic objectives, our algorithms attain an excess risk
of $\sqrt{\log(d)/n} + \log(d)^{7/10}/[n\varepsilon]^{2/5}$, and when
additionally assuming second-order-smoothness, we improve the rate to
$\sqrt{\log(d)/n} + \log(d)/\sqrt{n\varepsilon}$. Instrumental to all of these
results are various extensions of the classical Maurey Sparsification Lemma,
which may be of independent interest.
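The template behind these rates is stochastic mirror descent with the entropy mirror map over the probability simplex, whose geometry is what yields the $\log(d)$ (rather than polynomial-in-$d$) dependence. The sketch below is illustrative only: the function name and parameters are assumptions, and the Laplace noise is a generic stand-in for a DP mechanism, not the paper's algorithm or privacy analysis.

```python
import numpy as np

def dp_entropic_mirror_descent(grad_oracle, d, n_steps, lr, noise_scale, rng=None):
    """Minimal sketch of entropic mirror descent (multiplicative weights) on the
    probability simplex, with noise added to each stochastic gradient as a
    placeholder for a DP mechanism. Illustrative only, not the paper's method."""
    rng = rng or np.random.default_rng(0)
    x = np.full(d, 1.0 / d)          # uniform start (minimizer of negative entropy)
    avg = np.zeros(d)
    for _ in range(n_steps):
        g = grad_oracle(x) + noise_scale * rng.laplace(size=d)
        x = x * np.exp(-lr * g)      # mirror step under the entropy mirror map
        x /= x.sum()                 # Bregman projection back onto the simplex
        avg += x
    return avg / n_steps             # averaged iterate

# usage: minimize <c, x> over the simplex; mass concentrates on argmin of c
c = np.array([3.0, 1.0, 2.0])
x_bar = dp_entropic_mirror_descent(lambda x: c, d=3, n_steps=500, lr=0.1, noise_scale=0.01)
```

The multiplicative update keeps iterates strictly inside the simplex, so no explicit projection step beyond renormalization is needed; this is the standard reason entropy-based mirror descent is preferred to Euclidean projected gradient in $\ell_1$/polyhedral geometry.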
Related papers
- Differentially Private Bilevel Optimization [4.07926531936425]
We present differentially private (DP) algorithms for bilevel optimization.
These are the first algorithms for this task that can provide any desired privacy guarantee.
Our analysis covers both constrained and unconstrained problems.
arXiv Detail & Related papers (2024-09-29T21:52:38Z) - Differential Private Stochastic Optimization with Heavy-tailed Data: Towards Optimal Rates [15.27596975662702]
We explore algorithms achieving optimal rates of DP optimization with heavy-tailed gradients.
Our results match the minimax lower bound of Kamath et al. (2022), indicating that the theoretical limit of convex optimization under DP is achievable.
arXiv Detail & Related papers (2024-08-19T11:07:05Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Sharper Convergence Guarantees for Asynchronous SGD for Distributed and
Federated Learning [77.22019100456595]
We analyze a training algorithm for distributed workers with varying communication frequencies.
In this work, we obtain a tighter convergence rate for asynchronous SGD.
We also show that the heterogeneity term in the rate is affected by the average delay within each worker.
arXiv Detail & Related papers (2022-06-16T17:10:57Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon, \delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
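As a toy illustration of the determinant objective in the last entry, a brute-force search over size-$k$ subsets (a uniform matroid, the simplest special case) can be written as follows. The function name and the exhaustive search are illustrative assumptions; the paper studies general matroid constraints with efficient algorithms, not enumeration.

```python
import numpy as np
from itertools import combinations

def max_det_subset(vectors, k):
    """Exhaustively find a size-k subset S maximizing det(sum_{i in S} v_i v_i^T).
    Toy sketch for a uniform matroid; not the paper's algorithm."""
    best_val, best_S = -np.inf, None
    for S in combinations(range(len(vectors)), k):
        M = sum(np.outer(vectors[i], vectors[i]) for i in S)  # sum of rank-1 outer products
        val = np.linalg.det(M)
        if val > best_val:
            best_val, best_S = val, S
    return best_S, best_val

# usage: among three vectors in R^2, pick the pair with largest determinant
vs = [np.array([2.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
S, val = max_det_subset(vs, 2)
```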
This list is automatically generated from the titles and abstracts of the papers in this site.