Non-Euclidean Differentially Private Stochastic Convex Optimization
- URL: http://arxiv.org/abs/2103.01278v1
- Date: Mon, 1 Mar 2021 19:48:44 GMT
- Title: Non-Euclidean Differentially Private Stochastic Convex Optimization
- Authors: Raef Bassily, Cristóbal Guzmán, Anupama Nandi
- Abstract summary: We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
- Score: 15.302167005107135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentially private (DP) stochastic convex optimization (SCO) is a
fundamental problem, where the goal is to approximately minimize the population
risk with respect to a convex loss function, given a dataset of i.i.d. samples
from a distribution, while satisfying differential privacy with respect to the
dataset. Most of the existing works in the literature of private convex
optimization focus on the Euclidean (i.e., $\ell_2$) setting, where the loss is
assumed to be Lipschitz (and possibly smooth) w.r.t. the $\ell_2$ norm over a
constraint set with bounded $\ell_2$ diameter. Algorithms based on noisy
stochastic gradient descent (SGD) are known to attain the optimal excess risk
in this setting.
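As a rough point of reference for this baseline, here is a minimal sketch of noisy SGD in the $\ell_2$ setting. It is illustrative only: the step size, clipping threshold, and noise multiplier below are placeholder values, and calibrating the noise to a target $(\varepsilon,\delta)$ would require a privacy accountant, which is omitted.

```python
import numpy as np

def noisy_sgd(grad_fn, data, dim, steps=1000, lr=0.1, clip=1.0, sigma=1.0, seed=0):
    """Illustrative noisy SGD for the ell_2 setting (a sketch, not the paper's algorithm).

    grad_fn(w, x) returns the per-sample gradient of the loss at w on sample x.
    `clip` bounds the ell_2 norm of each gradient; `sigma` is the noise multiplier.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    n = len(data)
    for _ in range(steps):
        x = data[rng.integers(n)]                         # draw one sample
        g = grad_fn(w, x)
        g = g / max(1.0, np.linalg.norm(g) / clip)        # clip to ||g||_2 <= clip
        noise = rng.normal(scale=sigma * clip, size=dim)  # Gaussian-mechanism noise
        w = w - lr * (g + noise)
    return w
```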
In this work, we conduct a systematic study of DP-SCO for $\ell_p$-setups.
For $p=1$, under a standard smoothness assumption, we give a new algorithm with
nearly optimal excess risk. This result also extends to general polyhedral
norms and feasible sets. For $p\in(1, 2)$, we give two new algorithms, whose
central building block is a novel privacy mechanism, which generalizes the
Gaussian mechanism. Moreover, we establish a lower bound on the excess risk for
this range of $p$, showing a necessary dependence on $\sqrt{d}$, where $d$ is
the dimension of the space. Our lower bound implies a sudden transition of the
excess risk at $p=1$, where the dependence on $d$ changes from logarithmic to
polynomial, resolving an open question in prior work [TTZ15]. For $p\in (2,
\infty)$, noisy SGD attains optimal excess risk in the low-dimensional regime;
in particular, this proves the optimality of noisy SGD for $p=\infty$. Our work
draws upon concepts from the geometry of normed spaces, such as the notions of
regularity, uniform convexity, and uniform smoothness.
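To give a concrete sense of how the $\ell_p$ geometry enters for $p \in (1,2)$, the sketch below shows a standard (non-private) mirror-descent step with the mirror map $\psi(x) = \tfrac{1}{2}\|x\|_p^2$, a common choice for $\ell_p$ setups with $p \in (1,2]$. This is only a hedged illustration of the underlying update; the paper's private algorithms additionally perturb the gradients with a mechanism generalizing the Gaussian mechanism, whose calibration is not reproduced here.

```python
import numpy as np

def grad_half_norm_sq(x, p):
    """Gradient of psi(x) = 0.5 * ||x||_p^2 for p > 1."""
    nrm = np.linalg.norm(x, ord=p)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)

def mirror_descent_step(w, g, eta, p):
    """One unconstrained mirror-descent step in ell_p geometry, p in (1, 2].

    The gradient step is taken in the dual space and mapped back through the
    convex conjugate of psi, whose gradient has the same closed form with the
    dual exponent p* = p / (p - 1).
    """
    p_star = p / (p - 1)
    theta = grad_half_norm_sq(w, p) - eta * g   # dual-space gradient step
    return grad_half_norm_sq(theta, p_star)     # map back to the primal space
```

A private variant would add suitably calibrated noise to the gradient g before the step; choosing noise that matches the $\ell_p$ geometry is precisely the role of the generalized mechanism mentioned above.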
Related papers
- Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions [19.008521454738425]
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound.
We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\epsilon}\right)^{1-1/k}$.
arXiv Detail & Related papers (2024-06-04T21:26:29Z) - PrivSGP-VR: Differentially Private Variance-Reduced Stochastic Gradient Push with Tight Utility Bounds [9.47030623916154]
We propose a differentially private decentralized learning method (termed PrivSGP-VR) which employs gradient push with variance reduction and guarantees privacy for each node.
Our theoretical analysis shows that, under DP noise with constant variance, PrivSGP-VR achieves a sub-linear convergence rate of $\mathcal{O}(1/\sqrt{nK})$.
arXiv Detail & Related papers (2024-05-04T11:22:53Z) - Near-Optimal differentially private low-rank trace regression with guaranteed private initialization [0.0]
We study differentially private (DP) estimation of a rank-$r$ matrix $M \in \mathbb{R}^{d_1 \times d_2}$ under the trace regression model.
We also propose a differentially private algorithm for estimating $M$ based on Riemannian optimization (DP-RGrad).
It is shown that the estimator given by DP-RGrad attains the optimal convergence rate in a weaker notion of differential privacy.
arXiv Detail & Related papers (2024-03-24T03:57:21Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Differentially Private Stochastic Convex Optimization in (Non)-Euclidean
Space Revisited [6.339471679859413]
We revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) in Euclidean and general $\ell_p^d$ spaces.
For both convex and strongly convex loss functions, we give methods whose outputs achieve (expected) excess population risks that depend only on the Gaussian width of the constraint set rather than the dimension of the space.
We also show that this dependence on the Gaussian width is optimal up to a logarithmic factor.
arXiv Detail & Related papers (2023-03-31T13:29:27Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Private Convex Optimization in General Norms [23.166642097170545]
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$.
We show that this mechanism satisfies differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization).
Our framework is the first to apply to private convex optimization in general normed spaces, and directly recovers non-private SCO rates achieved by mirror descent.
arXiv Detail & Related papers (2022-07-18T02:02:22Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z)