Private Convex Optimization in General Norms
- URL: http://arxiv.org/abs/2207.08347v1
- Date: Mon, 18 Jul 2022 02:02:22 GMT
- Title: Private Convex Optimization in General Norms
- Authors: Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian
- Abstract summary: We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\normx{\cdot}$.
We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization).
Our framework is the first to apply to private convex optimization in general normed spaces, and directly recovers non-private SCO rates achieved by mirror descent.
- Score: 23.166642097170545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new framework for differentially private optimization of convex
functions which are Lipschitz in an arbitrary norm $\normx{\cdot}$. Our
algorithms are based on a regularized exponential mechanism which samples from
the density $\propto \exp(-k(F+\mu r))$ where $F$ is the empirical loss and $r$
is a regularizer which is strongly convex with respect to $\normx{\cdot}$,
generalizing a recent work of \cite{GLL22} to non-Euclidean settings. We show
that this mechanism satisfies Gaussian differential privacy and solves both
DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex
optimization), by using localization tools from convex geometry. Our framework
is the first to apply to private convex optimization in general normed spaces,
and directly recovers non-private SCO rates achieved by mirror descent, as the
privacy parameter $\eps \to \infty$. As applications, for Lipschitz
optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first
optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained
by the recent works \cite{AsiFKT21, BassilyGN21} by at least a logarithmic
factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are
complemented with polynomial-time samplers whose query complexity we explicitly
bound.
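The sampling step in the abstract can be illustrated with a toy 1-D Metropolis sampler for a density $\propto \exp(-k(F + \mu r))$. Everything below (the loss, the regularizer, and all parameter values) is a hypothetical stand-in; this is a sketch of the idea, not the paper's polynomial-time sampler.

```python
import math
import random

def regularized_exp_mechanism_sample(F, r, k=50.0, mu=1.0,
                                     n_steps=5000, step=0.2, seed=0):
    """Toy 1-D Metropolis sampler for a density proportional to
    exp(-k * (F(x) + mu * r(x))): a sketch of the regularized
    exponential mechanism, not the paper's sampler."""
    rng = random.Random(seed)
    log_dens = lambda z: -k * (F(z) + mu * r(z))
    x = 0.0
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        # Metropolis accept/reject on the log-density difference.
        if math.log(rng.random() + 1e-300) < log_dens(prop) - log_dens(x):
            x = prop
    return x

# Hypothetical 1-Lipschitz empirical loss and a 1-strongly-convex regularizer.
F = lambda x: abs(x - 1.0)
r = lambda x: 0.5 * x * x

sample = regularized_exp_mechanism_sample(F, r)
```

With a large inverse temperature $k$, samples concentrate near the minimizer of $F + \mu r$ (here $x = 1$), mirroring how the mechanism trades privacy for utility.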
Related papers
- Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions [19.008521454738425]
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound.
We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G \cdot \frac{1}{\sqrt{n}} + G_k \cdot (\frac{\sqrt{d}}{n\epsilon})^{1 - 1/k}$.
arXiv Detail & Related papers (2024-06-04T21:26:29Z) - Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
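As a concrete special case (my own illustration, not taken from the paper above): when the norm ball $K$ is the $\ell_1$ ball, the $K$-norm density $\propto \exp(-\epsilon \|x\|_1 / \Delta)$ factorizes over coordinates, so the mechanism reduces to adding independent Laplace noise of scale $\Delta/\epsilon$ per coordinate.

```python
import math
import random

def k_norm_mechanism_l1(statistic, eps, sensitivity, seed=0):
    """Toy K-norm mechanism for K = the l1 ball: the density
    proportional to exp(-eps * ||x||_1 / sensitivity) factorizes,
    so each coordinate gets independent Laplace(sensitivity/eps) noise."""
    rng = random.Random(seed)
    scale = sensitivity / eps
    noisy = []
    for s in statistic:
        u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
        # Inverse-CDF sample from a centered Laplace distribution.
        z = -scale * math.copysign(math.log(max(1 - 2 * abs(u), 1e-12)), u)
        noisy.append(s + z)
    return noisy

released = k_norm_mechanism_l1([10.0, 20.0, 30.0], eps=1.0, sensitivity=1.0)
```

For a general sensitivity body $K$ the mechanism instead samples from the norm tailored to that body, which is where the constructions in the paper come in.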
arXiv Detail & Related papers (2023-09-27T17:09:36Z) - DIFF2: Differential Private Optimization via Gradient Differences for
Nonconvex Distributed Learning [58.79085525115987]
In previous work, the best known utility bound is $\widetilde{O}(d^{2/3}/(n\varepsilon_{\mathrm{DP}})^{4/3})$.
We propose a new differentially private framework called DIFF2 (DIFFerential private via DIFFs) that constructs a differentially private gradient estimator from gradient differences.
DIFF2 with a global descent step achieves a utility of $\widetilde{O}(d^{2/3}/(n\varepsilon_{\mathrm{DP}})^{4/3})$.
arXiv Detail & Related papers (2023-02-08T05:19:01Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Sharper Utility Bounds for Differentially Private Models [20.246768861331276]
We give the first $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$ high-probability excess population risk bound for differentially private algorithms.
New proposed algorithm m-NGP improves the performance of the differentially private model over real datasets.
arXiv Detail & Related papers (2022-04-22T07:03:13Z) - Private Convex Optimization via Exponential Mechanism [16.867534746193833]
We show that modifying the exponential mechanism by adding an $\ell_2^2$ regularizer to $F(x)$ recovers both the known optimal empirical risk and population loss under $(\epsilon,\delta)$-DP.
We also show how to implement this mechanism using $\widetilde{O}(n \min(d, n))$ queries to $f_i(x)$ for DP-SCO, where $n$ is the number of samples and $d$ is the ambient dimension.
arXiv Detail & Related papers (2022-03-01T06:51:03Z) - A first-order primal-dual method with adaptivity to local smoothness [64.62056765216386]
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth.
We propose an adaptive version of the Condat-Vu algorithm, which alternates between primal gradient steps and dual steps.
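A fixed-step (non-adaptive) sketch of the Condat-Vu iteration for this saddle-point problem, run on a hypothetical 1-D instance with $f(x) = x^2/2$, $g^*(y) = y^2/2$, and $A = 1$; the paper's adaptive step-size rule is deliberately omitted here.

```python
def condat_vu(grad_f, A, prox_g_conj, x0, y0, tau, sigma, iters=500):
    """Fixed-step sketch of the Condat-Vu primal-dual scheme for
    min_x max_y f(x) + <Ax, y> - g*(y). The paper's contribution is an
    adaptive step-size variant, which this toy version does not implement."""
    x, y = x0, y0
    for _ in range(iters):
        x_new = x - tau * (grad_f(x) + A * y)                    # primal gradient step
        y = prox_g_conj(y + sigma * A * (2 * x_new - x), sigma)  # dual prox step
        x = x_new
    return x, y

# Toy 1-D instance: f(x) = x^2/2, g*(y) = y^2/2, A = 1; saddle point at (0, 0).
x_star, y_star = condat_vu(
    grad_f=lambda x: x,
    A=1.0,
    prox_g_conj=lambda v, s: v / (1 + s),
    x0=2.0, y0=-1.0, tau=0.3, sigma=0.3,
)
```

With these step sizes the iterates contract toward the saddle point $(0, 0)$; the adaptive variant chooses $\tau$ and $\sigma$ from observed local smoothness instead of fixing them in advance.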
arXiv Detail & Related papers (2021-10-28T14:19:30Z) - Differentially Private Stochastic Optimization: New Results in Convex
and Non-Convex Settings [15.122617358656539]
We present differentially private algorithms for convex and non-convex stochastic optimization.
For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs).
arXiv Detail & Related papers (2021-07-12T17:06:08Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Non-Euclidean Differentially Private Stochastic Convex Optimization [15.302167005107135]
We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
arXiv Detail & Related papers (2021-03-01T19:48:44Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.