Differentially Private Stochastic Convex Optimization in (Non)-Euclidean
Space Revisited
- URL: http://arxiv.org/abs/2303.18047v1
- Date: Fri, 31 Mar 2023 13:29:27 GMT
- Title: Differentially Private Stochastic Convex Optimization in (Non)-Euclidean
Space Revisited
- Authors: Jinyan Su and Changhong Zhao and Di Wang
- Abstract summary: We revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) in Euclidean and general $\ell_p^d$ spaces.
For both convex and strongly convex loss functions, we propose methods whose outputs achieve (expected) excess population risks that depend only on the Gaussian width of the constraint set rather than the dimension of the space.
We also show the bound for strongly convex functions is optimal up to a logarithmic factor.
- Score: 6.339471679859413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we revisit the problem of Differentially Private Stochastic
Convex Optimization (DP-SCO) in Euclidean and general $\ell_p^d$ spaces.
Specifically, we focus on three settings that are still far from well
understood: (1) DP-SCO over a constrained and bounded (convex) set in Euclidean
space; (2) unconstrained DP-SCO in $\ell_p^d$ space; (3) DP-SCO with
heavy-tailed data over a constrained and bounded set in $\ell_p^d$ space. For
problem (1), for both convex and strongly convex loss functions, we propose
methods whose outputs could achieve (expected) excess population risks that are
only dependent on the Gaussian width of the constraint set rather than the
dimension of the space. Moreover, we also show the bound for strongly convex
functions is optimal up to a logarithmic factor. For problems (2) and (3), we
propose several novel algorithms and provide the first theoretical results for
both cases when $1<p<2$ and $2\leq p\leq \infty$.
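To make the Gaussian-width dependence concrete, here is a brief aside (not taken from the paper): the standard definition of the Gaussian width of a set $\mathcal{C} \subseteq \mathbb{R}^d$, together with two textbook bounds showing why it can be far smaller than the dimension.

    % Gaussian width of a set C in R^d (standard definition, stated for context only):
    \[
      w(\mathcal{C}) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_d)}
      \Big[ \sup_{x \in \mathcal{C}} \langle g, x \rangle \Big].
    \]
    % Two standard examples (up to an absolute constant c > 0):
    %   unit l_2 ball:  w(B_2^d) <= sqrt(d)          -- grows with the dimension
    %   unit l_1 ball:  w(B_1^d) <= c * sqrt(log d)  -- nearly dimension-free
    \[
      w(B_2^d) \;\le\; \sqrt{d},
      \qquad
      w(B_1^d) \;\le\; c\,\sqrt{\log d}.
    \]

Under this definition, an excess-risk bound that scales with $w(\mathcal{C})$ rather than with $d$ can be exponentially smaller for constraint sets such as the $\ell_1$ ball.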
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) gradient estimators for proximal gradient algorithms.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - Cubic regularized subspace Newton for non-convex optimization [3.481985817302898]
We analyze the coordinate second-order method SSCN, which can be interpreted as applying cubic regularization in random subspaces.
We demonstrate substantial speed-ups compared to conventional first-order methods.
arXiv Detail & Related papers (2024-06-24T14:20:02Z) - Nearly Optimal Regret for Decentralized Online Convex Optimization [53.433398074919]
Decentralized online convex optimization (D-OCO) aims to minimize a sequence of global loss functions using only local computations and communications.
We develop novel D-OCO algorithms that can respectively reduce the regret bounds for convex and strongly convex functions.
Our algorithms are nearly optimal in terms of $T$, $n$, and $\rho$.
arXiv Detail & Related papers (2024-02-14T13:44:16Z) - Krylov Cubic Regularized Newton: A Subspace Second-Order Method with
Dimension-Free Convergence Rate [83.3933097134767]
We introduce a novel subspace cubic regularized Newton method that achieves a dimension-independent global convergence rate of $O\left(\frac{1}{mk}+\frac{1}{k^2}\right)$.
Our method converges faster than existing random subspace methods, especially for high-dimensional problems.
arXiv Detail & Related papers (2024-01-05T20:24:18Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, which minimizes a twice differentiable function subject to conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Private Convex Optimization in General Norms [23.166642097170545]
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$.
We show that this mechanism satisfies differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization).
Our framework is the first to apply to private convex optimization in general normed spaces, and directly recovers non-private SCO rates achieved by mirror descent.
arXiv Detail & Related papers (2022-07-18T02:02:22Z) - Towards Painless Policy Optimization for Constrained MDPs [46.12526917024248]
We study policy optimization in an infinite horizon, $\gamma$-discounted constrained Markov decision process (CMDP).
Our objective is to return a policy that achieves large expected reward with a small constraint violation.
We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms.
arXiv Detail & Related papers (2022-04-11T15:08:09Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + \left(\log(d)/(\varepsilon n)\right)^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Non-Euclidean Differentially Private Stochastic Convex Optimization [15.302167005107135]
We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes; a minimal noisy-SGD sketch appears after this list.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
arXiv Detail & Related papers (2021-03-01T19:48:44Z) - On Differentially Private Stochastic Convex Optimization with
Heavy-tailed Data [18.351352536791453]
We consider the problem of designing Differentially Private (DP) algorithms for Stochastic Convex Optimization (SCO) on heavy-tailed data.
The irregularity of such data violates some key assumptions used in almost all existing DP-SCO and DP-ERM methods.
We show that our algorithms can effectively deal with the challenges caused by data irregularity.
arXiv Detail & Related papers (2020-10-21T15:45:27Z)
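As a companion to the noisy-SGD entry above ("Non-Euclidean Differentially Private Stochastic Convex Optimization"), the following is a minimal sketch of noisy (Gaussian-mechanism) SGD, the basic primitive behind many DP-SCO algorithms. The function names, clipping threshold, and noise calibration below are illustrative assumptions, not the exact algorithms analyzed in any of the papers listed here.

    import numpy as np

    def noisy_sgd(grad_fn, data, x0, steps, lr, clip_norm, noise_multiplier, rng=None):
        """Minimal noisy-SGD sketch for DP-SCO (illustrative; not the papers' exact algorithms).

        grad_fn(x, sample) -> per-sample gradient (np.ndarray)
        clip_norm          -> l2 clipping threshold C (bounds per-step sensitivity)
        noise_multiplier   -> sigma; Gaussian noise std dev is sigma * C
        """
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float).copy()
        n = len(data)
        for _ in range(steps):
            sample = data[rng.integers(n)]                     # draw one example uniformly at random
            g = grad_fn(x, sample)
            g = g / max(1.0, np.linalg.norm(g) / clip_norm)    # clip the per-sample gradient
            g = g + rng.normal(0.0, noise_multiplier * clip_norm, size=g.shape)  # Gaussian mechanism
            x = x - lr * g                                     # plain gradient step (projection omitted)
        return x

    # Usage sketch: private mean estimation with the squared loss f(x; a) = 0.5 * ||x - a||^2,
    # whose per-sample gradient is x - a.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.normal(1.0, 0.5, size=(1000, 5))
        x_hat = noisy_sgd(lambda x, a: x - a, data, np.zeros(5),
                          steps=2000, lr=0.05, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
        print(x_hat)

When optimizing over a bounded constraint set, a projection onto the set (or, in the non-Euclidean case, a mirror-descent-style update) would replace the plain gradient step shown here.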