Sharper Utility Bounds for Differentially Private Models
- URL: http://arxiv.org/abs/2204.10536v1
- Date: Fri, 22 Apr 2022 07:03:13 GMT
- Title: Sharper Utility Bounds for Differentially Private Models
- Authors: Yilin Kang, Yong Liu, Jian Li, Weiping Wang
- Abstract summary: First $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$ high probability excess population risk bound for differentially private algorithms.
The newly proposed algorithm m-NGP improves the performance of the differentially private model on real datasets.
- Score: 20.246768861331276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, by introducing the Generalized Bernstein condition, we propose the
first $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$ high probability excess
population risk bound for differentially private algorithms under the
assumptions of $G$-Lipschitzness, $L$-smoothness, and the Polyak-{\L}ojasiewicz
condition, based on the gradient perturbation method. If we replace
$G$-Lipschitzness and $L$-smoothness by $\alpha$-H{\"o}lder smoothness (which
covers the non-smooth setting), the high probability bound becomes
$\mathcal{O}\big(n^{-\frac{\alpha}{1+2\alpha}}\big)$ w.r.t. $n$, which cannot
achieve $\mathcal{O}\left(1/n\right)$ when $\alpha\in(0,1]$, since
$\frac{\alpha}{1+2\alpha}\le\frac{1}{3}<1$ on this range. To solve this
problem, we propose a variant of gradient perturbation method,
\textbf{max$\{1,g\}$-Normalized Gradient Perturbation} (m-NGP). We further show
that, by normalization, the high probability excess population risk bound under
the $\alpha$-H{\"o}lder smoothness and Polyak-{\L}ojasiewicz assumptions can
achieve $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$, which is the first
$\mathcal{O}\left(1/n\right)$ high probability excess population risk bound
w.r.t. $n$ for differentially private algorithms under non-smooth conditions.
Moreover, we evaluate the performance of the newly proposed algorithm m-NGP;
the experimental results show that m-NGP improves the performance of the
differentially private model on real datasets, demonstrating that m-NGP
improves the utility bound and the accuracy of the DP model on real datasets
simultaneously.
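To make the update rule concrete, below is a minimal sketch of one m-NGP step in Python. It is an illustration under stated assumptions, not the paper's reference implementation: the function name `m_ngp_step`, the hyperparameters, and the noise scale `sigma` are hypothetical, and the calibration of `sigma` to a target $(\epsilon,\delta)$ budget over all iterations is omitted.

```python
import numpy as np

def m_ngp_step(theta, grad, lr, sigma, rng):
    """One max{1, g}-Normalized Gradient Perturbation update (sketch).

    The gradient is divided by max(1, ||g||_2), so the vector passed to
    the noisy update never has norm greater than 1; Gaussian noise is
    then added as in standard gradient perturbation. The noise scale
    sigma is assumed to be pre-calibrated to the target (epsilon, delta)
    privacy budget -- that calibration is omitted here.
    """
    g = grad / max(1.0, np.linalg.norm(grad))      # max{1, g}-normalization
    noise = rng.normal(0.0, sigma, size=g.shape)   # gradient perturbation
    return theta - lr * (g + noise)

# Illustrative usage: one noisy update on a least-squares gradient.
rng = np.random.default_rng(0)
theta = np.zeros(5)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
grad = X.T @ (X @ theta - y) / len(y)
theta = m_ngp_step(theta, grad, lr=0.1, sigma=0.5, rng=rng)
```

The only difference from plain gradient perturbation is the `max(1.0, ...)` denominator: gradients with norm at most 1 pass through unchanged, while larger gradients are scaled to unit norm, which is what lets the analysis replace the $G$-Lipschitz assumption with $\alpha$-H{\"o}lder smoothness.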
Related papers
- Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions [19.008521454738425]
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound.
We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G\cdot\frac{1}{\sqrt{n}} + G_k\cdot\big(\frac{\sqrt{d}}{n\epsilon}\big)^{1-\frac{1}{k}}$.
arXiv Detail & Related papers (2024-06-04T21:26:29Z) - A Fast Algorithm for Adaptive Private Mean Estimation [5.090363690988394]
We design an $(\varepsilon, \delta)$-differentially private algorithm that is adaptive to $\Sigma$.
The estimator achieves optimal rates of convergence with respect to the induced Mahalanobis norm $\|\cdot\|_{\Sigma}$.
arXiv Detail & Related papers (2023-01-17T18:44:41Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}\big(H\varepsilon^{-3}\ln\frac{1}{\delta}\big)$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Private Convex Optimization in General Norms [23.166642097170545]
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$.
We show that this mechanism satisfies differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization).
Our framework is the first to apply to private convex optimization in general normed spaces, and directly recovers non-private SCO rates achieved by mirror descent.
arXiv Detail & Related papers (2022-07-18T02:02:22Z) - Differentially Private Stochastic Optimization: New Results in Convex
and Non-Convex Settings [15.122617358656539]
We present differentially private algorithms for stochastic optimization in convex and non-convex settings.
For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs).
arXiv Detail & Related papers (2021-07-12T17:06:08Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + \big(\log(d)/(\varepsilon n)\big)^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Non-Euclidean Differentially Private Stochastic Convex Optimization [15.302167005107135]
We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
arXiv Detail & Related papers (2021-03-01T19:48:44Z) - Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2})+\epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a matroid $M$ such that $\det\big(\sum_{i \in S} v_i v_i^{\top}\big)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
arXiv Detail & Related papers (2020-02-08T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.