A closed form scale bound for the $(\epsilon, \delta)$-differentially
private Gaussian Mechanism valid for all privacy regimes
- URL: http://arxiv.org/abs/2012.10523v2
- Date: Thu, 21 Jan 2021 14:01:55 GMT
- Title: A closed form scale bound for the $(\epsilon, \delta)$-differentially
private Gaussian Mechanism valid for all privacy regimes
- Authors: Staal A. Vinterbo
- Abstract summary: We present a similar closed form bound $\sigma \geq \Delta (\epsilon\sqrt{2})^{-1} \left(\sqrt{az+\epsilon} + s\sqrt{az}\right)$, valid for all $\epsilon > 0$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The standard closed form lower bound on $\sigma$ for providing $(\epsilon,
\delta)$-differential privacy by adding zero mean Gaussian noise with variance
$\sigma^2$ is $\sigma > \Delta\sqrt{2}\,\epsilon^{-1}\sqrt{\log\left(\frac{5}{4}\delta^{-1}\right)}$ for $\epsilon \in (0,1)$. We present a similar closed
form bound $\sigma \geq \Delta (\epsilon\sqrt{2})^{-1} \left(\sqrt{az+\epsilon}
+ s\sqrt{az}\right)$ for $z=-\log(4\delta(1-\delta))$ and $(a,s)=(1,1)$ if
$\delta \leq 1/2$ and $(a,s)=(\pi/4,-1)$ otherwise. Our bound is valid for all
$\epsilon > 0$ and is always lower (better). We also present a sufficient
condition for $(\epsilon, \delta)$-differential privacy when adding noise
distributed according to even and log-concave densities supported everywhere.
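As a quick numerical illustration of the two closed forms in the abstract, here is a minimal Python sketch. The function names and the default sensitivity $\Delta = 1$ are illustrative choices, and natural logarithms are assumed throughout; this is a sketch of the stated formulas, not the paper's reference implementation.

```python
import math

def sigma_standard(eps, delta, Delta=1.0):
    """Standard bound: sigma > Delta * sqrt(2) * eps^{-1} * sqrt(log(5/(4*delta))),
    stated for eps in (0, 1)."""
    return Delta * math.sqrt(2.0) / eps * math.sqrt(math.log(5.0 / (4.0 * delta)))

def sigma_closed_form(eps, delta, Delta=1.0):
    """Bound from this paper: sigma >= Delta * (eps*sqrt(2))^{-1}
    * (sqrt(a*z + eps) + s*sqrt(a*z)), where z = -log(4*delta*(1 - delta))
    and (a, s) = (1, 1) if delta <= 1/2, else (pi/4, -1). Valid for all eps > 0."""
    z = -math.log(4.0 * delta * (1.0 - delta))
    a, s = (1.0, 1.0) if delta <= 0.5 else (math.pi / 4.0, -1.0)
    return Delta / (eps * math.sqrt(2.0)) * (math.sqrt(a * z + eps) + s * math.sqrt(a * z))

# Compare the bounds where both are defined (eps in (0, 1)); per the
# abstract, the new bound should always be the lower (better) of the two.
for eps in (0.1, 0.5, 0.9):
    for delta in (1e-6, 1e-3, 0.4):
        print(f"eps={eps} delta={delta}: "
              f"standard={sigma_standard(eps, delta):.3f} "
              f"new={sigma_closed_form(eps, delta):.3f}")
```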
Related papers
- Sparsifying Suprema of Gaussian Processes [6.638504164134713]
We show that there is an $O_\varepsilon(1)$-size subset $S \subseteq T$ and a set of real values $\{c_s\}_{s \in S}$.
We also use our sparsification result for suprema of centered Gaussian processes to give a sparsification lemma for convex sets of bounded geometric width.
arXiv Detail & Related papers (2024-11-22T01:43:58Z) - Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist on finite-sum minimization, dubbed continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ FOs that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z) - Coresets for Multiple $\ell_p$ Regression [47.790126028106734]
We construct coresets of size $\tilde{O}(\varepsilon^{-2}d)$ for $p < 2$ and $\tilde{O}(\varepsilon^{-p}d^{p/2})$ for $p > 2$.
For $1 < p < 2$, every matrix has a subset of $\tilde{O}(\varepsilon^{-1}k)$ rows which spans a $(1+\varepsilon)$-approximately optimal $k$-dimensional subspace for $\ell_p$ subspace approximation.
arXiv Detail & Related papers (2024-06-04T15:50:42Z) - Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity [22.615156512223763]
We propose the Stochastic Variance-reduced Optimistic Gradient Sliding (SVOGS) method, which takes advantage of the finite-sum structure in the objective.
We prove communication rounds of $\mathcal{O}(\delta D^2/\varepsilon)$, communication complexity of $\mathcal{O}(n+\sqrt{n}\,\delta D^2/\varepsilon)$, and local gradient calls of $\tilde{\mathcal{O}}\left((n+\sqrt{n}\,\delta+L)D^2/\varepsilon\right)$.
arXiv Detail & Related papers (2024-05-25T08:34:49Z) - $\ell_p$-Regression in the Arbitrary Partition Model of Communication [59.89387020011663]
We consider the randomized communication complexity of the distributed $\ell_p$-regression problem in the coordinator model.
For $p = 2$, i.e., least squares regression, we give the first optimal bound of $\tilde{\Theta}(sd^2 + sd/\epsilon)$ bits.
For $p \in (1,2)$, we obtain an $\tilde{O}(sd^2/\epsilon + sd/\mathrm{poly}(\epsilon))$ upper bound.
arXiv Detail & Related papers (2023-07-11T08:51:53Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Learning low-degree functions from a logarithmic number of random
queries [77.34726150561087]
We prove that for any integer $n \in \mathbb{N}$, $d \in \{1,\ldots,n\}$ and any $\varepsilon,\delta \in (0,1)$, a bounded function $f:\{-1,1\}^n \to [-1,1]$ of degree at most $d$ can be learned.
arXiv Detail & Related papers (2021-09-21T13:19:04Z) - Sharper bounds for online learning of smooth functions of a single
variable [0.0]
We show that $\mathrm{opt}_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$.
arXiv Detail & Related papers (2021-05-30T23:06:21Z) - Mathematical comparison of classical and quantum mechanisms in
optimization under local differential privacy [0.0]
We show that $\mathrm{EC}_n(\varepsilon) \neq \mathrm{CQ}_n(\varepsilon)$ for every $n \ge 3$, and estimate the difference between $\mathrm{EC}_n(\varepsilon)$ and $\mathrm{CQ}_n(\varepsilon)$ in a certain manner.
arXiv Detail & Related papers (2020-11-19T17:05:11Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with oracles, and achieves complexity bounds of $\tilde{O}\left((n^2+n\sqrt{L/\mu})\log(1/\epsilon)\right)$ and $O(n\sqrt{L/\epsilon})$, for $\mu>0$ and $\mu=0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.