Mathematical comparison of classical and quantum mechanisms in
optimization under local differential privacy
- URL: http://arxiv.org/abs/2011.09960v2
- Date: Sun, 19 Dec 2021 00:15:01 GMT
- Title: Mathematical comparison of classical and quantum mechanisms in
optimization under local differential privacy
- Authors: Yuuya Yoshida
- Abstract summary: We show that $\mathrm{EC}_n(\varepsilon)\not=\mathrm{CQ}_n(\varepsilon)$ for every $n\ge3$, and estimate the difference between $\mathrm{EC}_n(\varepsilon)$ and $\mathrm{CQ}_n(\varepsilon)$ in a certain manner.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Let $\varepsilon>0$. An $n$-tuple $(p_i)_{i=1}^n$ of probability vectors is
called $\varepsilon$-differentially private ($\varepsilon$-DP) if
$e^\varepsilon p_j-p_i$ has no negative entries for all $i,j=1,\ldots,n$. An
$n$-tuple $(\rho_i)_{i=1}^n$ of density matrices is called classical-quantum
$\varepsilon$-differentially private (CQ $\varepsilon$-DP) if
$e^\varepsilon\rho_j-\rho_i$ is positive semi-definite for all
$i,j=1,\ldots,n$. Denote by $\mathrm{C}_n(\varepsilon)$ the set of all
$\varepsilon$-DP $n$-tuples, and by $\mathrm{CQ}_n(\varepsilon)$ the set of all
CQ $\varepsilon$-DP $n$-tuples. By considering optimization problems under
local differential privacy, we define the subset $\mathrm{EC}_n(\varepsilon)$
of $\mathrm{CQ}_n(\varepsilon)$ that is essentially classical. Roughly
speaking, an element in $\mathrm{EC}_n(\varepsilon)$ is the image of
$(p_i)_{i=1}^n\in\mathrm{C}_n(\varepsilon)$ by a completely positive and
trace-preserving linear map (CPTP map). A preceding study showed that
$\mathrm{EC}_2(\varepsilon)=\mathrm{CQ}_2(\varepsilon)$. In this paper, we show
that $\mathrm{EC}_n(\varepsilon)\not=\mathrm{CQ}_n(\varepsilon)$ for every
$n\ge3$, and estimate the difference between $\mathrm{EC}_n(\varepsilon)$ and
$\mathrm{CQ}_n(\varepsilon)$ in a certain manner.
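As a side illustration (not from the paper), both privacy conditions defined above can be checked numerically. The sketch below assumes NumPy; the function names `is_dp` and `is_cq_dp` and the randomized-response example are illustrative, and the diagonal embedding at the end is simply the most direct way a classical DP tuple gives rise to a CQ DP tuple.

```python
# Minimal sketch (illustrative, not from the paper) of the two conditions:
#   classical: e^eps * p_j - p_i has no negative entries for all i, j
#   quantum:   e^eps * rho_j - rho_i is positive semi-definite for all i, j
import numpy as np

def is_dp(ps, eps, tol=1e-12):
    """Check whether a tuple of probability vectors (p_i) is eps-DP."""
    ps = [np.asarray(p, dtype=float) for p in ps]
    return all(np.all(np.exp(eps) * pj - pi >= -tol)
               for pi in ps for pj in ps)

def is_cq_dp(rhos, eps, tol=1e-12):
    """Check whether a tuple of density matrices (rho_i) is CQ eps-DP."""
    rhos = [np.asarray(r, dtype=complex) for r in rhos]
    return all(np.all(np.linalg.eigvalsh(np.exp(eps) * rj - ri) >= -tol)
               for ri in rhos for rj in rhos)

# Example: classical randomized response at eps = ln 2.
eps = np.log(2.0)
p1 = np.array([2/3, 1/3])
p2 = np.array([1/3, 2/3])
print(is_dp((p1, p2), eps))                       # True

# Diagonal embedding: diag(p) is PSD exactly when p has no negative entries,
# so the classical tuple carries over to a CQ tuple verbatim.
print(is_cq_dp((np.diag(p1), np.diag(p2)), eps))  # True
```

On diagonal matrices the two conditions coincide, which is why the diagonal embedding of a classical $\varepsilon$-DP tuple is always CQ $\varepsilon$-DP.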
Related papers
- Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits [55.957560311008926]
We propose a piecewise stationary linear bandit (PSLB) model where the quality of an arm is measured by its return averaged over all contexts.
PS$\varepsilon$BAI$^+$ is guaranteed to identify an $\varepsilon$-optimal arm with probability $\ge 1-\delta$ and with a minimal number of samples.
arXiv Detail & Related papers (2024-10-10T06:15:42Z)
- Coresets for Multiple $\ell_p$ Regression [47.790126028106734]
We construct coresets of size $\tilde O(\varepsilon^{-2}d)$ for $p<2$ and $\tilde O(\varepsilon^{-p}d^{p/2})$ for $p>2$.
For $1<p<2$, every matrix has a subset of $\tilde O(\varepsilon^{-1}k)$ rows which spans a $(\varepsilon^{-1}k)$-approximately optimal $k$-dimensional subspace for $\ell_p$ subspace approximation.
arXiv Detail & Related papers (2024-06-04T15:50:42Z)
- $\ell_p$-Regression in the Arbitrary Partition Model of Communication [59.89387020011663]
We consider the randomized communication complexity of the distributed $\ell_p$-regression problem in the coordinator model.
For $p = 2$, i.e., least squares regression, we give the first optimal bound of $\tilde\Theta(sd^2 + sd/\epsilon)$ bits.
For $p \in (1,2)$, we obtain an $\tilde O(sd^2/\epsilon + sd/\mathrm{poly}(\epsilon))$ upper bound.
arXiv Detail & Related papers (2023-07-11T08:51:53Z)
- For Kernel Range Spaces a Constant Number of Queries Are Sufficient [13.200502573462712]
A kernel range space concerns a set of points $X \subset \mathbb{R}^d$ and the space of all queries by a fixed kernel.
An $\varepsilon$-cover is a subset of points $Q \subset \mathbb{R}^d$ such that for any $p \in \mathbb{R}^d$, $\frac{1}{n}\|R_p - R_q\| \leq \varepsilon$ for some $q \in Q$.
arXiv Detail & Related papers (2023-06-28T19:19:33Z)
- Improved Coresets for Euclidean $k$-Means [24.850829728643923]
Given a set of $n$ points in $d$ dimensions, the Euclidean $k$-means problem (resp. the Euclidean $k$-median problem) consists of finding $k$ centers.
In this paper, we improve the upper bounds $\tilde O(\min(k^2 \cdot \varepsilon^{-2}, k \cdot \varepsilon^{-4}))$ for $k$-means and $\tilde O(\min(k^{4/3} \cdot \varepsilon$…
arXiv Detail & Related papers (2022-11-15T14:47:24Z)
- Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde O(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z)
- Learning low-degree functions from a logarithmic number of random queries [77.34726150561087]
We prove that for any integer $n\in\mathbb{N}$, $d\in\{1,\ldots,n\}$ and any $\varepsilon,\delta\in(0,1)$, a bounded function $f:\{-1,1\}^n\to[-1,1]$ of degree at most $d$ can be learned.
arXiv Detail & Related papers (2021-09-21T13:19:04Z)
- Self-training Converts Weak Learners to Strong Learners in Mixture Models [86.7137362125503]
We show that a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ can achieve classification error at most $C_{\mathrm{err}}$.
We additionally show that by running gradient descent on the logistic loss one can obtain a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$ using only $O(d)$ labeled examples.
arXiv Detail & Related papers (2021-06-25T17:59:16Z)
- Sharper bounds for online learning of smooth functions of a single variable [0.0]
We show that $\mathrm{opt}_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$.
arXiv Detail & Related papers (2021-05-30T23:06:21Z)
- A closed form scale bound for the $(\epsilon, \delta)$-differentially private Gaussian Mechanism valid for all privacy regimes [0.0]
We present a similar closed form bound $\sigma \geq \Delta (\epsilon\sqrt{2})^{-1} \left(\sqrt{az+\epsilon} + s\sqrt{az}\right)$ for $\epsilon \in (0,1)$.
arXiv Detail & Related papers (2020-12-18T21:35:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.