Private High-Dimensional Hypothesis Testing
- URL: http://arxiv.org/abs/2203.01537v1
- Date: Thu, 3 Mar 2022 06:25:48 GMT
- Title: Private High-Dimensional Hypothesis Testing
- Authors: Shyam Narayanan
- Abstract summary: We provide improved differentially private algorithms for identity testing of high-dimensional distributions.
Specifically, we can test whether the distribution comes from $\mathcal{N}(\mu^*, \Sigma)$ for some fixed $\mu^*$ or from some $\mathcal{N}(\mu, \Sigma)$ with total variation distance at least $\alpha$ from $\mathcal{N}(\mu^*, \Sigma)$.
- Score: 4.133655523622441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide improved differentially private algorithms for identity testing of
high-dimensional distributions. Specifically, for $d$-dimensional Gaussian
distributions with known covariance $\Sigma$, we can test whether the
distribution comes from $\mathcal{N}(\mu^*, \Sigma)$ for some fixed $\mu^*$ or
from some $\mathcal{N}(\mu, \Sigma)$ with total variation distance at least
$\alpha$ from $\mathcal{N}(\mu^*, \Sigma)$ with $(\varepsilon, 0)$-differential
privacy, using only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} +
\frac{d^{1/3}}{\alpha^{4/3} \cdot \varepsilon^{2/3}} + \frac{1}{\alpha \cdot
\varepsilon}\right)\] samples if the algorithm is allowed to be computationally
inefficient, and only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} +
\frac{d^{1/4}}{\alpha \cdot \varepsilon}\right)\] samples for a computationally
efficient algorithm. We also provide a matching lower bound showing that our
computationally inefficient algorithm has optimal sample complexity. We
further extend our algorithms to various related problems, including mean
testing of Gaussians with bounded but unknown covariance, uniformity testing
of product distributions over $\{\pm 1\}^d$, and tolerant testing.
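For concreteness, the following sketch (an illustration of ours, not code from the paper; it evaluates the displayed bounds with the constants and polylogarithmic factors hidden by $\tilde{O}$ dropped, at parameter values of our choosing) compares the two private bounds against the optimal non-private baseline of $O(\sqrt{d}/\alpha^2)$ samples:

```python
import math

def inefficient_dp_bound(d, alpha, eps):
    # (eps, 0)-DP, computationally inefficient tester: the three terms of
    # the tilde-O bound (constants and polylog factors dropped).
    return (math.sqrt(d) / alpha**2
            + d**(1/3) / (alpha**(4/3) * eps**(2/3))
            + 1 / (alpha * eps))

def efficient_dp_bound(d, alpha, eps):
    # Computationally efficient tester: the two terms of its tilde-O bound.
    return math.sqrt(d) / alpha**2 + d**0.25 / (alpha * eps)

def non_private_bound(d, alpha):
    # Optimal non-private sample complexity, O(sqrt(d) / alpha^2).
    return math.sqrt(d) / alpha**2

# Parameter settings chosen for illustration only.
for d, alpha, eps in [(10_000, 0.1, 1.0), (100, 0.5, 0.001)]:
    print(f"d={d}, alpha={alpha}, eps={eps}: "
          f"non-private ~ {non_private_bound(d, alpha):.0f}, "
          f"efficient DP ~ {efficient_dp_bound(d, alpha, eps):.0f}, "
          f"inefficient DP ~ {inefficient_dp_bound(d, alpha, eps):.0f}")
```

In the first setting the $\sqrt{d}/\alpha^2$ term dominates both private bounds, so privacy is essentially free; in the second, high-privacy setting the bounds separate, with the inefficient tester needing fewer samples.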
Our results improve over the previous best work of Canonne, Kamath, McMillan,
Ullman, and Zakynthinou \cite{CanonneKMUZ20} for both computationally efficient
and inefficient algorithms, and even our computationally efficient algorithm
matches the optimal \emph{non-private} sample complexity of
$O\left(\frac{\sqrt{d}}{\alpha^2}\right)$ in many standard parameter settings.
In addition, our results show that, surprisingly, private identity testing of
$d$-dimensional Gaussians can be done with fewer samples than private identity
testing of discrete distributions over a domain of size $d$ \cite{AcharyaSZ18},
which refutes a conjectured lower bound of Canonne et al. \cite{CanonneKMUZ20}.
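For background on why $O(\sqrt{d}/\alpha^2)$ is the non-private benchmark, here is a minimal sketch of a standard non-private baseline tester (our own illustration; it is not the paper's private algorithm): with known covariance, identity testing reduces to thresholding the squared Mahalanobis norm of the empirical mean shift, whose null fluctuations have order $\sqrt{d}/n$ while an alternative at total variation distance $\alpha$ raises it by roughly $\alpha^2$.

```python
import numpy as np

def gaussian_identity_test(samples, mu_star, sigma, alpha):
    """Accept H0: data ~ N(mu_star, sigma); reject if the mean is alpha-far.

    Under H0 the statistic below concentrates around d/n with fluctuations
    of order sqrt(d)/n; a mean shift of Mahalanobis norm ~alpha adds about
    alpha^2, so n ~ sqrt(d)/alpha^2 samples suffice to separate the cases.
    """
    n, d = samples.shape
    shift = samples.mean(axis=0) - mu_star
    stat = shift @ np.linalg.solve(sigma, shift)  # ||mean shift||_Sigma^2
    return bool(stat <= d / n + alpha**2 / 2)     # True = accept H0

# Toy check against the standard Gaussian in d = 50 dimensions.
rng = np.random.default_rng(0)
d, alpha = 50, 0.5
n = int(8 * np.sqrt(d) / alpha**2)               # ~226 samples
null = rng.standard_normal((n, d))               # mean 0: should accept
far = null + 2 * alpha / np.sqrt(d)              # mean norm 2*alpha: reject
print(gaussian_identity_test(null, np.zeros(d), np.eye(d), alpha))  # True
print(gaussian_identity_test(far, np.zeros(d), np.eye(d), alpha))   # False
```

Roughly speaking, the $\varepsilon$-dependent terms in the private bounds above are the price of randomizing such a decision (e.g., by adding noise to a suitably truncated statistic) so that it satisfies differential privacy.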
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z)
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z)
- Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions [8.40077201352607]
We present a fast, differentially private algorithm for high-dimensional covariance-aware mean estimation.
Our algorithm produces $\tilde{\mu}$ such that $\|\mu - \tilde{\mu}\|_\Sigma \leq \alpha$ as long as $n \gtrsim \tfrac{d}{\alpha^2} + \tfrac{d\sqrt{\log 1/\delta}}{\alpha\varepsilon} + \tfrac{d\log 1/\delta}{\varepsilon}$.
arXiv Detail & Related papers (2023-01-28T16:57:46Z)
- Gaussian Mean Testing Made Simple [46.03021473600576]
Given i.i.d. samples from a distribution $p$ on $\mathbb{R}^d$, the task is to distinguish, with high probability, between the case that $p$ is the standard Gaussian $\mathcal{N}(0, I_d)$ and the case that $p$ is a Gaussian whose mean is far from the origin.
We give an extremely simple algorithm for Gaussian mean testing with a one-page analysis.
arXiv Detail & Related papers (2022-10-25T01:59:13Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations $\sigma$.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) \leq C \cdot \mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
- Robust Sparse Mean Estimation via Sum of Squares [42.526664955704746]
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers.
Our algorithms follow the Sum-of-Squares-based proofs-to-algorithms approach.
arXiv Detail & Related papers (2022-06-07T16:49:54Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we give an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
- Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to that of the best distribution in $\mathcal{Q}$.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms yield exponential improvements in round complexity over previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)