Optimal Testing of Discrete Distributions with High Probability
- URL: http://arxiv.org/abs/2009.06540v1
- Date: Mon, 14 Sep 2020 16:09:17 GMT
- Title: Optimal Testing of Discrete Distributions with High Probability
- Authors: Ilias Diakonikolas and Themis Gouleakis and Daniel M. Kane and John
Peebles and Eric Price
- Abstract summary: We study the problem of testing discrete distributions with a focus on the high probability regime.
We provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors.
- Score: 49.19942805582874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of testing discrete distributions with a focus on the
high probability regime. Specifically, given samples from one or more discrete
distributions, a property $\mathcal{P}$, and parameters $0< \epsilon, \delta
<1$, we want to distinguish {\em with probability at least $1-\delta$} whether
these distributions satisfy $\mathcal{P}$ or are $\epsilon$-far from
$\mathcal{P}$ in total variation distance. Most prior work in distribution
testing studied the constant confidence case (corresponding to $\delta =
\Omega(1)$), and provided sample-optimal testers for a range of properties.
While one can always boost the confidence probability of any such tester by
black-box amplification, this generic boosting method typically leads to
sub-optimal sample bounds.
Here we study the following broad question: For a given property
$\mathcal{P}$, can we {\em characterize} the sample complexity of testing
$\mathcal{P}$ as a function of all relevant problem parameters, including the
error probability $\delta$? Prior to this work, uniformity testing was the only
statistical task whose sample complexity had been characterized in this
setting. As our main results, we provide the first algorithms for closeness and
independence testing that are sample-optimal, within constant factors, as a
function of all relevant parameters. We also show matching
information-theoretic lower bounds on the sample complexity of these problems.
Our techniques naturally extend to give optimal testers for related problems.
To illustrate the generality of our methods, we give optimal algorithms for
testing collections of distributions and testing closeness with unequal sized
samples.
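To make the contrast concrete, the sketch below shows the generic black-box amplification route mentioned in the abstract: run a constant-confidence tester on fresh samples $O(\log(1/\delta))$ times and take a majority vote. The base tester, its sample budget, and the mock example are hypothetical placeholders; the point is only that this route inflates the sample cost by a full $\log(1/\delta)$ factor, which the sample-optimal testers in the paper avoid.

```python
import math
import random

def amplify(base_tester, draw_samples, n_base, delta):
    """Black-box confidence amplification (a generic sketch, not this paper's testers).

    base_tester: returns True ("satisfies P") or False ("epsilon-far"),
                 correct with probability >= 2/3 on n_base fresh samples.
    draw_samples(k): returns k fresh i.i.d. samples from the unknown distribution.
    delta: target failure probability.
    """
    # Majority vote over r independent runs pushes the failure probability below
    # delta (Hoeffding bound), but costs r * n_base samples in total -- the
    # log(1/delta) blow-up that direct high-probability testers avoid.
    r = max(1, math.ceil(18 * math.log(1 / delta)))
    votes = sum(base_tester(draw_samples(n_base)) for _ in range(r))
    return votes > r / 2

# Hypothetical stand-in for a constant-confidence tester: correct with
# probability exactly 2/3 when the true answer is "satisfies P".
def mock_base_tester(_samples):
    return random.random() < 2 / 3

draw = lambda k: [None] * k  # placeholder sample source
print(amplify(mock_base_tester, draw, n_base=200, delta=1e-6))  # True w.p. >= 1 - 1e-6
```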
Related papers
- Stochastic Approximation Approaches to Group Distributionally Robust
Optimization [96.26317627118912]
Group distributionally robust optimization (GDRO) seeks a model that performs well over $m$ distinct distributions.
Online learning techniques reduce the number of samples required in each round from $m$ to $1$, keeping the same sample complexity.
A novel formulation of weighted GDRO allows distribution-dependent convergence rates to be derived.
arXiv Detail & Related papers (2023-02-18T09:24:15Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Near-Optimal Bounds for Testing Histogram Distributions [35.18069719489173]
We show that the histogram testing problem has sample complexity $\widetilde{\Theta}(\sqrt{nk}/\varepsilon + k/\varepsilon^2 + \sqrt{n}/\varepsilon^2)$.
arXiv Detail & Related papers (2022-07-14T01:24:01Z)
- Sharp Constants in Uniformity Testing via the Huber Statistic [16.384142529375435]
Uniformity testing is one of the most well-studied problems in property testing.
It is known that the optimal sample complexity to distinguish the uniform distribution on $m$ elements from any $\epsilon$-far distribution with probability $1-\delta$ is $n = \Theta\left(\frac{\sqrt{m \log(1/\delta)}}{\epsilon^2} + \frac{\log(1/\delta)}{\epsilon^2}\right)$.
We show that the collisions tester achieves a sharp maximal constant in the number of standard deviations of separation between uniform and non-uniform inputs.
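For intuition, here is a minimal sketch of the classical collision-based uniformity tester refined in the paper above; the threshold, sample size, and toy domain below are illustrative assumptions, not the sharp constants derived there.

```python
import random
from itertools import combinations

def collision_uniformity_test(samples, m, eps):
    """Collision-tester sketch: accept "uniform" when the number of colliding
    pairs is close to its expectation under the uniform distribution on m elements.

    Under uniform, each pair collides with probability 1/m; if the distribution
    is eps-far from uniform in total variation, the collision probability is at
    least (1 + 4*eps**2)/m.  The midpoint threshold below is illustrative, not
    the sharp constant analyzed in the paper.
    """
    n = len(samples)
    collisions = sum(1 for x, y in combinations(samples, 2) if x == y)
    threshold = (1 + 2 * eps ** 2) * (n * (n - 1) / 2) / m
    return collisions <= threshold  # True = "looks uniform"

# Toy usage: uniform source vs. a source concentrated on half the domain.
m, eps, n = 500, 0.3, 400
uniform_draw = [random.randrange(m) for _ in range(n)]
skewed_draw = [random.randrange(m // 2) for _ in range(n)]
print(collision_uniformity_test(uniform_draw, m, eps))  # typically True
print(collision_uniformity_test(skewed_draw, m, eps))   # typically False
```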
arXiv Detail & Related papers (2022-06-21T20:43:53Z)
- Exact Paired-Permutation Testing for Structured Test Statistics [67.71280539312536]
We provide an efficient exact algorithm for the paired-permutation test for a family of structured test statistics.
Our exact algorithm was $10$x faster than the Monte Carlo approximation with $20000$ samples on a common dataset.
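For context, the sketch below is the standard Monte Carlo paired-permutation baseline that such exact algorithms are compared against; the mean-difference statistic, the toy scores, and the $20000$ resamples are illustrative assumptions, and the paper's exact algorithm for structured statistics is not reproduced here.

```python
import random

def mc_paired_permutation_test(scores_a, scores_b, num_resamples=20000, seed=0):
    """Monte Carlo paired-permutation test (the approximation baseline, not the
    exact algorithm from the paper).

    Null hypothesis: within each paired item the labels "system A" / "system B"
    are exchangeable.  The p-value of the observed mean score difference is
    estimated by randomly swapping each pair and recomputing the statistic.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(a - b for a, b in zip(scores_a, scores_b)) / n)
    hits = 0
    for _ in range(num_resamples):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:  # swap the pair with probability 1/2 under the null
                a, b = b, a
            diff += a - b
        if abs(diff / n) >= observed:
            hits += 1
    return hits / num_resamples  # estimated two-sided p-value

# Toy usage: system A slightly better than system B on 50 shared test items.
scores_a = [0.80 + 0.02 * random.random() for _ in range(50)]
scores_b = [0.78 + 0.02 * random.random() for _ in range(50)]
print(mc_paired_permutation_test(scores_a, scores_b))  # small p-value expected
```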
arXiv Detail & Related papers (2022-05-03T11:00:59Z)
- The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and $\gamma$
arXiv Detail & Related papers (2020-12-31T18:24:41Z)
- Downsampling for Testing and Learning in Product Distributions [24.48103093661132]
We study distribution-free property testing and learning problems where the unknown probability distribution is a product distribution over $\mathbb{R}^d$.
For many important classes of functions, such as intersections of halfspaces, threshold functions, convex sets, and $k$-alternating functions, known algorithms have complexity that depends on the support size of the distribution.
We introduce a general method, which we call downsampling, that resolves these issues.
arXiv Detail & Related papers (2020-07-15T02:46:44Z)
- Locally Private Hypothesis Selection [96.06118559817057]
Given samples from an unknown distribution $p$ and a set of distributions $\mathcal{Q}$, we output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to that of the best distribution in $\mathcal{Q}$.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms yield exponential improvements in the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)