As Easy as ABC: Adaptive Binning Coincidence Test for Uniformity Testing
- URL: http://arxiv.org/abs/2110.06325v1
- Date: Tue, 12 Oct 2021 20:19:57 GMT
- Title: As Easy as ABC: Adaptive Binning Coincidence Test for Uniformity Testing
- Authors: Sudeep Salgia, Qing Zhao, Lang Tong
- Abstract summary: We propose a sequential test that adapts to the unknown distribution under the alternative hypothesis.
We establish the sample complexity of the proposed tests as well as a lower bound.
- Score: 13.028716493611789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of uniformity testing of Lipschitz continuous
distributions with bounded support. The alternative hypothesis is a composite
set of Lipschitz continuous distributions that are at least $\varepsilon$ away
in $\ell_1$ distance from the uniform distribution. We propose a sequential
test that adapts to the unknown distribution under the alternative hypothesis.
Referred to as the Adaptive Binning Coincidence (ABC) test, the proposed
strategy adapts in two ways. First, it partitions the set of alternative
distributions into layers based on their distances to the uniform distribution.
It then sequentially eliminates the alternative distributions layer by layer in
decreasing distance to the uniform, and subsequently takes advantage of
favorable situations of a distant alternative by exiting early. Second, it
adapts, across layers of the alternative distributions, the resolution level of
the discretization for computing the coincidence statistic. The farther the
layer is from the uniform, the coarser the discretization needed to eliminate
or exit from that layer. The test thus exits both early in the detection
process and quickly, using a lower resolution to take advantage of favorable
alternative distributions. The ABC test builds on a novel sequential
coincidence test for discrete distributions, which is of independent interest.
We establish the sample complexity of the proposed tests as well as a lower
bound.
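The coincidence (collision) statistic at the heart of the test can be illustrated with a short sketch: bin samples from [0, 1) into k equal-width cells and count colliding pairs. Under the uniform distribution collisions are rare; a distribution far from uniform concentrates mass in few cells and inflates the count. The function names and the fixed slack threshold below are illustrative assumptions, not the paper's actual test or thresholds.

```python
import numpy as np

def collision_statistic(samples, k):
    """Count pairwise coincidences after binning [0, 1) samples into k equal cells."""
    cells = np.clip(np.floor(np.asarray(samples) * k).astype(int), 0, k - 1)
    counts = np.bincount(cells, minlength=k)
    # Number of colliding pairs: sum over cells of C(n_i, 2).
    return int((counts * (counts - 1) // 2).sum())

def exceeds_uniform_threshold(samples, k, slack=2.0):
    """Flag non-uniformity when collisions exceed the uniform-case expectation
    n(n-1)/(2k) by an illustrative slack factor (not the paper's threshold)."""
    n = len(samples)
    expected = n * (n - 1) / (2 * k)
    return collision_statistic(samples, k) > slack * expected
```

A coarser discretization (smaller k) makes distant alternatives easy to flag with few samples, which is the intuition behind adapting the resolution across layers.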
Related papers
- A Connection Between Learning to Reject and Bhattacharyya Divergences [57.942664964198286]
We consider learning a joint ideal distribution over both inputs and labels.
We develop a link between rejection and thresholding different statistical divergences.
In general, we find that rejecting via a Bhattacharyya divergence is less aggressive than Chow's Rule.
arXiv Detail & Related papers (2025-05-08T14:18:42Z)
- Towards Self-Supervised Covariance Estimation in Deep Heteroscedastic Regression [102.24287051757469]
We study self-supervised covariance estimation in deep heteroscedastic regression.
We derive an upper bound on the 2-Wasserstein distance between normal distributions.
Experiments over a wide range of synthetic and real datasets demonstrate that the proposed 2-Wasserstein bound coupled with pseudo label annotations results in a computationally cheaper yet accurate deep heteroscedastic regression.
arXiv Detail & Related papers (2025-02-14T22:37:11Z)
- Exponentially Consistent Statistical Classification of Continuous Sequences with Distribution Uncertainty [9.017367466798312]
We study multiple classification for continuous sequences with distribution uncertainty.
We propose distribution free tests and prove that the error probabilities of our tests decay exponentially fast for three different test designs.
arXiv Detail & Related papers (2024-10-29T07:06:40Z)
- Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization [0.4732176352681218]
This paper addresses the challenge of gradual domain adaptation within a class of manifold-constrained data distributions.
We propose a methodology rooted in Distributionally Robust Optimization (DRO) with an adaptive Wasserstein radius.
Our bounds rely on a newly introduced compatibility measure, which fully characterizes the error propagation dynamics along the sequence.
arXiv Detail & Related papers (2024-10-17T22:07:25Z)
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- An Improved Algorithm for Learning Drifting Discrete Distributions [2.2191203337341525]
We present a new adaptive algorithm for learning discrete distributions under distribution drift.
We observe a sequence of independent samples from a discrete distribution that is changing over time, and the goal is to estimate the current distribution.
To use more samples, we must resort to samples further in the past, and we incur a drift error due to the bias introduced by the change in distribution.
We present a novel adaptive algorithm that can solve this trade-off without any prior knowledge of the drift.
arXiv Detail & Related papers (2024-03-08T16:54:27Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Robust Learning of Optimal Auctions [84.13356290199603]
We study the problem of learning revenue-optimal multi-bidder auctions from samples when the samples of bidders' valuations can be adversarially corrupted or drawn from distributions that are adversarially perturbed.
We propose new algorithms that can learn a mechanism whose revenue is nearly optimal simultaneously for all "true distributions" that are $\alpha$-close to the original distribution in Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2021-07-13T17:37:21Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
- A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.