Sequential Kernelized Stein Discrepancy
- URL: http://arxiv.org/abs/2409.17505v1
- Date: Thu, 26 Sep 2024 03:24:59 GMT
- Title: Sequential Kernelized Stein Discrepancy
- Authors: Diego Martinez-Taboada, Aaditya Ramdas
- Abstract summary: We exploit the potential boundedness of the Stein kernel at arbitrary point evaluations to define test martingales.
We prove the validity of the test, as well as a lower bound for the logarithmic growth of the wealth process under the alternative.
- Score: 34.773470589069476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a sequential version of the kernelized Stein discrepancy, which
allows for conducting goodness-of-fit tests for unnormalized densities that are
continuously monitored and adaptively stopped. That is, the sample size need
not be fixed prior to data collection; the practitioner can choose whether to
stop the test or continue to gather evidence at any time while controlling the
false alarm rate. In stark contrast to related literature, we do not impose
uniform boundedness on the Stein kernel. Instead, we exploit the potential
boundedness of the Stein kernel at arbitrary point evaluations to define test
martingales, which give way to the subsequent novel sequential tests. We prove
the validity of the test, as well as an asymptotic lower bound for the
logarithmic growth of the wealth process under the alternative. We further
illustrate the empirical performance of the test with a variety of
distributions, including restricted Boltzmann machines.
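The betting construction sketched in the abstract can be made concrete in a few lines. The example below is a simplified, hypothetical 1-D illustration under stated assumptions, not the paper's actual algorithm: it builds the Langevin Stein kernel from an RBF base kernel, uses the fact that E_p[k_p(X, y)] = 0 for any fixed y (Stein's identity), normalizes the payoff by a numerically estimated sup-norm bound, and averages two wealth processes betting in opposite directions, which is still a test martingale under the null.

```python
import numpy as np

def stein_kernel_1d(x, y, score, h=1.0):
    """Langevin Stein kernel k_p(x, y) from an RBF base kernel
    k(x, y) = exp(-(x - y)^2 / (2 h^2)) and the model score s = (log p)'.
    By Stein's identity, E_p[k_p(X, y)] = 0 for any fixed y."""
    k = np.exp(-(x - y) ** 2 / (2 * h ** 2))
    d = (x - y) / h ** 2
    sx, sy = score(x), score(y)
    # k_p = s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d^2k/(dx dy)
    return k * (sx * sy + sx * d - sy * d + 1.0 / h ** 2 - d ** 2)

def sequential_stein_test(xs, score, y0=0.0, lam=0.5, alpha=0.05):
    """Sequential goodness-of-fit test by betting (illustrative sketch).
    The payoff f(x) = k_p(x, y0) / B has mean zero under H0, so both
    W+ = prod(1 + lam*f) and W- = prod(1 - lam*f) are test martingales;
    so is their average, and we stop and reject once it reaches 1/alpha."""
    grid = np.linspace(y0 - 10.0, y0 + 10.0, 4001)
    B = np.abs(stein_kernel_1d(grid, y0, score)).max()  # numeric sup-norm bound
    w_plus = w_minus = 1.0
    for t, x in enumerate(xs, 1):
        f = stein_kernel_1d(x, y0, score) / B           # payoff in [-1, 1]
        w_plus *= 1.0 + lam * f
        w_minus *= 1.0 - lam * f
        if 0.5 * (w_plus + w_minus) >= 1.0 / alpha:
            return True, t                              # reject H0 at time t
    return False, len(xs)

rng = np.random.default_rng(0)
score = lambda x: -x                                    # score of p = N(0, 1)

# Stein identity sanity check: mean payoff is ~0 under the model N(0, 1)
null_draws = rng.standard_normal(20000)
null_mean = np.mean(stein_kernel_1d(null_draws, 0.0, score))

# Under the alternative N(1.5, 1) the wealth grows and the test stops early
reject, tau = sequential_stein_test(rng.normal(1.5, 1.0, 500), score)
print(null_mean, reject, tau)
```

The point evaluation y0 and the fixed betting fraction lam are ad hoc choices for illustration; the paper's tests instead use predictable (data-adaptive) betting strategies, which is what the logarithmic-growth guarantee concerns.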
Related papers
- The Polynomial Stein Discrepancy for Assessing Moment Convergence [1.0835264351334324]
We propose a novel method for measuring the discrepancy between a set of samples and a desired posterior distribution for Bayesian inference.
We show that the test has higher power than its competitors in several examples, and at a lower computational cost.
arXiv Detail & Related papers (2024-12-06T15:51:04Z)
- Sequential Predictive Two-Sample and Independence Testing [114.4130718687858]
We study the problems of sequential nonparametric two-sample and independence testing.
We build upon the principle of (nonparametric) testing by betting.
arXiv Detail & Related papers (2023-04-29T01:30:33Z)
- Near-Optimal Non-Parametric Sequential Tests and Confidence Sequences with Possibly Dependent Observations [44.71254888821376]
We provide the first type-I-error and expected-rejection-time guarantees under general non-i.i.d. data-generating processes.
We show how to apply our results to inference on parameters defined by estimating equations, such as average treatment effects.
arXiv Detail & Related papers (2022-12-29T18:37:08Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations [67.40641255908443]
We identify limitations of model-randomization-based sanity checks for the purpose of evaluating explanations.
Top-down model randomization preserves scales of forward pass activations with high probability.
arXiv Detail & Related papers (2022-11-22T18:52:38Z)
- KSD Aggregated Goodness-of-fit Test [38.45086141837479]
We introduce a strategy to construct a test, called KSDAgg, which aggregates multiple tests with different kernels.
We provide non-asymptotic guarantees on the power of KSDAgg.
We find that KSDAgg outperforms other state-of-the-art adaptive KSD-based goodness-of-fit testing procedures.
arXiv Detail & Related papers (2022-02-02T00:33:09Z)
- Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
- Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data [24.442094864838225]
We propose a collection of kernelized Stein discrepancy tests for time-to-event data.
Our experimental results show that our proposed methods perform better than existing tests.
arXiv Detail & Related papers (2020-08-19T12:27:43Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.