Model-Free Sequential Testing for Conditional Independence via Testing by Betting
- URL: http://arxiv.org/abs/2210.00354v1
- Date: Sat, 1 Oct 2022 20:05:33 GMT
- Title: Model-Free Sequential Testing for Conditional Independence via Testing by Betting
- Authors: Shalev Shaer, Gal Maman, Yaniv Romano
- Abstract summary: The proposed test allows researchers to analyze an incoming i.i.d. data stream with an arbitrary dependency structure among its features.
Data points can be processed online as soon as they arrive, and data acquisition can stop once significant results are detected.
- Score: 8.293345261434943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops a model-free sequential test for conditional
independence. The proposed test allows researchers to analyze an incoming
i.i.d. data stream with an arbitrary dependency structure among its features,
and safely conclude whether a feature is conditionally associated with the
response under study. Data points can be processed online as soon as they
arrive, and data acquisition can stop once significant results are detected,
while rigorously controlling the type-I error rate. Our test can work with any
sophisticated machine learning algorithm to enhance data efficiency to the
extent possible. The
developed method is inspired by two statistical frameworks. The first is the
model-X conditional randomization test, a test for conditional independence
that is valid in offline settings where the sample size is fixed in advance.
The second is testing by betting, a "game-theoretic" approach for sequential
hypothesis testing. We conduct synthetic experiments to demonstrate the
advantage of our test over out-of-the-box sequential tests that account for the
multiplicity of tests in the time horizon, and demonstrate the practicality of
our proposal by applying it to real-world tasks.
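To make the recipe concrete, below is a minimal Python sketch of a testing-by-betting conditional independence test; the `sample_dummy` conditional sampler, the `score` function, and the fixed betting fraction are assumptions of this sketch, not the authors' exact construction.

```python
import numpy as np

def sequential_ci_test(stream, sample_dummy, score, alpha=0.05, bet_frac=0.1):
    """Minimal testing-by-betting sketch for H0: Y independent of X given Z.

    stream       -- iterable of (x, y, z) observations arriving online
    sample_dummy -- sample_dummy(z) draws a dummy x_tilde ~ P(X | Z = z),
                    the model-X assumption borrowed from the CRT
    score        -- score(x, y, z) -> float from any fitted ML model; higher
                    means x looks more informative about y given z

    Under H0 the real x and the dummy x_tilde are exchangeable, so each bet
    below is fair and the wealth process is a nonnegative supermartingale;
    by Ville's inequality, stopping when wealth reaches 1/alpha controls the
    type-I error rate at level alpha at any stopping time.
    """
    wealth = 1.0
    for t, (x, y, z) in enumerate(stream, start=1):
        x_tilde = sample_dummy(z)
        # Bet that the real feature outscores its dummy copy; under H0 the
        # sign below is symmetric around zero, so the bet's expected payoff
        # factor is exactly 1.
        s = np.sign(score(x, y, z) - score(x_tilde, y, z))
        wealth *= 1.0 + bet_frac * s
        if wealth >= 1.0 / alpha:
            return t, wealth   # significant: stop data acquisition early
    return None, wealth        # stream exhausted without crossing 1/alpha
```

Because the wealth can be checked after every observation, the test is valid at any data-dependent stopping time, which is what allows data acquisition to stop as soon as significance is reached.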
Related papers
- Practical Kernel Tests of Conditional Independence [34.7957227546996]
A major challenge of conditional independence testing is to obtain the correct test level while still attaining competitive test power.
We propose three methods for bias control to correct the test level, based on data splitting, auxiliary data, and (where possible) simpler function classes.
arXiv Detail & Related papers (2024-02-20T18:07:59Z)
- Precise Error Rates for Computationally Efficient Testing [75.63895690909241]
We revisit the question of simple-versus-simple hypothesis testing with an eye towards computational complexity.
An existing test based on linear spectral statistics achieves the best possible tradeoff curve between type I and type II error rates.
arXiv Detail & Related papers (2023-11-01T04:41:16Z)
- Deep anytime-valid hypothesis testing [29.273915933729057]
We propose a general framework for constructing powerful, sequential hypothesis tests for nonparametric testing problems.
We develop a principled approach to leveraging the representation capability of machine learning models within the testing-by-betting framework.
Empirical results on synthetic and real-world datasets demonstrate that tests instantiated using our general framework are competitive against specialized baselines.
arXiv Detail & Related papers (2023-10-30T09:46:19Z)
- Sequential Predictive Two-Sample and Independence Testing [114.4130718687858]
We study the problems of sequential nonparametric two-sample and independence testing.
We build upon the principle of (nonparametric) testing by betting; a minimal classifier-based sketch of this idea follows.
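As a rough illustration of the predictive-betting idea (not the paper's exact protocol), an online classifier can bet on which of the two streams each point came from; the `stream` interface and the Bernoulli(1/2) labeling below are assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def sequential_two_sample_test(stream, alpha=0.05):
    """Sketch of sequential two-sample testing by betting with a predictor.

    stream yields (x, label), where label ~ Bernoulli(1/2) marks which of
    the two distributions x was drawn from. Under H0 (the distributions are
    equal), x carries no information about label, so each round's payoff has
    conditional expectation 1 and the wealth is a supermartingale.
    """
    clf = SGDClassifier(loss="log_loss")
    wealth, fitted = 1.0, False
    for t, (x, label) in enumerate(stream, start=1):
        x = np.asarray(x, dtype=float).reshape(1, -1)
        if fitted:
            p1 = clf.predict_proba(x)[0, 1]          # predicted P(label=1 | x)
            payoff = 2.0 * p1 if label == 1 else 2.0 * (1.0 - p1)
            wealth *= 0.5 + 0.5 * payoff             # shrunk bet, stays positive
            if wealth >= 1.0 / alpha:
                return t, wealth                     # reject H0: streams differ
        clf.partial_fit(x, [label], classes=[0, 1])  # learn only after betting
        fitted = True
    return None, wealth
```

Betting strictly before the classifier sees the current point keeps the wealth a valid supermartingale under the null.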
arXiv Detail & Related papers (2023-04-29T01:30:33Z)
- Active Sequential Two-Sample Testing [18.99517340397671]
We consider the two-sample testing problem in a new scenario where sample measurements are inexpensive to access.
We devise the first active two-sample testing framework that not only sequentially but also actively queries sample measurements.
In practice, we introduce an instantiation of our framework and evaluate it using several experiments.
arXiv Detail & Related papers (2023-01-30T02:23:49Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN), which corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS), which simulates an i.i.d. data stream from a non-i.i.d. stream in a class-balanced manner; a sketch of the reservoir idea follows.
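A minimal sketch of class-balanced reservoir sampling, assuming samples are bucketed by predicted label (true labels are unavailable at test time); the class and method names are hypothetical, not the paper's API.

```python
import random
from collections import defaultdict

class BalancedReservoir:
    """Keeps up to `capacity` samples, split evenly across predicted classes.

    Reservoir sampling within each predicted-class bucket makes the stored
    subset approximate a class-balanced i.i.d. draw from a non-i.i.d. stream.
    """
    def __init__(self, capacity, num_classes):
        self.per_class = capacity // num_classes
        self.buffer = defaultdict(list)   # predicted class -> stored samples
        self.seen = defaultdict(int)      # predicted class -> count observed

    def add(self, sample, pred_class):
        self.seen[pred_class] += 1
        bucket = self.buffer[pred_class]
        if len(bucket) < self.per_class:
            bucket.append(sample)
        else:
            # classic reservoir step: each of the n samples seen so far is
            # retained with equal probability per_class / n
            j = random.randrange(self.seen[pred_class])
            if j < self.per_class:
                bucket[j] = sample
```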
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- Learning to Increase the Power of Conditional Randomization Tests [8.883733362171032]
The model-X conditional randomization test is a generic framework for conditional independence testing.
We introduce novel model-fitting schemes that are designed to explicitly improve the power of model-X tests; a minimal sketch of the generic CRT they plug into follows.
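For context, here is a minimal sketch of the generic offline model-X CRT skeleton; the helper names are assumptions, and the entry's actual contribution (the model-fitting schemes) is not shown.

```python
import numpy as np

def conditional_randomization_test(X, Y, Z, sample_dummy, statistic, K=200,
                                   rng=None):
    """Minimal offline model-X CRT for H0: Y independent of X given Z.

    sample_dummy(Z, rng) -- draws a dummy copy of X row-wise from P(X | Z),
                            the model-X assumption
    statistic(X, Y, Z)   -- any association measure, e.g. a fitted model's
                            importance for X when predicting Y
    Under H0 the real X is exchangeable with the K dummy copies, so the
    rank-based p-value below is exactly valid at any fixed sample size.
    """
    rng = rng or np.random.default_rng()
    t_obs = statistic(X, Y, Z)
    t_null = np.array([statistic(sample_dummy(Z, rng), Y, Z)
                       for _ in range(K)])
    return (1 + np.sum(t_null >= t_obs)) / (K + 1)
```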
arXiv Detail & Related papers (2022-07-03T12:29:25Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is, to the best of our knowledge, the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
- Nonparametric Conditional Local Independence Testing [69.31200003384122]
Conditional local independence is an independence relation among continuous-time processes.
No nonparametric test of conditional local independence has been available.
We propose such a nonparametric test based on double machine learning; a simplified cross-fitting sketch follows this entry.
arXiv Detail & Related papers (2022-03-25T10:31:02Z)
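A simplified sketch of the double-machine-learning idea, transplanted from the paper's continuous-time setting to ordinary conditional independence; it is essentially a generalized-covariance-measure style test, and every name below is illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def double_ml_ci_test(X, Y, Z, n_splits=5):
    """Cross-fitted residual test of H0: Y independent of X given Z.

    X, Y are length-n vectors; Z is an (n, d) array of conditioning
    covariates. Both X and Y are regressed on Z with cross-fitting, and the
    studentized mean of the residual products is asymptotically N(0, 1)
    under H0 (zero residual correlation is necessary, not sufficient, for
    conditional independence).
    """
    X, Y, Z = np.asarray(X), np.asarray(Y), np.asarray(Z)
    rx, ry = np.empty(len(X)), np.empty(len(Y))
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(Z):
        rx[test] = X[test] - RandomForestRegressor().fit(Z[train], X[train]).predict(Z[test])
        ry[test] = Y[test] - RandomForestRegressor().fit(Z[train], Y[train]).predict(Z[test])
    prod = rx * ry
    stat = np.sqrt(len(prod)) * prod.mean() / prod.std(ddof=1)
    return 2 * norm.sf(abs(stat))   # two-sided p-value
```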
This list is automatically generated from the titles and abstracts of the papers on this site.