A Simple Unified Approach to Testing High-Dimensional Conditional
Independences for Categorical and Ordinal Data
- URL: http://arxiv.org/abs/2206.04356v1
- Date: Thu, 9 Jun 2022 08:56:12 GMT
- Title: A Simple Unified Approach to Testing High-Dimensional Conditional
Independences for Categorical and Ordinal Data
- Authors: Ankur Ankan and Johannes Textor
- Abstract summary: Conditional independence (CI) tests underlie many approaches to model testing and structure learning in causal inference.
Most existing CI tests for categorical and ordinal data stratify the sample by the conditioning variables, perform simple independence tests in each stratum, and combine the results.
Here we propose a simple unified CI test for ordinal and categorical data that maintains reasonable calibration and power in high dimensions.
- Score: 0.26651200086513094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conditional independence (CI) tests underlie many approaches to model testing
and structure learning in causal inference. Most existing CI tests for
categorical and ordinal data stratify the sample by the conditioning variables,
perform simple independence tests in each stratum, and combine the results.
Unfortunately, the statistical power of this approach degrades rapidly as the
number of conditioning variables increases. Here we propose a simple unified CI
test for ordinal and categorical data that maintains reasonable calibration and
power in high dimensions. We show that our test outperforms existing baselines
in model testing and structure learning for dense directed graphical models
while being comparable for sparse models. Our approach could be attractive for
causal model testing because it is easy to implement, can be used with
non-parametric or parametric probability models, has the symmetry property, and
has reasonable computational requirements.
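The stratify-and-combine strategy described in the abstract can be made concrete with a short sketch. The following is a minimal illustration of that existing baseline approach (not the test proposed in this paper): it stratifies the sample by the conditioning variables, runs a chi-square test of x versus y in every usable stratum, and combines the per-stratum p-values. The function name, the minimum stratum size, and the choice of Fisher's method for combining p-values are illustrative assumptions.

```python
# Minimal sketch of the stratify-and-combine baseline described in the
# abstract (NOT the test proposed in the paper). The combination rule
# (Fisher's method) and the stratum-size cutoff are illustrative choices.
import numpy as np
import pandas as pd
from scipy import stats

def stratified_ci_test(df, x, y, z_cols, min_stratum=5):
    """Test x _||_ y | z_cols by running a chi-square test of x vs. y
    within every stratum defined by the conditioning columns z_cols,
    then combining the per-stratum p-values with Fisher's method."""
    pvalues = []
    for _, stratum in (df.groupby(z_cols) if z_cols else [(None, df)]):
        table = pd.crosstab(stratum[x], stratum[y])
        # Skip strata that are too small or degenerate; this becomes
        # increasingly common as the number of conditioning variables grows.
        if table.values.sum() < min_stratum or min(table.shape) < 2:
            continue
        _, p, _, _ = stats.chi2_contingency(table)
        pvalues.append(p)
    if not pvalues:
        return np.nan  # no usable strata
    # Fisher's combination: -2 * sum(log p_i) ~ chi2 with 2k dof under H0.
    stat = -2.0 * np.sum(np.log(pvalues))
    return stats.chi2.sf(stat, df=2 * len(pvalues))
```

Because each additional conditioning variable multiplies the number of strata, more and more strata become too small or degenerate to test, which is the loss of power in high dimensions that the abstract refers to.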
Related papers
- Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models [49.06068319380296]
We introduce context-aware testing (CAT), which uses context as an inductive bias to guide the search for meaningful model failures.
We instantiate the first CAT system, SMART Testing, which employs large language models to hypothesize relevant and likely failures.
arXiv Detail & Related papers (2024-10-31T15:06:16Z)
- Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies [49.99600569996907]
Testing a hypothesized causal model against observational data is a key prerequisite for many causal inference tasks.
While a model can assume exponentially many conditional independence relations (CIs), testing all of them is both impractical and unnecessary.
We introduce c-LMP for causal graphs with hidden variables and develop an algorithm that lists these CIs with polynomial delay.
arXiv Detail & Related papers (2024-09-22T21:05:56Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem in two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Precise Error Rates for Computationally Efficient Testing [75.63895690909241]
We revisit the question of simple-versus-simple hypothesis testing with an eye towards computational complexity.
An existing test based on linear spectral statistics achieves the best possible tradeoff curve between type I and type II error rates.
arXiv Detail & Related papers (2023-11-01T04:41:16Z)
- Deep anytime-valid hypothesis testing [29.273915933729057]
We propose a general framework for constructing powerful, sequential hypothesis tests for nonparametric testing problems.
We develop a principled approach to leveraging the representation capability of machine learning models within the testing-by-betting framework.
Empirical results on synthetic and real-world datasets demonstrate that tests instantiated using our general framework are competitive against specialized baselines.
arXiv Detail & Related papers (2023-10-30T09:46:19Z)
- Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data [75.20035991513564]
We introduce 3S Testing, a deep generative modeling framework to facilitate model evaluation.
Our experiments demonstrate that 3S Testing outperforms traditional baselines.
These results raise the question of whether we need a paradigm shift away from limited real test data towards synthetic test data.
arXiv Detail & Related papers (2023-10-25T10:18:44Z)
- Model-Free Sequential Testing for Conditional Independence via Testing by Betting [8.293345261434943]
The proposed test allows researchers to analyze an incoming i.i.d. data stream with an arbitrary dependency structure.
Data points are processed online as soon as they arrive, and data acquisition stops once significant results are detected.
arXiv Detail & Related papers (2022-10-01T20:05:33Z)
- Learning to Increase the Power of Conditional Randomization Tests [8.883733362171032]
The model-X conditional randomization test is a generic framework for conditional independence testing.
We introduce novel model-fitting schemes designed to explicitly improve the power of model-X tests (a minimal sketch of the generic model-X CRT appears after this list).
arXiv Detail & Related papers (2022-07-03T12:29:25Z)
- Composite Goodness-of-fit Tests with Kernels [19.744607024807188]
We propose kernel-based hypothesis tests for the challenging composite testing problem.
Our tests make use of minimum distance estimators based on the maximum mean discrepancy and the kernel Stein discrepancy.
As our main result, we show that we are able to estimate the parameter and conduct our test on the same data, while maintaining a correct test level.
arXiv Detail & Related papers (2021-11-19T15:25:06Z)
- Testing Directed Acyclic Graph via Structural, Supervised and Generative Adversarial Learning [7.623002328386318]
We propose a new hypothesis testing method for directed acyclic graphs (DAGs).
We build the test on highly flexible neural network learners.
We demonstrate the efficacy of the test through simulations and a brain connectivity network analysis.
arXiv Detail & Related papers (2021-06-02T21:18:59Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
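For the related paper on conditional randomization tests listed above, the generic model-X CRT framework it builds on can be sketched as follows. This is a minimal illustration under simplifying assumptions (a linear Gaussian working model for X given Z and an absolute-correlation test statistic, both illustrative choices), not the model-fitting schemes proposed in that paper.

```python
# Minimal sketch of the generic model-X conditional randomization test (CRT).
# The working model for X | Z (linear Gaussian) and the statistic T are
# illustrative assumptions, not the method of the cited paper.
import numpy as np

def crt_pvalue(x, y, z, n_resamples=500, rng=None):
    """p-value for X _||_ Y | Z via the model-X CRT: resample X from an
    (estimated) conditional distribution given Z and compare the observed
    statistic against the resampled statistics."""
    rng = np.random.default_rng(rng)
    # Fit a working model for X | Z: linear regression with Gaussian noise.
    Z1 = np.column_stack([np.ones(len(z)), z])
    beta, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    resid = x - Z1 @ beta
    sigma = resid.std(ddof=Z1.shape[1])

    def T(x_):  # illustrative statistic: absolute correlation with y
        return abs(np.corrcoef(x_, y)[0, 1])

    t_obs = T(x)
    t_null = np.array([
        T(Z1 @ beta + rng.normal(0.0, sigma, size=len(x)))
        for _ in range(n_resamples)
    ])
    # Finite-sample valid p-value with the usual add-one correction.
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_resamples)
```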
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.