A Multiple kernel testing procedure for non-proportional hazards in
factorial designs
- URL: http://arxiv.org/abs/2206.07239v1
- Date: Wed, 15 Jun 2022 01:53:49 GMT
- Title: A Multiple kernel testing procedure for non-proportional hazards in
factorial designs
- Authors: Marc Ditzhaus and Tamara Fernández and Nicolás Rivera
- Abstract summary: We propose a multiple kernel testing procedure for survival data when several factors are of interest simultaneously.
Our method is able to deal with complex data and can be seen as an alternative to the omnipresent Cox model when assumptions such as proportionality cannot be justified.
- Score: 4.358626952482687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a multiple kernel testing procedure to infer
survival data when several factors (e.g. different treatment groups, gender,
medical history) and their interaction are of interest simultaneously. Our
method is able to deal with complex data and can be seen as an alternative to
the omnipresent Cox model when assumptions such as proportionality cannot be
justified. Our methodology combines well-known concepts from Survival Analysis,
Machine Learning and Multiple Testing: differently weighted log-rank tests,
kernel methods and multiple contrast tests. By that, complex hazard
alternatives beyond the classical proportional hazard set-up can be detected.
Moreover, multiple comparisons are performed by fully exploiting the dependence
structure of the single testing procedures to avoid a loss of power. In all,
this leads to a flexible and powerful procedure for factorial survival designs
whose theoretical validity is proven by martingale arguments and the theory for
$V$-statistics. We evaluate the performance of our method in an extensive
simulation study and illustrate it by a real data analysis.
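The weighted log-rank tests mentioned in the abstract are the building blocks that the kernel procedure aggregates over. As a minimal illustrative sketch (not the authors' multiple-kernel implementation), a two-sample weighted log-rank statistic with a user-supplied weight function might look like:

```python
import numpy as np

def weighted_logrank(time, event, group, weight=lambda s: 1.0):
    """Two-sample weighted log-rank statistic (illustrative sketch only).

    `weight` receives the pooled Kaplan-Meier estimate S(t-) just before
    each event time, so different choices target different alternatives:
        weight = lambda s: 1.0      # classical log-rank (proportional hazards)
        weight = lambda s: s        # emphasises early differences
        weight = lambda s: 1.0 - s  # emphasises late differences
    """
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    n = len(time)
    s = 1.0                # pooled Kaplan-Meier estimate S(t-)
    num = var = 0.0
    i = 0
    while i < n:
        t = time[i]
        j, d, d1 = i, 0, 0
        while j < n and time[j] == t:   # handle ties at time t
            if event[j]:
                d += 1                  # pooled deaths at t
                d1 += group[j]          # deaths at t in group 1
            j += 1
        Y = n - i                       # number at risk just before t
        Y1 = int(np.sum(group[i:] == 1))
        if d > 0:
            w = weight(s)
            num += w * (d1 - d * Y1 / Y)     # observed minus expected
            if Y > 1:                        # hypergeometric variance term
                var += w**2 * d * (Y1 / Y) * (1 - Y1 / Y) * (Y - d) / (Y - 1)
            s *= 1.0 - d / Y                 # Kaplan-Meier update
        i = j
    return num / np.sqrt(var) if var > 0 else 0.0
```

Under the null, the statistic is asymptotically standard normal. A single weight is only powerful against the alternatives it targets; the paper's point is to combine several such weights (induced by kernels) and correct for multiplicity using their joint dependence structure.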
Related papers
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one allows to account not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Scalable Randomized Kernel Methods for Multiview Data Integration and Prediction [4.801208484529834]
We develop scalable randomized kernel methods for jointly associating data from multiple sources and simultaneously predicting an outcome or classifying a unit into one of two or more classes.
The proposed methods model nonlinear relationships in multiview data together with predicting a clinical outcome and are capable of identifying variables or groups of variables that best contribute to the relationships among the views.
arXiv Detail & Related papers (2023-04-10T16:14:42Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Nonparametric Conditional Local Independence Testing [69.31200003384122]
Conditional local independence is an independence relation among continuous time processes.
No nonparametric test of conditional local independence has been available.
We propose such a nonparametric test based on double machine learning.
arXiv Detail & Related papers (2022-03-25T10:31:02Z)
- Evaluating Causal Inference Methods [0.4588028371034407]
We introduce a deep generative model-based framework, Credence, to validate causal inference methods.
arXiv Detail & Related papers (2022-02-09T00:21:22Z)
- Using Machine Learning to Test Causal Hypotheses in Conjoint Analysis [5.064097093575691]
We propose a new hypothesis testing approach based on the conditional randomization test.
Our methodology is solely based on the randomization of factors, and hence is free from assumptions.
arXiv Detail & Related papers (2022-01-20T18:23:12Z)
- A Quantitative Comparison of Epistemic Uncertainty Maps Applied to Multi-Class Segmentation [0.0]
This paper highlights a systematic approach to define and quantitatively compare those methods in two different contexts.
We applied this analysis to a multi-class segmentation of the carotid artery lumens and vessel wall, on a multi-center, multi-scanner, multi-sequence dataset.
We made a python package available to reproduce our analysis on different data and tasks.
arXiv Detail & Related papers (2021-09-22T12:48:19Z)
- Testing Directed Acyclic Graph via Structural, Supervised and Generative Adversarial Learning [7.623002328386318]
We propose a new hypothesis testing method for directed acyclic graphs (DAGs).
We build the test on highly flexible neural network learners.
We demonstrate the efficacy of the test through simulations and a brain connectivity network analysis.
arXiv Detail & Related papers (2021-06-02T21:18:59Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Marginal likelihood computation for model selection and hypothesis testing: an extensive review [66.37504201165159]
This article provides a comprehensive study of the state-of-the-art of the topic.
We highlight limitations, benefits, connections and differences among the different techniques.
Problems and possible solutions with the use of improper priors are also described.
arXiv Detail & Related papers (2020-05-17T18:31:58Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
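Cross-fitting, as used in the entry above, trains nuisance models on one fold and evaluates them on the held-out fold, then swaps roles. A minimal numpy-only sketch of a cross-fit augmented-IPW (doubly robust) ACE estimator might look like the following; the function name `crossfit_aipw` and the simple parametric learners standing in for the ML nuisance models are illustrative choices, not from the paper:

```python
import numpy as np

def _fit_logistic(X, y, iters=30):
    """Newton-Raphson logistic regression with intercept (a stand-in for
    any ML propensity learner)."""
    Z = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ b))
        H = Z.T @ (Z * (p * (1 - p))[:, None]) + 1e-8 * np.eye(Z.shape[1])
        b = b + np.linalg.solve(H, Z.T @ (y - p))
    return b

def _fit_ols(X, y):
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def _predict(b, X, logistic=False):
    eta = np.column_stack([np.ones(len(X)), X]) @ b
    return 1.0 / (1.0 + np.exp(-eta)) if logistic else eta

def crossfit_aipw(X, A, Y, n_splits=2, seed=0):
    """Cross-fit augmented-IPW estimate of the ACE E[Y(1) - Y(0)]:
    nuisances are fit on the other folds, evaluated on the held-out fold."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(Y)), n_splits)
    phi = np.zeros(len(Y))                      # efficient influence values
    for k, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        b_ps = _fit_logistic(X[train], A[train])
        b1 = _fit_ols(X[train][A[train] == 1], Y[train][A[train] == 1])
        b0 = _fit_ols(X[train][A[train] == 0], Y[train][A[train] == 0])
        e = np.clip(_predict(b_ps, X[test], logistic=True), 0.01, 0.99)
        mu1, mu0 = _predict(b1, X[test]), _predict(b0, X[test])
        phi[test] = (mu1 - mu0
                     + A[test] * (Y[test] - mu1) / e
                     - (1 - A[test]) * (Y[test] - mu0) / (1 - e))
    return phi.mean()
```

The doubly robust form stays consistent if either the propensity or the outcome model is correct, and cross-fitting removes the overfitting bias that otherwise arises when flexible ML learners are plugged in.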
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.