Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data
- URL: http://arxiv.org/abs/2008.08397v2
- Date: Wed, 26 Aug 2020 18:13:45 GMT
- Title: Kernelized Stein Discrepancy Tests of Goodness-of-fit for Time-to-Event Data
- Authors: Tamara Fernandez, Nicolas Rivera, Wenkai Xu and Arthur Gretton
- Abstract summary: We propose a collection of kernelized Stein discrepancy tests for time-to-event data.
Our experimental results show that our proposed methods perform better than existing tests.
- Score: 24.442094864838225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Survival Analysis and Reliability Theory are concerned with the analysis of
time-to-event data, in which observations correspond to waiting times until an
event of interest, such as death from a particular disease or failure of a
component in a mechanical system. This type of data is unique due to the
presence of censoring, a type of missing data that occurs when we do not
observe the actual time of the event of interest but instead have access
to an approximation of it, given by a random interval in which the event time
is known to lie. Most traditional methods are not designed to deal with
censoring, and thus we need to adapt them to censored time-to-event data. In
this paper, we focus on non-parametric goodness-of-fit testing procedures based
on combining Stein's method and kernelized discrepancies. While for
uncensored data there is a natural way of implementing a kernelized Stein
discrepancy test, for censored data there are several options, each of them
with different advantages and disadvantages. We propose a
collection of kernelized Stein discrepancy tests for time-to-event data, and we
study each of them theoretically and empirically; our experimental results show
that our proposed methods perform better than existing tests, including
previous tests based on a kernelized maximum mean discrepancy.
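As a rough illustration of the uncensored case mentioned in the abstract (a sketch of a standard kernelized Stein discrepancy estimate, not the authors' censored-data tests), the squared KSD can be estimated from a sample using only the model's score function s(x) = d/dx log p(x) and a kernel. The standard-normal model, the Gaussian kernel, and the bandwidth `h` below are illustrative assumptions.

```python
import numpy as np

def ksd_vstat(x, score, h=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy
    between the sample x and the model with score function `score`,
    using a Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 h^2))."""
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]           # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2 * h**2))        # Gaussian kernel matrix
    s = score(x)                          # model score at each sample point
    # Stein kernel u_p(x_i, x_j), assembled term by term:
    #   s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d^2k/dxdy
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * (d / h**2) * k        # s(x_i) * dk/dy
         + s[None, :] * (-d / h**2) * k       # s(x_j) * dk/dx
         + (1.0 / h**2 - d**2 / h**4) * k)    # mixed second derivative
    return u.mean()

rng = np.random.default_rng(0)
score_std_normal = lambda x: -x   # score of N(0, 1): d/dx log p(x) = -x

ksd_null = ksd_vstat(rng.standard_normal(500), score_std_normal)
ksd_alt = ksd_vstat(rng.standard_normal(500) + 1.0, score_std_normal)
# ksd_alt should be noticeably larger than ksd_null, since the shifted
# sample does not come from the N(0, 1) model
```

In a real test, the null distribution of the statistic would be approximated (e.g., by a wild bootstrap) to obtain a rejection threshold; the point here is only that the statistic depends on the model through its score function, so the normalizing constant of p is never needed.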
Related papers
- Sequential Kernelized Stein Discrepancy [34.773470589069476]
We exploit the potential boundedness of the Stein kernel at arbitrary point evaluations to define test martingales.
We prove the validity of the test, as well as a lower bound for the logarithmic growth of the wealth process under the alternative.
arXiv Detail & Related papers (2024-09-26T03:24:59Z)
- Meta-Analysis with Untrusted Data [14.28797726638936]
We show how to answer causal questions much more precisely by making two changes to meta-analysis.
First, we incorporate untrusted data drawn from large observational databases.
Second, we train richer models capable of handling heterogeneous trials.
arXiv Detail & Related papers (2024-07-12T16:07:53Z)
- Benchmarking Observational Studies with Experimental Data under Right-Censoring [18.768537827004536]
We consider two cases where censoring time is independent of time-to-event.
We show that the same test can still be used even though unbiased CATE estimation may not be possible.
We verify the effectiveness of our censoring-aware tests via semi-synthetic experiments and analyze RCT and OS data from the Women's Health Initiative study.
arXiv Detail & Related papers (2024-02-23T06:44:13Z)
- CenTime: Event-Conditional Modelling of Censoring in Survival Analysis [49.44664144472712]
We introduce CenTime, a novel approach to survival analysis that directly estimates the time to event.
Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce.
Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance.
arXiv Detail & Related papers (2023-09-07T17:07:33Z)
- Sequential Predictive Two-Sample and Independence Testing [114.4130718687858]
We study the problems of sequential nonparametric two-sample and independence testing.
We build upon the principle of (nonparametric) testing by betting.
arXiv Detail & Related papers (2023-04-29T01:30:33Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, to handle missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit the multi-modality features from related sensors and also learn from history time-series dynamics to reconstruct the data under extreme missingness.
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
- Stable Prediction via Leveraging Seed Variable [73.9770220107874]
Previous machine learning methods may exploit subtle spurious correlations in training data induced by non-causal variables for prediction.
We propose a conditional-independence-test-based algorithm to separate out causal variables, using a seed variable as a prior, and adopt them for stable prediction.
Our algorithm outperforms state-of-the-art methods for stable prediction.
arXiv Detail & Related papers (2020-06-09T06:56:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.