Experimental Comparison of Ensemble Methods and Time-to-Event Analysis
Models Through Integrated Brier Score and Concordance Index
- URL: http://arxiv.org/abs/2403.07460v1
- Date: Tue, 12 Mar 2024 09:57:45 GMT
- Title: Experimental Comparison of Ensemble Methods and Time-to-Event Analysis
Models Through Integrated Brier Score and Concordance Index
- Authors: Camila Fernandez (LPSM), Chung Shue Chen, Pierre Gaillard, Alonso
Silva
- Abstract summary: We review and compare the performance of several prediction models for time-to-event analysis.
We show how ensemble methods, which surprisingly have not yet been much studied in time-to-event analysis, can improve the prediction accuracy and enhance the robustness of the prediction performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-to-event analysis is a branch of statistics that has increased in
popularity during the last decades due to its many application fields, such as
predictive maintenance, customer churn prediction and population lifetime
estimation. In this paper, we review and compare the performance of several
prediction models for time-to-event analysis. These consist of semi-parametric
and parametric statistical models, in addition to machine learning approaches.
Our study is carried out on three datasets and evaluated using two different
scores (the integrated Brier score and the concordance index). Moreover, we show
how ensemble methods, which surprisingly have not yet been much studied in
time-to-event analysis, can improve the prediction accuracy and enhance the
robustness of the prediction performance. We conclude the analysis with a
simulation experiment in which we evaluate the factors influencing the
performance ranking of the methods using both scores.
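The concordance index mentioned above measures how well a model's risk scores order subjects by their event times. As a rough illustration only (not the authors' implementation; production versions exist in libraries such as scikit-survival), Harrell's C-index for right-censored data can be sketched in plain Python:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable when the subject with the smaller
    observed time actually experienced the event (events[i] == 1).
    The pair is concordant when the higher risk score belongs to the
    subject that failed earlier; tied scores count as half.
    """
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        # reorder so that i has the smaller observed time
        if times[j] < times[i]:
            i, j = j, i
        if times[i] == times[j]:
            continue  # skip ties in time in this simple sketch
        if not events[i]:
            continue  # earlier time is censored: pair not comparable
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / comparable

# toy data: higher risk score corresponds to an earlier event
times = [2, 4, 6, 8]
events = [1, 1, 0, 1]   # 1 = event observed, 0 = censored
risk = [0.9, 0.7, 0.5, 0.1]
print(concordance_index(times, events, risk))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering; the integrated Brier score used alongside it additionally requires censoring-adjusted weights (IPCW), which this sketch omits.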
Related papers
- Failure Risk Prediction in a MOOC: A Multivariate Time Series Analysis Approach [0.7087237546722617]
This work compares time series classification methods to identify at-risk learners at different stages of the course.
Preliminary results show that the evaluated approaches are promising for predicting learner failure in MOOCs.
arXiv Detail & Related papers (2025-07-17T12:22:10Z)
- Ranking and Combining Latent Structured Predictive Scores without Labeled Data [2.5064967708371553]
This paper introduces a novel structured unsupervised ensemble learning model (SUEL).
It exploits the dependency between a set of predictors with continuous predictive scores, ranks the predictors without labeled data, and combines them into an ensemble score with weights.
The efficacy of the proposed methods is rigorously assessed through both simulation studies and a real-world application to risk gene discovery.
arXiv Detail & Related papers (2024-08-14T20:14:42Z)
- Forecasting with Deep Learning: Beyond Average of Average of Average Performance [0.393259574660092]
Current practices for evaluating and comparing forecasting models focus on summarising performance into a single score.
We propose a novel framework for evaluating models from multiple perspectives.
We show the advantages of this framework by comparing a state-of-the-art deep learning approach with classical forecasting techniques.
arXiv Detail & Related papers (2024-06-24T12:28:22Z)
- Prediction of Dilatory Behavior in eLearning: A Comparison of Multiple Machine Learning Models [0.2963240482383777]
Procrastination, the irrational delay of tasks, is a common occurrence in online learning.
Research focusing on such predictions is scarce.
Studies involving different types of predictors and comparisons between the predictive performance of various methods are virtually non-existent.
arXiv Detail & Related papers (2022-06-30T07:24:08Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
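As a rough illustration of the three-stage idea described above (cluster, fit one simple model per cluster, forecast), here is a toy sketch; the clustering rule and the intercept-free AR(1) model are placeholder choices for brevity, not the paper's actual framework:

```python
def fit_ar1(series_list):
    """Pool lagged pairs from all series in a cluster and fit
    y_t = phi * y_{t-1} by least squares (no intercept, for brevity)."""
    num = den = 0.0
    for s in series_list:
        for prev, cur in zip(s, s[1:]):
            num += prev * cur
            den += prev * prev
    return num / den

def cluster_and_conquer(series_list):
    """Stage 1: a crude two-way clustering (growing vs. decaying series).
    Stage 2: one pooled AR(1) model per cluster.
    Stage 3: a one-step-ahead forecast for every series."""
    clusters = {True: [], False: []}
    for s in series_list:
        clusters[s[-1] >= s[0]].append(s)
    phis = {key: fit_ar1(group) for key, group in clusters.items() if group}
    return [phis[s[-1] >= s[0]] * s[-1] for s in series_list]

# two "growing" series (doubling) and two "decaying" series (halving)
series = [[1, 2, 4, 8], [2, 4, 8, 16], [1, 0.5, 0.25], [4, 2, 1]]
print(cluster_and_conquer(series))  # -> [16.0, 32.0, 0.125, 0.5]
```

Pooling observations within a cluster is what lets a simple per-cluster model borrow strength across similar series, which is the intuition behind the framework's strong results with linear autoregressive models.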
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Expected Validation Performance and Estimation of a Random Variable's Maximum [48.83713377993604]
We analyze three statistical estimators for expected validation performance.
We find the unbiased estimator has the highest variance, and the estimator with the smallest variance has the largest bias.
We find that the two biased estimators lead to the fewest incorrect conclusions.
arXiv Detail & Related papers (2021-10-01T18:48:47Z)
- Model Selection for Time Series Forecasting: Empirical Analysis of Different Estimators [1.6328866317851185]
We compare a set of estimation methods for model selection in time series forecasting tasks.
We empirically found that the accuracy of the estimators for selecting the best solution is low.
Some factors, such as the sample size, are important in the relative performance of the estimators.
arXiv Detail & Related papers (2021-04-01T16:08:25Z)
- A Statistical Analysis of Summarization Evaluation Metrics using Resampling Methods [60.04142561088524]
We find that the confidence intervals are rather wide, demonstrating high uncertainty in how reliable automatic metrics truly are.
Although many metrics fail to show statistical improvements over ROUGE, two recent works, QAEval and BERTScore, do in some evaluation settings.
arXiv Detail & Related papers (2021-03-31T18:28:14Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE)
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.