Experimental Comparison of Semi-parametric, Parametric, and Machine
Learning Models for Time-to-Event Analysis Through the Concordance Index
- URL: http://arxiv.org/abs/2003.08820v1
- Date: Fri, 13 Mar 2020 07:18:14 GMT
- Title: Experimental Comparison of Semi-parametric, Parametric, and Machine
Learning Models for Time-to-Event Analysis Through the Concordance Index
- Authors: Camila Fernandez (LINCS), Chung Shue Chen (LINCS), Pierre Gaillard
(SIERRA), Alonso Silva
- Abstract summary: We make an experimental comparison of semi-parametric (Cox proportional hazards model, Aalen's additive regression model), parametric (Weibull AFT model), and machine learning models (Random Survival Forest, Gradient Boosting with Cox Proportional Hazards Loss, DeepSurv) through the concordance index on two datasets (PBC and GBCSG2).
- Score: 1.5749416770494706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we make an experimental comparison of semi-parametric (Cox
proportional hazards model, Aalen's additive regression model), parametric
(Weibull AFT model), and machine learning models (Random Survival Forest,
Gradient Boosting with Cox Proportional Hazards Loss, DeepSurv) through the
concordance index on two different datasets (PBC and GBCSG2). We present two
comparisons: one with the default hyper-parameters of these models and one with
the best hyper-parameters found by randomized search.
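As a rough illustration of the protocol described in the abstract, the hedged sketch below fits a Cox proportional hazards model, a Random Survival Forest, and a gradient-boosted model with the Cox partial-likelihood loss, scores each with Harrell's concordance index (the fraction of comparable pairs (i, j) with an observed event for subject i and T_i < T_j in which the model assigns subject i the higher risk; 0.5 corresponds to random ordering, 1.0 to a perfect ranking), and then tunes the forest by randomized search. The library choice (scikit-survival), the GBSG2 loader, the train/test split, and the hyper-parameter ranges are assumptions made for illustration, not the authors' exact setup; Aalen's additive model, the Weibull AFT model, DeepSurv, and the PBC dataset are omitted here.

```python
# Hedged sketch of the comparison protocol (not the authors' code).
# Assumptions: scikit-survival as the library, GBSG2 as the dataset,
# an illustrative search grid and train/test split.
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sksurv.datasets import load_gbsg2
from sksurv.ensemble import GradientBoostingSurvivalAnalysis, RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder

# GBSG2 breast-cancer cohort; y is a structured array of (event indicator, time).
X, y = load_gbsg2()
X = OneHotEncoder().fit_transform(X)  # one-hot encode categorical covariates
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Comparison 1: default hyper-parameters. For scikit-survival estimators,
# .score() returns Harrell's concordance index on the given data.
models = {
    "Cox PH": CoxPHSurvivalAnalysis(),
    "Random Survival Forest": RandomSurvivalForest(random_state=0),
    "Gradient Boosting (Cox loss)": GradientBoostingSurvivalAnalysis(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: C-index = {model.score(X_test, y_test):.3f}")

# Comparison 2: randomized search over an illustrative hyper-parameter grid.
search = RandomizedSearchCV(
    RandomSurvivalForest(random_state=0),
    param_distributions={
        "n_estimators": [100, 300, 500],
        "min_samples_leaf": [3, 5, 10, 20],
        "max_features": ["sqrt", 0.5, None],
    },
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X_train, y_train)
print(f"Random Survival Forest (tuned): C-index = {search.score(X_test, y_test):.3f}")
```

The same RandomizedSearchCV wrapper can be applied to the other estimators, which mirrors the paper's second comparison (best hyper-parameters found by randomized search) alongside the default-parameter comparison.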
Related papers
- Refereeing the Referees: Evaluating Two-Sample Tests for Validating Generators in Precision Sciences [0.0]
One-dimensional tests provide a level of sensitivity comparable to other multivariate metrics, but with significantly lower computational cost.
This methodology offers an efficient, standardized tool for model comparison and can serve as a benchmark for more advanced tests.
arXiv Detail & Related papers (2024-09-24T13:58:46Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained across combinations of optimizers, parameterizations, and learning rates.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Machine Learning-Driven Optimization of TPMS Architected Materials Using Simulated Annealing [0.0]
The paper presents a novel approach to optimizing the tensile stress of Triply Periodic Minimal Surface (TPMS) structures through machine learning and Simulated Annealing (SA).
The study evaluates the performance of Random Forest, Decision Tree, and XGBoost models in predicting tensile stress, using a dataset generated from finite element analysis of TPMS models.
arXiv Detail & Related papers (2024-05-28T05:06:37Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Understanding Parameter Sharing in Transformers [53.75988363281843]
Previous work on Transformers has focused on sharing parameters in different layers, which can improve the performance of models with limited parameters by increasing model depth.
We show that the success of this approach can be largely attributed to better convergence, with only a small part due to the increased model complexity.
Experiments on 8 machine translation tasks show that our model achieves competitive performance with only half the model complexity of parameter sharing models.
arXiv Detail & Related papers (2023-06-15T10:48:59Z)
- Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts [40.24720443257405]
We provide a convergence analysis for maximum likelihood estimation (MLE) in the Gaussian-gated MoE model.
Our findings reveal that the MLE has distinct behaviors under two complementary settings of location parameters of the Gaussian gating functions.
Notably, these behaviors can be characterized by the solvability of two different systems of equations.
arXiv Detail & Related papers (2023-05-12T16:02:19Z)
- The Infinitesimal Jackknife and Combinations of Models [2.457924087844968]
We extend the Infinitesimal Jackknife to estimate the covariance between any two models.
This can be used to quantify uncertainty for combinations of models, or to construct test statistics for comparing different models.
arXiv Detail & Related papers (2022-08-31T22:37:44Z)
- On the Influence of Enforcing Model Identifiability on Learning Dynamics of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for extracting features from two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, where "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.