Neural Frailty Machine: Beyond proportional hazard assumption in neural
survival regressions
- URL: http://arxiv.org/abs/2303.10358v2
- Date: Wed, 4 Oct 2023 05:49:41 GMT
- Title: Neural Frailty Machine: Beyond proportional hazard assumption in neural
survival regressions
- Authors: Ruofan Wu, Jiawei Qiao, Mingzhe Wu, Wen Yu, Ming Zheng, Tengfei Liu,
Tianyi Zhang, Weiqiang Wang
- Abstract summary: We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions.
Two concrete models are derived under the framework that extends neural proportional hazard models and nonparametric hazard regression models.
We conduct experimental evaluations over $6$ benchmark datasets of different scales, showing that the proposed NFM models outperform state-of-the-art survival models in terms of predictive performance.
- Score: 30.018173329118184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present neural frailty machine (NFM), a powerful and flexible neural
modeling framework for survival regressions. The NFM framework utilizes the
classical idea of multiplicative frailty in survival analysis to capture
unobserved heterogeneity among individuals, at the same time being able to
leverage the strong approximation power of neural architectures for handling
nonlinear covariate dependence. Two concrete models are derived under the
framework that extends neural proportional hazard models and nonparametric
hazard regression models. Both models allow efficient training under the
likelihood objective. Theoretically, for both proposed models, we establish
statistical guarantees of neural function approximation with respect to
nonparametric components via characterizing their rate of convergence.
Empirically, we provide synthetic experiments that verify our theoretical
statements. We also conduct experimental evaluations over $6$ benchmark
datasets of different scales, showing that the proposed NFM models outperform
state-of-the-art survival models in terms of predictive performance. Our code
is publicly available at https://github.com/Rorschach1989/nfm
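For readers who want a concrete picture of the kind of model the abstract describes, below is a minimal sketch of a neural proportional hazards model with multiplicative frailty. The unit-mean gamma frailty, the Weibull baseline hazard, and all names are our illustrative assumptions, not details taken from the paper. Under this specification the conditional hazard is lambda(t | x, w) = w * lambda0(t) * exp(m(x)) with a neural network m; integrating out the frailty gives a closed-form marginal likelihood, consistent with the abstract's statement that the models allow efficient training under the likelihood objective.

# Hedged sketch: a neural proportional-hazards model with multiplicative gamma
# frailty, in the spirit of the NFM abstract. The Weibull baseline, the gamma
# frailty choice, and all names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class NeuralGammaFrailtyPH(nn.Module):
    """Marginal likelihood of a PH model with unit-mean gamma frailty.

    Conditional hazard: lambda(t | x, w) = w * lambda0(t) * exp(m(x)),
    with w ~ Gamma(1/theta, 1/theta). Marginalizing over w gives
    S(t | x) = (1 + theta * Lambda0(t) * exp(m(x)))^(-1/theta).
    """

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.m = nn.Sequential(                         # neural covariate effect m(x)
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.log_shape = nn.Parameter(torch.zeros(1))   # Weibull baseline shape k
        self.log_scale = nn.Parameter(torch.zeros(1))   # Weibull baseline scale b
        self.log_theta = nn.Parameter(torch.zeros(1))   # frailty variance theta

    def neg_log_likelihood(self, x, t, event):
        """x: (n, p) covariates, t: (n,) times, event: (n,) 1.0=observed, 0.0=censored."""
        k, b, theta = self.log_shape.exp(), self.log_scale.exp(), self.log_theta.exp()
        mx = self.m(x).squeeze(-1)

        # Weibull baseline cumulative hazard and log-hazard.
        Lambda0 = (t / b) ** k
        log_lambda0 = torch.log(k) - torch.log(b) + (k - 1) * torch.log(t / b)

        # Marginal survival and hazard after integrating out the gamma frailty.
        A = 1.0 + theta * Lambda0 * mx.exp()
        log_S = -(1.0 / theta) * torch.log(A)
        log_hazard = log_lambda0 + mx - torch.log(A)

        # Right-censored log-likelihood: events contribute log f = log hazard + log S,
        # censored observations contribute log S only.
        return -(event * log_hazard + log_S).mean()

# Usage (illustrative): model = NeuralGammaFrailtyPH(n_features=x.shape[1])
#                       loss = model.neg_log_likelihood(x, t, event.float())
#                       loss.backward(); optimizer.step()

The abstract's second variant would presumably replace the proportional-hazards term Lambda0(t) * exp(m(x)) with a fully nonparametric neural function of (t, x); we keep the sketch to the proportional-hazards case, where the frailty can be marginalized in closed form.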
Related papers
- How Inverse Conditional Flows Can Serve as a Substitute for Distributional Regression [2.9873759776815527]
We propose a framework for distributional regression using inverse flow transformations (DRIFT).
DRIFT covers both interpretable statistical models and flexible neural networks, opening up new avenues in both statistical modeling and deep learning.
arXiv Detail & Related papers (2024-05-08T21:19:18Z) - Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z) - The Surprising Harmfulness of Benign Overfitting for Adversarial
Robustness [13.120373493503772]
We prove a surprising result: even if the ground truth itself is robust to adversarial examples and the benignly overfitted model is benign in terms of the "standard" out-of-sample risk objective, that model can still fail to be adversarially robust.
Our finding provides theoretical insight into a puzzling phenomenon observed in practice, where the true target function (e.g., a human) is robust against adversarial attacks, while benignly overfitted neural networks lead to models that are not robust.
arXiv Detail & Related papers (2024-01-19T15:40:46Z) - Exploring hyperelastic material model discovery for human brain cortex:
multivariate analysis vs. artificial neural network approaches [10.003764827561238]
This study aims to identify the most favorable material model for human brain tissue.
We apply artificial neural network and multiple regression methods to a generalization of widely accepted classic models.
arXiv Detail & Related papers (2023-10-16T18:49:59Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real time to multi-dimensional scattering data (see the sketch after this list).
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Robust Neural Posterior Estimation and Statistical Model Criticism [1.5749416770494706]
We argue that modellers must treat simulators as idealistic representations of the true data generating process.
In this work we revisit neural posterior estimation (NPE), a class of algorithms that enable black-box parameter inference in simulation models.
We find that the presence of misspecification, in contrast, leads to unreliable inference when NPE is used naively.
arXiv Detail & Related papers (2022-10-12T20:06:55Z) - Nonparametric likelihood-free inference with Jensen-Shannon divergence
for simulator-based models with categorical output [1.4298334143083322]
Likelihood-free inference for simulator-based statistical models has attracted a surge of interest, both in the machine learning and statistics communities.
Here we derive a set of theoretical results to enable estimation, hypothesis testing and construction of confidence intervals for model parameters using computational properties of the Jensen-Shannon divergence.
Such approximation offers a rapid alternative to more computationally intensive approaches and can be attractive for diverse applications of simulator-based models.
arXiv Detail & Related papers (2022-05-22T18:00:13Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models as well as the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains from stochastic regularization, making the performance difference between neural ODE and neural SDE versions negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
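As referenced in the entry on "Capturing dynamical correlations using implicit neural representations" above, the following is a minimal sketch of the train-once, differentiate-through-the-surrogate pattern that the summary describes. The shapes, the toy optimization loop, and all names are our assumptions, not the authors' implementation.

# Hedged sketch: recover unknown model parameters by gradient descent through a
# neural surrogate that was trained (elsewhere) to mimic simulated data.
import torch
import torch.nn as nn

# Surrogate: maps model parameters (e.g. Hamiltonian couplings) to a predicted
# measurement (e.g. a spectrum evaluated on a fixed grid of 100 points).
surrogate = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 100),
)
# Assume `surrogate` has already been trained on simulated (parameters, spectrum)
# pairs; it is frozen here and only used as a differentiable forward model.
for p in surrogate.parameters():
    p.requires_grad_(False)

def recover_parameters(observed, steps=500, lr=1e-2):
    """Fit unknown parameters so the surrogate's prediction matches the data."""
    params = torch.zeros(2, requires_grad=True)       # initial guess
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((surrogate(params) - observed) ** 2).mean()
        loss.backward()                               # autodiff through the surrogate
        opt.step()
    return params.detach()

# Usage (illustrative): fitted = recover_parameters(torch.as_tensor(spectrum, dtype=torch.float32))

Because the surrogate is trained only once, repeated parameter recoveries of this kind can run quickly, which matches the summary's point about applying the model in real time to new scattering data.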