Neural Network-Based Piecewise Survival Models
- URL: http://arxiv.org/abs/2403.18664v1
- Date: Wed, 27 Mar 2024 15:08:00 GMT
- Title: Neural Network-Based Piecewise Survival Models
- Authors: Olov Holmer, Erik Frisk, Mattias Krysander
- Abstract summary: A family of neural network-based survival models is presented.
The models can be seen as an extension of the commonly used discrete-time and piecewise exponential models.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, a family of neural network-based survival models is presented. The models are specified through piecewise definitions of the hazard function and the density function on a partitioning of the time axis; both piecewise constant and piecewise linear definitions are presented, resulting in a family of four models. The models can be seen as an extension of the commonly used discrete-time and piecewise exponential models and thereby add flexibility to this set of standard models. On a simulated dataset, the models are shown to perform well compared to the highly expressive, state-of-the-art energy-based model while requiring only a fraction of its computation time.
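All four variants share one construction: fix a time grid 0 = τ_0 < τ_1 < ... < τ_K, let a network map the covariates to one parameter per interval, and train on the right-censored log-likelihood, where an event at time t contributes log h(t|x) − H(t|x) and a censored observation contributes −H(t|x). Below is a minimal PyTorch sketch of the piecewise-constant-hazard variant; the layer sizes, grid handling, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PiecewiseConstantHazard(nn.Module):
    """Sketch of a neural piecewise-constant hazard model: an MLP maps
    covariates x to one non-negative hazard value per interval of a
    fixed time grid (an assumption, not the paper's exact code)."""

    def __init__(self, n_features, grid):
        super().__init__()
        self.register_buffer("grid", grid)  # boundaries (K+1,), grid[0] == 0
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, len(grid) - 1), nn.Softplus(),  # K hazards > 0
        )

    def cumulative_hazard(self, x, t):
        lam = self.net(x)                                   # (N, K)
        lo, hi = self.grid[:-1], self.grid[1:]
        # time spent in each interval before t
        dur = (torch.minimum(t.unsqueeze(1), hi) - lo).clamp(min=0.0)
        return (lam * dur).sum(dim=1)                       # H(t|x), (N,)

    def nll(self, x, t, event):
        """Right-censored NLL: -[event * log h(t|x) - H(t|x)]."""
        lam = self.net(x)
        idx = torch.searchsorted(self.grid[1:], t, right=True)
        idx = idx.clamp(max=lam.size(1) - 1).unsqueeze(1)   # interval of t
        lam_t = lam.gather(1, idx).squeeze(1)               # h(t|x)
        return -(event * torch.log(lam_t + 1e-8)
                 - self.cumulative_hazard(x, t)).mean()
```

The piecewise-linear variants instead interpolate the per-interval parameters linearly, which makes the cumulative hazard piecewise quadratic but keeps the likelihood in closed form.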
Related papers
- Deep Koopman-layered Model with Universal Property Based on Toeplitz Matrices [26.96258010698567]
The proposed model offers both theoretical soundness and flexibility.
This flexibility enables the model to fit time-series data from nonautonomous dynamical systems.
arXiv Detail & Related papers (2024-10-03T04:27:46Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
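"Maximum likelihood estimation with Markov chain Monte Carlo" here typically means short-run Langevin dynamics in the latent space to approximate the required expectations. A generic, self-contained sketch of that inner loop (the step size, step count, and energy callable are assumptions):

```python
import torch

def langevin_sample(energy, z0, n_steps=60, step=0.1):
    """Short-run Langevin dynamics: approximate samples from
    p(z) proportional to exp(-energy(z))."""
    z = z0.detach().requires_grad_(True)
    for _ in range(n_steps):
        grad, = torch.autograd.grad(energy(z).sum(), z)
        z = (z - 0.5 * step ** 2 * grad
             + step * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()
```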
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Likelihood Based Inference in Fully and Partially Observed Exponential Family Graphical Models with Intractable Normalizing Constants [4.532043501030714]
Probabilistic graphical models that encode an underlying Markov random field are fundamental building blocks of generative modeling.
This paper demonstrates that full likelihood-based analysis of these models is feasible in a computationally efficient manner.
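The "intractable normalizing constants" are concrete in a small Ising-type Markov random field: the exact likelihood requires a partition function that sums over every configuration, which is only feasible for tiny dimensions. A brute-force illustration (the coupling matrix J and field h are hypothetical inputs):

```python
import itertools
import numpy as np

def ising_loglik(J, h, x):
    """Exact log-likelihood of a spin vector x in {-1,+1}^d under
    p(x) proportional to exp(0.5*x'Jx + h'x); computing Z costs O(2^d)."""
    unnorm = lambda s: np.exp(0.5 * s @ J @ s + h @ s)
    Z = sum(unnorm(np.array(s))
            for s in itertools.product([-1, 1], repeat=len(h)))
    return np.log(unnorm(np.asarray(x))) - np.log(Z)
```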
arXiv Detail & Related papers (2024-04-27T02:58:22Z)
- Artificial neural networks and time series of counts: A class of nonlinear INGARCH models [0.0]
It is shown how INGARCH models can be combined with artificial neural network (ANN) response functions to obtain a class of nonlinear INGARCH models.
The ANN framework allows many existing INGARCH models to be interpreted as degenerate versions of corresponding neural models.
The empirical analysis of time series of bounded and unbounded counts reveals that the neural INGARCH models outperform reasonable degenerate competitor models in terms of information loss.
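The model class is a small change to the classical recursion: keep the INGARCH conditional-mean update but let a network supply the response function, so the linear model becomes the degenerate special case. A sketch of that recursion (the response functions and coefficients are illustrative):

```python
import numpy as np

def ingarch_filter(y, f):
    """Conditional-mean recursion of an INGARCH(1,1)-type count model:
    y_t | past ~ Poisson(lambda_t), lambda_t = f(y_{t-1}, lambda_{t-1})."""
    lam = np.empty(len(y))
    lam[0] = max(y.mean(), 1.0)  # crude initialization
    for t in range(1, len(y)):
        lam[t] = f(y[t - 1], lam[t - 1])
    return lam

# Degenerate linear response vs. a toy nonlinear one with positive output:
linear_f = lambda y_prev, lam_prev: 0.5 + 0.3 * y_prev + 0.6 * lam_prev
neural_f = lambda y_prev, lam_prev: np.log1p(np.exp(
    0.2 + 0.4 * np.tanh(y_prev - lam_prev) + 0.6 * lam_prev))  # softplus
```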
arXiv Detail & Related papers (2023-04-03T14:26:16Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero-shot and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Neural Basis Models for Interpretability [33.51591891812176]
Generalized Additive Models (GAMs) are an inherently interpretable class of models.
We propose an entirely new subfamily of GAMs that utilizes a basis decomposition of shape functions.
A small number of basis functions are shared among all features, and are learned jointly for a given task.
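The architecture is compact: one small network produces all K basis values for a scalar input, and each feature only learns K mixing coefficients, so per-feature shape functions stay interpretable while almost all parameters are shared. A minimal PyTorch sketch (sizes and names are assumptions, not the released code):

```python
import torch
import torch.nn as nn

class NeuralBasisModel(nn.Module):
    """GAM whose per-feature shape functions are linear combinations
    of K basis functions shared across all features."""

    def __init__(self, n_features, n_bases=8, hidden=32):
        super().__init__()
        self.basis = nn.Sequential(            # shared bases b_1..b_K
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, n_bases))
        self.coef = nn.Parameter(torch.zeros(n_features, n_bases))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                      # x: (N, D)
        N, D = x.shape
        b = self.basis(x.reshape(-1, 1)).reshape(N, D, -1)  # (N, D, K)
        shape_fns = (b * self.coef).sum(-1)    # (N, D): f_i(x_i)
        return shape_fns.sum(-1) + self.bias   # additive prediction
```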
arXiv Detail & Related papers (2022-05-27T17:31:19Z)
- Low-Rank Constraints for Fast Inference in Structured Models [110.38427965904266]
This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces.
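The source of the speedup is easy to state: if the n-by-n transition (or potential) matrix of a structured model is constrained to rank r, the matrix-vector product inside dynamic programming factors into two skinny products. A sketch on an HMM-style forward pass (the factorization T = U @ V.T and all names are illustrative):

```python
import numpy as np

def forward_lowrank(log_emit, U, V, init):
    """Forward recursion with a rank-r transition T = U @ V.T: each step
    costs O(n*r) via (alpha @ U) @ V.T instead of O(n^2) for alpha @ T."""
    alpha = init * np.exp(log_emit[0])
    c = alpha.sum(); loglik = np.log(c); alpha = alpha / c
    for t in range(1, len(log_emit)):
        alpha = ((alpha @ U) @ V.T) * np.exp(log_emit[t])
        c = alpha.sum(); loglik += np.log(c); alpha = alpha / c
    return loglik  # log-likelihood of the observation sequence
```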
arXiv Detail & Related papers (2022-01-08T00:47:50Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
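The "closed form" means the ODE solver is replaced by an explicit, time-gated interpolation between network heads, evaluated in one shot for any elapsed time. A loose sketch in that spirit (the gating form, heads, and sizes are assumptions rather than the paper's exact parameterization):

```python
import torch
import torch.nn as nn

class CfCStyleCell(nn.Module):
    """Solver-free recurrent cell: the new state is a sigmoid-gated,
    time-dependent blend of two network branches."""

    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.f = nn.Linear(n_in + n_hidden, n_hidden)  # time-constant head
        self.g = nn.Linear(n_in + n_hidden, n_hidden)  # short-horizon branch
        self.h = nn.Linear(n_in + n_hidden, n_hidden)  # long-horizon branch

    def forward(self, x, state, dt):
        z = torch.cat([x, state], dim=-1)
        gate = torch.sigmoid(-self.f(z) * dt)          # decays with elapsed time
        return gate * torch.tanh(self.g(z)) + (1 - gate) * torch.tanh(self.h(z))
```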
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine-learning-inspired models with physics-based models.
We use such models for real-time diagnosis applications.
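A common shape for such hybrids, sketched below under assumed names, is a residual architecture: the physics-based model carries the known dynamics, a learned component absorbs the unmodeled mismatch, and the remaining residual is monitored as the diagnostic signal. This is a generic pattern, not necessarily the authors' exact design.

```python
import numpy as np

def diagnosis_residual(y_meas, u, physics_model, correction_net):
    """Hybrid residual generator: physics predicts the nominal output,
    an ML model corrects what physics misses, and the residual
    y - (physics + correction) stays near zero when fault-free."""
    y_hat = physics_model(u) + correction_net(u)
    return np.asarray(y_meas) - np.asarray(y_hat)
```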
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.