Case-Base Neural Networks: survival analysis with time-varying,
higher-order interactions
- URL: http://arxiv.org/abs/2301.06535v4
- Date: Tue, 9 Jan 2024 23:01:46 GMT
- Title: Case-Base Neural Networks: survival analysis with time-varying,
higher-order interactions
- Authors: Jesse Islam, Maxime Turgeon, Robert Sladek, Sahir Bhatnagar
- Abstract summary: We propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures.
CBNNs predict the probability of an event occurring at a given moment to estimate the full hazard function.
Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single event survival outcomes.
- Score: 0.20482269513546458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of survival analysis, data-driven neural network-based methods
have been developed to model complex covariate effects. While these methods may
provide better predictive performance than regression-based approaches, not all
can model time-varying interactions and complex baseline hazards. To address
this, we propose Case-Base Neural Networks (CBNNs) as a new approach that
combines the case-base sampling framework with flexible neural network
architectures. Using a novel sampling scheme and data augmentation to naturally
account for censoring, we construct a feed-forward neural network that includes
time as an input. CBNNs predict the probability of an event occurring at a
given moment to estimate the full hazard function. We compare the performance
of CBNNs to regression and neural network-based survival methods in a
simulation and three case studies using two time-dependent metrics. First, we
examine performance on a simulation involving a complex baseline hazard and
time-varying interactions to assess all methods, with CBNN outperforming
competitors. Then, we apply all methods to three real data applications, with
CBNNs outperforming the competing models in two studies and showing similar
performance in the third. Our results highlight the benefit of combining
case-base sampling with deep learning to provide a simple and flexible
framework for data-driven modeling of single event survival outcomes that
estimates time-varying effects and a complex baseline hazard by design. An R
package is available at https://github.com/Jesse-Islam/cbnn.
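To make the two ingredients above concrete, here is a minimal sketch of case-base sampling and of a feed-forward network that takes time as an ordinary input. It is not the authors' implementation (the released package is in R); the PyTorch framing, layer sizes, and 10:1 sampling ratio are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def case_base_sample(times, events, X, ratio=10, seed=0):
    """Sketch of case-base sampling: keep every event person-moment (the
    case series) and draw ratio * n_events person-moments uniformly from
    the total follow-up time (the base series), which naturally handles
    censoring."""
    rng = np.random.default_rng(seed)
    B = times.sum()                         # total person-time in the study
    n_cases = int(events.sum())
    b = ratio * n_cases                     # size of the base series
    # choose subjects proportional to follow-up, then a uniform moment within it
    idx = rng.choice(len(times), size=b, p=times / B)
    moments = rng.uniform(0.0, times[idx])
    base = np.column_stack([moments, X[idx]])
    case = np.column_stack([times[events == 1], X[events == 1]])
    feats = np.vstack([base, case]).astype(np.float32)
    labels = np.concatenate([np.zeros(b), np.ones(n_cases)]).astype(np.float32)
    offset = np.log(B / b)                  # corrects for the sampling rate
    return feats, labels, offset

class CBNN(nn.Module):
    """Feed-forward network with time as an ordinary input, so time-varying
    interactions and a flexible baseline hazard are learned by the hidden
    layers rather than specified by hand."""
    def __init__(self, n_features, offset):
        super().__init__()
        self.offset = offset
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, 64), nn.ReLU(),  # input = [time, covariates]
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, t_and_x):
        return self.net(t_and_x).squeeze(-1)  # logit of an event at that moment

    def hazard(self, t_and_x):
        # on the case-base sample, log-odds = log hazard + log(B / b),
        # so subtracting the offset recovers the hazard itself
        return torch.exp(self.forward(t_and_x) - self.offset)
```

Training then reduces to binary cross-entropy on the sampled person-moments (e.g. torch.nn.BCEWithLogitsLoss on the forward output), and a survival curve follows by numerically integrating the predicted hazard, S(t | x) = exp(-∫0^t h(u | x) du).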
Related papers
- Time Elastic Neural Networks [2.1756081703276]
We introduce and detail an atypical neural network architecture called the time elastic neural network (teNN).
The novelty compared to classical neural network architectures is that it explicitly incorporates a time warping ability.
We demonstrate that, during the training process, the teNN succeeds in reducing the number of neurons required within each cell.
arXiv Detail & Related papers (2024-05-27T09:01:30Z)
- EAS-SNN: End-to-End Adaptive Sampling and Representation for Event-based Detection with Recurrent Spiking Neural Networks [14.046487518350792]
Spiking Neural Networks (SNNs) operate in an event-driven manner through sparse spike communication.
We introduce Residual Potential Dropout (RPD) and Spike-Aware Training (SAT) to regulate potential distribution.
Our method yields a 4.4% mAP improvement on the Gen1 dataset, while requiring 38% fewer parameters and only three time steps.
arXiv Detail & Related papers (2024-03-19T09:34:11Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
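The core mechanism in the entry above, a small bank of learnable memory tokens that the input attends over, can be sketched generically. This is not the paper's architecture, only a plain cross-attention memory layer in PyTorch with made-up sizes:

```python
import torch
import torch.nn as nn

class MemoryTokenLayer(nn.Module):
    """Generic sketch: queries come from the input features, while keys and
    values come from a small bank of learnable memory tokens."""
    def __init__(self, dim=64, n_tokens=16, n_heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, seq, dim)
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        out, _ = self.attn(query=x, key=mem, value=mem)
        return self.norm(x + out)              # residual read from memory

feats = torch.randn(8, 10, 64)                 # toy batch
print(MemoryTokenLayer()(feats).shape)         # torch.Size([8, 10, 64])
```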
- Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z)
- SPP-CNN: An Efficient Framework for Network Robustness Prediction [13.742495880357493]
This paper develops an efficient framework for network robustness prediction, the spatial pyramid pooling convolutional neural network (SPP-CNN).
The new framework installs a spatial pyramid pooling layer between the convolutional and fully-connected layers, overcoming the common input-size mismatch issue in CNN-based prediction approaches.
arXiv Detail & Related papers (2023-05-13T09:09:20Z)
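Spatial pyramid pooling itself is a standard construction (He et al., 2014): pooling the final convolutional feature map over several fixed grids produces a fixed-length vector for any input resolution, which is the mismatch the entry refers to. A minimal sketch with illustrative pyramid levels:

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Pool a (batch, channels, H, W) feature map at several grid sizes and
    concatenate, producing a fixed-length vector for any H and W."""
    pooled = [F.adaptive_max_pool2d(fmap, size).flatten(1) for size in levels]
    return torch.cat(pooled, dim=1)  # (batch, channels * sum(l * l for l in levels))

fmap = torch.randn(2, 32, 17, 23)        # odd spatial size on purpose
print(spatial_pyramid_pool(fmap).shape)  # torch.Size([2, 672]) = 32 * (1 + 4 + 16)
```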
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregularly timed observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
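For context on the baseline compared against here, interval bound propagation pushes an elementwise input interval through each layer by splitting the weights into positive and negative parts. A generic numpy sketch of one affine-plus-ReLU step, not the paper's reachability method:

```python
import numpy as np

def ibp_affine_relu(lower, upper, W, b):
    """Propagate an elementwise interval [lower, upper] through
    x -> relu(W @ x + b) using the positive/negative weight split."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lower + W_neg @ upper + b   # smallest pre-activation
    out_hi = W_pos @ upper + W_neg @ lower + b   # largest pre-activation
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
lo, hi = ibp_affine_relu(x - 0.1, x + 0.1, W, b)  # guaranteed output bounds
```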
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Error-feedback stochastic modeling strategy for time series forecasting with convolutional neural networks [11.162185201961174]
We propose a novel Error-feedback Stochastic Modeling (ESM) strategy to construct a random Convolutional Neural Network (ESM-CNN) for the time series forecasting task.
The proposed ESM-CNN not only outperforms state-of-the-art random neural networks, but also exhibits stronger predictive power and lower computing overhead than trained state-of-the-art deep neural network models.
arXiv Detail & Related papers (2020-02-03T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.