SurvReLU: Inherently Interpretable Survival Analysis via Deep ReLU Networks
- URL: http://arxiv.org/abs/2407.14463v2
- Date: Thu, 15 Aug 2024 04:07:25 GMT
- Title: SurvReLU: Inherently Interpretable Survival Analysis via Deep ReLU Networks
- Authors: Xiaotong Sun, Peijie Qiu, Shengfan Zhang
- Abstract summary: We bridge the gap between deep survival models and traditional tree-based survival models through deep rectified linear unit (ReLU) networks.
We show that a deliberately constructed deep ReLU network (SurvReLU) can harness the interpretability of tree-based structures with the representational power of deep survival models.
- Score: 0.8520624117635326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Survival analysis models time-to-event distributions under censoring. Recently, deep survival models built on neural networks have dominated the field owing to their representational power and state-of-the-art performance. However, their "black-box" nature hinders interpretability, which is crucial in real-world applications. In contrast, "white-box" tree-based survival models offer better interpretability but struggle to converge to global optima due to greedy expansion. In this paper, we bridge the gap between previous deep survival models and traditional tree-based survival models through deep rectified linear unit (ReLU) networks. We show that a deliberately constructed deep ReLU network (SurvReLU) can harness the interpretability of tree-based structures together with the representational power of deep survival models. Empirical studies on both simulated and real survival benchmark datasets show the effectiveness of the proposed SurvReLU in terms of both performance and interpretability. The code is available at https://github.com/xs018/SurvReLU.
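To make the tree connection concrete, here is a minimal Python sketch (not the authors' SurvReLU code; the network size and names are illustrative) of how a ReLU network's binary activation pattern partitions the input space into regions that can be read off as rules, much like the leaves of a decision tree.

```python
import numpy as np

# Minimal sketch (not the authors' SurvReLU code): a one-hidden-layer
# ReLU network partitions the input space into convex regions, one per
# binary activation pattern -- analogous to the leaves of a decision tree.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))   # 4 hidden ReLU units over 2 input features
b = rng.normal(size=4)

def activation_pattern(x):
    """Binary code of which ReLU units fire; plays the role of a tree path."""
    return tuple((W @ x + b > 0).astype(int))

# All points sharing a pattern satisfy the same set of linear inequalities,
# so a region's prediction can be read off as an interpretable rule,
# e.g. "w1.x + b1 > 0 AND w2.x + b2 <= 0".
points = rng.normal(size=(1000, 2))
regions = {activation_pattern(x) for x in points}
print(len(regions), "occupied regions out of", 2 ** 4, "possible patterns")
```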
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that produces more explainable activation heatmaps while simultaneously increasing model performance.
Specifically, our framework introduces a new metric, explanation consistency, to adaptively reweight the training samples during learning.
The framework then promotes learning by paying closer attention to training samples whose explanations differ the most; a minimal sketch of this reweighting idea appears after the entry.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
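A hedged Python sketch of explanation-consistency reweighting; the gradient-based "explanation" and the weighting rule here are illustrative assumptions, not the paper's exact metric.

```python
import torch
import torch.nn.functional as F

# Hedged sketch (not the paper's exact metric): compare input-gradient
# explanations of a sample and its perturbed copy, and upweight samples
# whose explanations disagree.
def explanation(model, x, y):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.flatten(1)                      # one explanation vector per sample

def consistency_weights(model, x, y, noise=0.05):
    e1 = explanation(model, x, y)
    e2 = explanation(model, x + noise * torch.randn_like(x), y)
    cos = F.cosine_similarity(e1, e2, dim=1)    # agreement of the two explanations
    return 1.0 + (1.0 - cos).clamp(min=0.0)     # inconsistent samples get more weight

# Usage inside a training step:
#   w = consistency_weights(model, x, y).detach()
#   loss = (w * F.cross_entropy(model(x), y, reduction="none")).mean()
```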
- NSOTree: Neural Survival Oblique Tree [0.21756081703275998]
Survival analysis is a statistical method for modeling the time until a specific event of interest occurs.
Deep learning-based methods have dominated this field due to their representational capacity and state-of-the-art performance.
In this paper, we combine the strengths of neural networks and tree-based methods, pairing the former's ability to approximate intricate functions with the latter's interpretability; an oblique-split sketch follows the entry.
arXiv Detail & Related papers (2023-09-25T02:14:15Z)
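A short Python sketch of an oblique split (illustrative, not NSOTree's code): unlike an axis-aligned test, an oblique node thresholds a learned linear combination of features, which is exactly what a single ReLU unit computes.

```python
import numpy as np

# Hedged sketch of an oblique split (illustrative, not NSOTree's code):
# instead of an axis-aligned test "x[j] <= t", an oblique node thresholds
# a learned linear combination of features.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))                 # 6 samples, 3 features
w, b = np.array([0.8, -0.5, 0.2]), 0.1      # learned split direction and bias

goes_left = X @ w + b <= 0                  # oblique routing decision per sample
print(goes_left)
```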
- Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint [90.69718495533144]
We introduce Bort, an algorithm for improving model explainability.
Based on Bort, we are able to synthesize explainable adversarial samples without additional parameters and training.
We find Bort consistently improves the classification accuracy of various architectures, including ResNet and DeiT, on MNIST, CIFAR-10, and ImageNet; a sketch of a soft orthogonality penalty in this spirit follows the entry.
arXiv Detail & Related papers (2022-12-18T11:02:50Z)
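Bort's actual algorithm enforces bounded orthogonal constraints differently; the soft penalty below is only a hedged Python illustration of the orthogonality idea.

```python
import torch

# Hedged sketch: a soft penalty ||W^T W - I||_F^2 that pushes weight
# matrices toward orthogonality (Bort's real method differs).
def orthogonality_penalty(model):
    penalty = torch.zeros(())
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            W = module.weight                       # shape (out, in)
            gram = W.T @ W                          # (in, in) Gram matrix
            eye = torch.eye(gram.shape[0])
            penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# Usage: total_loss = task_loss + 1e-4 * orthogonality_penalty(model)
```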
- Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing [82.67716657524251]
We present a counterfactual framework that allows us to study the robustness of neural networks with respect to naturalistic variations.
Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers.
arXiv Detail & Related papers (2022-11-29T18:59:23Z)
- Revisiting Sparse Convolutional Model for Visual Recognition [40.726494290922204]
This paper revisits sparse convolutional modeling for image classification.
We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100, and ImageNet datasets.
arXiv Detail & Related papers (2022-10-24T04:29:21Z)
- Understanding Adversarial Robustness from Feature Maps of Convolutional Layers [23.42376264664302]
The robustness of a neural network to perturbations mainly relies on two factors: model capacity and the anti-perturbation ability of its feature maps.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
arXiv Detail & Related papers (2022-02-25T00:14:59Z)
- Deriving Explanation of Deep Visual Saliency Models [6.808418311272862]
We develop a technique to derive explainable saliency models from their corresponding deep neural network-based saliency models.
We consider two state-of-the-art deep saliency models, namely UNISAL and MSI-Net for our interpretation.
We also build our own deep saliency model named cross-concatenated multi-scale residual block based network (CMRNet) for saliency prediction.
arXiv Detail & Related papers (2021-09-08T12:22:32Z)
- Leveraging Sparse Linear Layers for Debuggable Deep Networks [86.94586860037049]
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
The resulting sparse explanations can help identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks; a sketch of this sparse-probe recipe follows the entry.
arXiv Detail & Related papers (2021-05-11T08:15:25Z)
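A hedged Python sketch of the sparse-probe recipe (not the authors' code; the synthetic features and labels are placeholders): fit an L1-regularized linear model on frozen deep features so each decision depends on only a few, inspectable feature directions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: an L1-regularized linear probe over frozen deep features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 256))          # stand-in for penultimate-layer features
labels = (feats[:, 0] + feats[:, 3] > 0).astype(int)

probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(feats, labels)

# Sparsity is what makes this debuggable: each prediction traces back to a
# handful of named deep features.
used = np.flatnonzero(probe.coef_[0])
print("features used:", used, "of", feats.shape[1])
```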
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, known as soft routing, rather than hard binary decisions; a minimal soft-routing node is sketched after the entry.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
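A hedged Python sketch of a single soft-routing node (illustrative, not the paper's tree): the node routes a sample left or right with a learned probability instead of a hard threshold, keeping the whole tree differentiable.

```python
import torch

# Hedged sketch of one soft-routing decision node (not the paper's code).
class SoftNode(torch.nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.gate = torch.nn.Linear(in_dim, 1)          # learned routing gate
        self.left_leaf = torch.nn.Parameter(torch.zeros(1))
        self.right_leaf = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x):
        p_left = torch.sigmoid(self.gate(x))            # soft routing probability
        # Output is the probability-weighted mix of the two leaf values.
        return p_left * self.left_leaf + (1 - p_left) * self.right_leaf

node = SoftNode(in_dim=4)
out = node(torch.randn(8, 4))                           # (8, 1) soft predictions
```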
- Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model the data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.