Interpretable Additive Recurrent Neural Networks For Multivariate
Clinical Time Series
- URL: http://arxiv.org/abs/2109.07602v1
- Date: Wed, 15 Sep 2021 22:30:19 GMT
- Title: Interpretable Additive Recurrent Neural Networks For Multivariate
Clinical Time Series
- Authors: Asif Rahman, Yale Chang, Jonathan Rubin
- Abstract summary: We present the Interpretable-RNN (I-RNN) that balances model complexity and accuracy by forcing the relationship between variables in the model to be additive.
I-RNN specifically captures the unique characteristics of clinical time series, which are unevenly sampled in time, asynchronously acquired, and have missing data.
We evaluate the I-RNN model on the Physionet 2012 Challenge dataset to predict in-hospital mortality, and on a real-world clinical decision support task: predicting hemodynamic interventions in the intensive care unit.
- Score: 4.125698836261585
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Time series models with recurrent neural networks (RNNs) can have high
accuracy but are unfortunately difficult to interpret as a result of
feature-interactions, temporal-interactions, and non-linear transformations.
Interpretability is important in domains like healthcare, where constructing
models that provide insight into the relationships they have learned is
required to validate and trust model predictions. We want accurate time series
models where users can understand the contribution of individual input
features. We present the Interpretable-RNN (I-RNN) that balances model
complexity and accuracy by forcing the relationship between variables in the
model to be additive. Interactions are restricted between hidden states of the
RNN and additively combined at the final step. I-RNN specifically captures the
unique characteristics of clinical time series, which are unevenly sampled in
time, asynchronously acquired, and have missing data. Importantly, the hidden
state activations represent feature coefficients that correlate with the
prediction target and can be visualized as risk curves that capture the global
relationship between individual input features and the outcome. We evaluate the
I-RNN model on the Physionet 2012 Challenge dataset to predict in-hospital
mortality, and on a real-world clinical decision support task: predicting
hemodynamic interventions in the intensive care unit. I-RNN provides
explanations in the form of global and local feature importances comparable to
highly intelligible models like decision trees trained on hand-engineered
features while significantly outperforming them. I-RNN remains intelligible
while providing accuracy comparable to state-of-the-art decay-based and
interpolation-based recurrent time series models. The experimental results on
real-world clinical datasets refute the myth that there is a tradeoff between
accuracy and interpretability.
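The additive constraint described above is straightforward to express in code. Below is a minimal PyTorch sketch of the idea as stated in the abstract: each input feature is processed by its own small RNN, per-feature hidden states are mapped to scalar contributions, and the contributions are summed only at the final step, so each feature's share of the risk score can be read off directly. All names (`AdditiveRNN`, `feature_rnns`, `heads`) are illustrative, not the authors' implementation, and the paper's handling of irregular sampling and missingness is omitted.

```python
import torch
import torch.nn as nn

class AdditiveRNN(nn.Module):
    """Illustrative additive RNN: one independent RNN per input feature,
    with per-feature contributions summed only at the final step."""

    def __init__(self, n_features: int, hidden_size: int = 16):
        super().__init__()
        # One small GRU per feature: no feature interactions inside the RNNs.
        self.feature_rnns = nn.ModuleList(
            [nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
             for _ in range(n_features)]
        )
        # Each feature's final hidden state maps to a scalar contribution.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, 1) for _ in range(n_features)]
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_features)
        contributions = []
        for i, (rnn, head) in enumerate(zip(self.feature_rnns, self.heads)):
            _, h_n = rnn(x[:, :, i : i + 1])      # h_n: (1, batch, hidden)
            contributions.append(head(h_n[-1]))   # (batch, 1) per feature
        contributions = torch.cat(contributions, dim=1)  # (batch, n_features)
        # Additive combination: the logit is a sum of per-feature terms,
        # so `contributions` doubles as a local feature-importance vector.
        logit = contributions.sum(dim=1, keepdim=True)
        return torch.sigmoid(logit), contributions

# Example: 4 clinical variables observed over 48 hourly steps.
model = AdditiveRNN(n_features=4)
risk, contribs = model(torch.randn(8, 48, 4))
```

In this reading, plotting each feature's contribution against the feature's observed values over a held-out set approximates the global risk curves the abstract describes, while `contribs` for a single patient gives local feature importances.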
Related papers
- Explainable Spatio-Temporal GCNNs for Irregular Multivariate Time Series: Architecture and Application to ICU Patient Data [7.433698348783128]
We present XST-GCNN (eXplainable Spatio-Temporal Graph Convolutional Neural Network), a novel architecture for processing heterogeneous and irregular Multivariate Time Series (MTS) data.
Our approach captures temporal and feature dependencies within a unified spatio-temporal pipeline by leveraging a GCNN.
We evaluate XST-GCNN using real-world Electronic Health Record data to predict Multidrug Resistance (MDR) in ICU patients.
arXiv Detail & Related papers (2024-11-01T22:53:17Z) - Probabilistic Neural Networks (PNNs) for Modeling Aleatoric Uncertainty
in Scientific Machine Learning [2.348041867134616]
This paper investigates the use of probabilistic neural networks (PNNs) to model aleatoric uncertainty.
PNNs generate probability distributions for the target variable, allowing the determination of both predicted means and intervals in regression scenarios.
In a real-world scientific machine learning context, PNNs yield remarkably accurate output mean estimates with R-squared scores approaching 0.97, and their predicted intervals exhibit a high correlation coefficient of nearly 0.80.
arXiv Detail & Related papers (2024-02-21T17:15:47Z) - Uncertainty-Aware Deep Attention Recurrent Neural Network for
Heterogeneous Time Series Imputation [0.25112747242081457]
Missingness is ubiquitous in multivariate time series and poses an obstacle to reliable downstream analysis.
We propose DEep Attention Recurrent Imputation (DEARI), which jointly estimates missing values and their associated uncertainty.
Experiments show that DEARI surpasses the SOTA in diverse imputation tasks using real-world datasets.
arXiv Detail & Related papers (2024-01-04T13:21:11Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Probabilistic AutoRegressive Neural Networks for Accurate Long-range
Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are typically proposed without considering the distribution shift between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when these correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z) - ARM-Net: Adaptive Relation Modeling Network for Structured Data [29.94433633729326]
ARM-Net is an adaptive relation modeling network tailored for structured data; ARMOR is a lightweight framework based on ARM-Net for relational data.
We show that ARM-Net consistently outperforms existing models and provides more interpretable predictions.
arXiv Detail & Related papers (2021-07-05T07:37:24Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (see the sketch after this list).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)