SurvSHAP(t): Time-dependent explanations of machine learning survival
models
- URL: http://arxiv.org/abs/2208.11080v1
- Date: Tue, 23 Aug 2022 17:01:14 GMT
- Title: SurvSHAP(t): Time-dependent explanations of machine learning survival
models
- Authors: Mateusz Krzyziński, Mikołaj Spytek, Hubert Baniecki, Przemysław Biecek
- Abstract summary: We introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models.
Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect.
We provide an accessible implementation of time-dependent explanations in Python.
- Score: 6.950862982117125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine and deep learning survival models demonstrate similar or even
improved time-to-event prediction capabilities compared to classical
statistical learning methods, yet are too complex to be interpreted by humans.
Several model-agnostic explanations are available to overcome this issue;
however, none directly explain the survival function prediction. In this paper,
we introduce SurvSHAP(t), the first time-dependent explanation that allows for
interpreting survival black-box models. It is based on SHapley Additive
exPlanations with solid theoretical foundations and a broad adoption among
machine learning practitioners. The proposed method aims to enhance precision
diagnostics and support domain experts in making decisions. Experiments on
synthetic and medical data confirm that SurvSHAP(t) can detect variables with a
time-dependent effect, and its aggregation is a better determinant of the
importance of variables for a prediction than SurvLIME. SurvSHAP(t) is
model-agnostic and can be applied to all models with functional output. We
provide an accessible implementation of time-dependent explanations in Python
at http://github.com/MI2DataLab/survshap .
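The core idea admits a compact sketch: Shapley-style attributions are computed for the predicted survival function at each point of a time grid, yielding one attribution curve per variable, and aggregating the absolute curves over time gives a single importance score per variable. The snippet below is a minimal, hedged illustration of that idea using a permutation-sampling approximation of Shapley values; it is not the API of the survshap package, and the names predict_survival, background, and times are placeholders for a model's survival-function predictor, a reference dataset, and the evaluation time grid.

```python
# Minimal sketch (not the survshap package API): time-dependent, SHAP-style
# attributions for a survival model's predicted survival function S(t | x).
# Assumptions: predict_survival(X, times) returns an array of shape
# (n_samples, n_times); Shapley values are approximated by permutation
# sampling and evaluated independently at every time point.
import numpy as np

def survshap_t(predict_survival, x, background, times, n_permutations=100, seed=0):
    """Return phi with shape (n_features, n_times): the estimated contribution
    of each feature of x to S(t | x), relative to background observations."""
    rng = np.random.default_rng(seed)
    p, n_times = x.shape[0], len(times)
    phi = np.zeros((p, n_times))
    for _ in range(n_permutations):
        order = rng.permutation(p)
        # Start from a random background observation and switch its features to x one by one.
        z = background[rng.integers(len(background))].astype(float)
        prev = predict_survival(z[None, :], times)[0]
        for j in order:
            z[j] = x[j]
            curr = predict_survival(z[None, :], times)[0]
            phi[j] += curr - prev  # marginal contribution at every time point
            prev = curr
    return phi / n_permutations

def aggregate_importance(phi):
    # One importance score per variable: mean absolute contribution over the time grid.
    return np.abs(phi).mean(axis=1)
```

Because the attributions are computed per time point, a variable whose contribution changes sign or magnitude over the follow-up period appears as a non-constant curve, which is the kind of time-dependent effect the paper's experiments are designed to detect; the aggregation is what the abstract compares against SurvLIME-based importances.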
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which underline feature or temporal importance, often require expert knowledge.
Evaluating forecast natural language explanations (NLEs) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Interpretable Prediction and Feature Selection for Survival Analysis [18.987678432106563]
We present DyS (pronounced "dice"), a new survival analysis model that achieves both strong discrimination and interpretability.
DyS is a feature-sparse Generalized Additive Model, combining feature selection and interpretable prediction into one model.
arXiv Detail & Related papers (2024-04-23T02:36:54Z) - Interpreting Differentiable Latent States for Healthcare Time-series
Data [4.581930518669275]
We present a concise algorithm that allows for interpreting latent states using highly related input features.
We demonstrate this approach enables the identification of a daytime behavioral pattern for predicting nocturnal behavior in a real-world healthcare dataset.
arXiv Detail & Related papers (2023-11-29T11:48:16Z) - Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
arXiv Detail & Related papers (2023-10-04T07:14:43Z) - TsSHAP: Robust model agnostic feature-based explainability for time
series forecasting [6.004928390125367]
We propose a feature-based explainability algorithm, TsSHAP, that can explain the forecast of any black-box forecasting model.
We formalize the notion of local, semi-local, and global explanations in the context of time series forecasting.
arXiv Detail & Related papers (2023-03-22T05:14:36Z) - Hypothesis Testing and Machine Learning: Interpreting Variable Effects
in Deep Artificial Neural Networks using Cohen's f2 [0.0]
Deep artificial neural networks show high predictive performance in many fields.
But they do not afford statistical inferences and their black-box operations are too complicated for humans to comprehend.
This article extends current XAI methods and develops a model agnostic hypothesis testing framework for machine learning.
arXiv Detail & Related papers (2023-02-02T20:43:37Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision-making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - Hessian-based toolbox for reliable and interpretable machine learning in
physics [58.720142291102135]
We present a toolbox for interpretability, reliability, and extrapolation, agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - Towards a Rigorous Evaluation of Explainability for Multivariate Time
Series [5.786452383826203]
The aim of this study was to achieve and evaluate model-agnostic explainability in a time series forecasting problem.
The solution involved framing the problem as a time series forecasting task to predict sales deals.
The explanations produced by LIME and SHAP greatly helped lay humans in understanding the predictions made by the machine learning model.
arXiv Detail & Related papers (2021-04-06T17:16:36Z) - Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or decreasing specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z) - Evaluating Explainable AI: Which Algorithmic Explanations Help Users
Predict Model Behavior? [97.77183117452235]
We carry out human subject tests to isolate the effect of algorithmic explanations on model interpretability.
Clear evidence of method effectiveness is found in very few cases.
Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability.
arXiv Detail & Related papers (2020-05-04T20:35:17Z)