survex: an R package for explaining machine learning survival models
- URL: http://arxiv.org/abs/2308.16113v2
- Date: Tue, 21 Nov 2023 15:50:05 GMT
- Title: survex: an R package for explaining machine learning survival models
- Authors: Mikołaj Spytek and Mateusz Krzyziński and Sophie Hanna Langbein
and Hubert Baniecki and Marvin N. Wright and Przemysław Biecek
- Abstract summary: We introduce the survex R package, which provides a framework for explaining any survival model by applying artificial intelligence techniques.
The capabilities of the proposed software encompass understanding and diagnosing survival models, which can lead to their improvement.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their flexibility and superior performance, machine learning models
frequently complement and outperform traditional statistical survival models.
However, their widespread adoption is hindered by a lack of user-friendly tools
to explain their internal operations and prediction rationales. To tackle this
issue, we introduce the survex R package, which provides a cohesive framework
for explaining any survival model by applying explainable artificial
intelligence techniques. The capabilities of the proposed software encompass
understanding and diagnosing survival models, which can lead to their
improvement. By revealing insights into the decision-making process, such as
variable effects and importances, survex enables the assessment of model
reliability and the detection of biases. Thus, transparency and responsibility
may be promoted in sensitive areas, such as biomedical research and healthcare
applications.
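The workflow described in the abstract can be illustrated with a minimal sketch. This is an assumed usage example, not code from the paper: it relies on survex's DALEX-style interface (`explain()`, `model_parts()`) and uses a `ranger` random survival forest on the `veteran` dataset from the survival package as a stand-in for any survival model.

```r
# Minimal sketch of the survex workflow (assumed API, DALEX-style).
library(survex)
library(survival)
library(ranger)

# Fit any survival model, e.g. a random survival forest.
model <- ranger(Surv(time, status) ~ ., data = veteran)

# Wrap the model in an explainer; data holds predictors only,
# y holds the survival outcome (time and event indicator).
explainer <- explain(model,
                     data = veteran[, -c(3, 4)],
                     y = Surv(veteran$time, veteran$status))

# Time-dependent permutation variable importance, then plot it.
mp <- model_parts(explainer)
plot(mp)
```

The same explainer object can then be passed to the package's other functions to inspect variable effects or diagnose individual predictions.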
Related papers
- Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective [111.58315434849047]
The robustness of neural information retrieval (IR) models has garnered significant attention.
We view the robustness of IR to be a multifaceted concept, emphasizing its necessity against adversarial attacks, out-of-distribution (OOD) scenarios and performance variance.
We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models.
arXiv Detail & Related papers (2024-07-09T16:07:01Z)
- LaPLACE: Probabilistic Local Model-Agnostic Causal Explanations [1.0370398945228227]
We introduce LaPLACE-explainer, designed to provide probabilistic cause-and-effect explanations for machine learning models.
The LaPLACE-Explainer component leverages the concept of a Markov blanket to establish statistical boundaries between relevant and non-relevant features.
Our approach offers causal explanations and outperforms LIME and SHAP in terms of local accuracy and consistency of explained features.
arXiv Detail & Related papers (2023-10-01T04:09:59Z)
- QXAI: Explainable AI Framework for Quantitative Analysis in Patient Monitoring Systems [9.29069202652354]
An Explainable AI for Quantitative analysis (QXAI) framework is proposed with post-hoc model explainability and intrinsic explainability for regression and classification tasks.
We adopted the artificial neural networks (ANN) and attention-based Bidirectional LSTM (BiLSTM) models for the prediction of heart rate and classification of physical activities based on sensor data.
arXiv Detail & Related papers (2023-09-19T03:50:30Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models [1.8752655643513647]
XAI tools can increase a model's vulnerability to model extraction attacks, which is a concern when model owners prefer black-box access.
We propose a novel retraining (learning) based model extraction attack framework against interpretable models under black-box settings.
We show that AUTOLYCUS is highly effective, requiring significantly fewer queries compared to state-of-the-art attacks.
arXiv Detail & Related papers (2023-02-04T13:23:39Z)
- ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models [6.279863832853343]
ComplAI is a unique framework to enable, observe, analyze and quantify explainability, robustness, performance, fairness, and model behavior.
It evaluates different supervised machine learning models not just by their ability to make correct predictions but from an overall responsibility perspective.
arXiv Detail & Related papers (2022-12-30T08:48:19Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- COVID-Net Biochem: An Explainability-driven Framework to Building Machine Learning Models for Predicting Survival and Kidney Injury of COVID-19 Patients from Clinical and Biochemistry Data [66.43957431843324]
We introduce COVID-Net Biochem, a versatile and explainable framework for constructing machine learning models.
We apply this framework to predict COVID-19 patient survival and the likelihood of developing Acute Kidney Injury during hospitalization.
arXiv Detail & Related papers (2022-04-24T07:38:37Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- (Un)fairness in Post-operative Complication Prediction Models [20.16366948502659]
We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness of a variety of algorithms.
Our approach creates transparent documentation of potential bias so that the users can apply the model carefully.
arXiv Detail & Related papers (2020-11-03T22:11:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.