survex: an R package for explaining machine learning survival models
- URL: http://arxiv.org/abs/2308.16113v2
- Date: Tue, 21 Nov 2023 15:50:05 GMT
- Title: survex: an R package for explaining machine learning survival models
- Authors: Mikołaj Spytek, Mateusz Krzyziński, Sophie Hanna Langbein, Hubert Baniecki, Marvin N. Wright, and Przemysław Biecek
- Abstract summary: We introduce the survex R package, which provides a framework for explaining any survival model by applying artificial intelligence techniques.
The capabilities of the proposed software encompass understanding and diagnosing survival models, which can lead to their improvement.
- Score: 8.028581359682239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their flexibility and superior performance, machine learning models
frequently complement and outperform traditional statistical survival models.
However, their widespread adoption is hindered by a lack of user-friendly tools
to explain their internal operations and prediction rationales. To tackle this
issue, we introduce the survex R package, which provides a cohesive framework
for explaining any survival model by applying explainable artificial
intelligence techniques. The capabilities of the proposed software encompass
understanding and diagnosing survival models, which can lead to their
improvement. By revealing insights into the decision-making process, such as
variable effects and importances, survex enables the assessment of model
reliability and the detection of biases. Thus, transparency and responsibility
may be promoted in sensitive areas, such as biomedical research and healthcare
applications.
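A minimal usage sketch is given below. It assumes the DALEX-style workflow that survex advertises: an explain() constructor that wraps a fitted survival model into an explainer object, which is then passed to model-level and prediction-level functions such as model_parts(), model_profile(), and predict_parts(). The dataset, model choice, and exact argument defaults are illustrative and should be checked against the package documentation.

  # Sketch only: fit an example survival model and wrap it in a survex explainer.
  library(survex)
  library(survival)   # provides Surv() and the veteran dataset used here
  library(ranger)     # random survival forest, one of many supported model types

  vet <- survival::veteran
  rsf <- ranger(Surv(time, status) ~ ., data = vet)

  # Create the explainer: predictors without the survival columns (time, status),
  # and the survival outcome passed separately as a Surv object.
  expl <- explain(rsf,
                  data = vet[, -c(3, 4)],
                  y = Surv(vet$time, vet$status))

  # Model-level diagnostics and explanations
  model_performance(expl)                          # time-dependent performance metrics
  vimp <- model_parts(expl)                        # permutation variable importance
  prof <- model_profile(expl, variables = "karno") # variable effect profiles
  plot(vimp)
  plot(prof)

  # Prediction-level (local) explanation for a single patient
  pp <- predict_parts(expl, new_observation = vet[1, -c(3, 4)])
  plot(pp)

The function names mirror the DALEX-style grammar of explanations, so the same explainer object can be reused for both model-level and prediction-level analyses.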
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks [9.93411316886105]
Self-eXplainable AI (S-XAI) incorporates explainability directly into the training process of deep learning models.
This survey presents a comprehensive review across various image modalities and clinical applications.
arXiv Detail & Related papers (2024-10-03T09:29:28Z)
- Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective [111.58315434849047]
The robustness of neural information retrieval (IR) models has garnered significant attention.
We view the robustness of IR to be a multifaceted concept, emphasizing its necessity against adversarial attacks, out-of-distribution (OOD) scenarios and performance variance.
We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models.
arXiv Detail & Related papers (2024-07-09T16:07:01Z)
- X-SHIELD: Regularization for eXplainable Artificial Intelligence [9.658282892513386]
XAI may be used to improve model performance while also boosting explainability.
Within this family of approaches, we propose XAI-SHIELD (X-SHIELD), a regularization technique for explainable artificial intelligence.
The improvement is validated through experiments comparing models with and without the X-SHIELD regularization.
arXiv Detail & Related papers (2024-04-03T09:56:38Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We assess the infrastructure required to monitor the outputs of a machine learning algorithm and present two scenarios with examples of model monitoring and updating.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, covering local importance, global importance, and surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
arXiv Detail & Related papers (2023-02-23T15:28:36Z)
- ComplAI: Theory of A Unified Framework for Multi-factor Assessment of Black-Box Supervised Machine Learning Models [6.279863832853343]
ComplAI is a unique framework to enable, observe, analyze and quantify explainability, robustness, performance, fairness, and model behavior.
It evaluates different supervised machine learning models not just on their ability to make correct predictions but from an overall responsibility perspective.
arXiv Detail & Related papers (2022-12-30T08:48:19Z)
- COVID-Net Biochem: An Explainability-driven Framework to Building Machine Learning Models for Predicting Survival and Kidney Injury of COVID-19 Patients from Clinical and Biochemistry Data [66.43957431843324]
We introduce COVID-Net Biochem, a versatile and explainable framework for constructing machine learning models.
We apply this framework to predict COVID-19 patient survival and the likelihood of developing Acute Kidney Injury during hospitalization.
arXiv Detail & Related papers (2022-04-24T07:38:37Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- (Un)fairness in Post-operative Complication Prediction Models [20.16366948502659]
We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness of a variety of algorithms.
Our approach creates transparent documentation of potential bias so that the users can apply the model carefully.
arXiv Detail & Related papers (2020-11-03T22:11:19Z)