Interpretable Machine Learning Classifiers for Brain Tumour Survival
Prediction
- URL: http://arxiv.org/abs/2106.09424v1
- Date: Thu, 17 Jun 2021 12:17:10 GMT
- Title: Interpretable Machine Learning Classifiers for Brain Tumour Survival
Prediction
- Authors: Colleen E. Charlton and Michael Tin Chung Poon and Paul M. Brennan and
Jacques D. Fleuriot
- Abstract summary: We use a novel brain tumour dataset to compare two interpretable rule list models against popular machine learning approaches for brain tumour survival prediction.
We demonstrate that rule list algorithms produced simple decision lists that align with clinical expertise.
- Score: 0.45880283710344055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prediction of survival in patients diagnosed with a brain tumour is
challenging because of heterogeneous tumour behaviours and responses to
treatment. Better estimations of prognosis would support treatment planning and
patient support. Advances in machine learning have informed development of
clinical predictive models, but their integration into clinical practice is
almost non-existent. One reason for this is the lack of interpretability of
models. In this paper, we use a novel brain tumour dataset to compare two
interpretable rule list models against popular machine learning approaches for
brain tumour survival prediction. All models are quantitatively evaluated using
standard performance metrics. The rule lists are also qualitatively assessed
for their interpretability and clinical utility. The interpretability of the
black box machine learning models is evaluated using two post-hoc explanation
techniques, LIME and SHAP. Our results show that the rule lists were only
slightly outperformed by the black box models. We demonstrate that rule list
algorithms produced simple decision lists that align with clinical expertise.
By comparison, post-hoc interpretability methods applied to black box models
may produce unreliable explanations of local model predictions. Model
interpretability is essential for understanding differences in predictive
performance and for integration into clinical practice.
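
To make the contrast concrete, below is a minimal, self-contained sketch of the two ingredients the abstract compares: a black-box classifier explained post hoc with SHAP, and a transparent if-then rule list. The feature names, thresholds and synthetic data are invented for illustration; they are not the study's variables, models or results.

```python
# Illustrative sketch only: black-box model + post-hoc SHAP vs. a hand-written
# rule list. Features, thresholds and data are invented, not from the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import shap  # pip install shap

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "tumour_grade": rng.integers(1, 5, n),        # 1-4, higher = more aggressive
    "karnofsky_score": rng.integers(40, 101, n),  # performance status
    "surgical_resection": rng.integers(0, 2, n),  # 0/1
})
# Synthetic one-year survival label loosely tied to the features above.
logit = (-0.04 * X["age"] - 0.8 * X["tumour_grade"]
         + 0.03 * X["karnofsky_score"] + 0.7 * X["surgical_resection"])
y = (logit + rng.normal(0, 1, n) > logit.median()).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Black-box baseline: random forest, explained post hoc with SHAP.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Random forest AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))

explainer = shap.TreeExplainer(rf)
shap_vals = explainer.shap_values(X_te.iloc[:1])
# Older SHAP versions return one array per class; newer ones return a single array.
local = shap_vals[1] if isinstance(shap_vals, list) else shap_vals
print("Local SHAP attribution for one patient:")
print(np.asarray(local).squeeze())

# Transparent alternative: a decision (rule) list read top to bottom;
# the first matching rule fires.
def rule_list_predict(row):
    if row["tumour_grade"] >= 4 and row["karnofsky_score"] < 70:
        return 0  # predict death within one year
    if row["surgical_resection"] == 1 and row["age"] < 65:
        return 1  # predict survival beyond one year
    return int(row["karnofsky_score"] >= 80)  # default rule

preds = X_te.apply(rule_list_predict, axis=1)
print("Rule-list accuracy:", (preds == y_te).mean())
```

The rule list can be read and checked against clinical expertise directly, whereas the forest's behaviour is only accessible through post-hoc attributions such as the SHAP values above, which the paper finds may be unreliable for individual predictions.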
Related papers
- Predictive Modeling for Breast Cancer Classification in the Context of Bangladeshi Patients: A Supervised Machine Learning Approach with Explainable AI [0.0]
We evaluate and compare the classification accuracy, precision, recall, and F1 scores of five different machine learning methods.
XGBoost achieved the best accuracy, at 97%.
arXiv Detail & Related papers (2024-04-06T17:23:21Z)
- Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach [0.0]
We use Human-in-the-Loop techniques and medical guidelines as a source of domain knowledge to establish the importance of the features relevant to pancreatic cancer treatment.
We propose the use of similarity measures such as the weighted Jaccard Similarity coefficient to facilitate interpretation of explanatory results (a minimal sketch of this coefficient follows the related-papers list below).
arXiv Detail & Related papers (2024-03-28T20:11:34Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- Global and Local Interpretation of black-box Machine Learning models to determine prognostic factors from early COVID-19 data [0.0]
We analyze COVID-19 blood work data with some of the popular machine learning models.
We employ state-of-the-art post-hoc local interpretability techniques and symbolic metamodeling to draw interpretable conclusions.
We explore one of the most recent techniques called symbolic metamodeling to find the mathematical expression of the machine learning models for COVID-19.
arXiv Detail & Related papers (2021-09-10T20:00:47Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Patient-independent Epileptic Seizure Prediction using Deep Learning Models [39.19336481493405]
The purpose of a seizure prediction system is to successfully identify the pre-ictal brain stage, which occurs before a seizure event.
Patient-independent seizure prediction models are designed to offer accurate performance across multiple subjects within a dataset.
We propose two patient-independent deep learning architectures with different learning strategies that can learn a global function utilizing data from multiple subjects.
arXiv Detail & Related papers (2020-11-18T23:13:48Z)
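
The weighted Jaccard Similarity coefficient mentioned in the Human-in-the-Loop entry above reduces to a one-line formula, J_w(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i) for non-negative vectors. A minimal sketch, assuming only NumPy and applied to an invented pair of feature-importance vectors (e.g. model-derived versus guideline-derived importances):

```python
# Sketch of the weighted Jaccard similarity between two non-negative
# feature-importance vectors; the importance values below are invented.
import numpy as np

def weighted_jaccard(x, y):
    """J_w(x, y) = sum_i min(x_i, y_i) / sum_i max(x_i, y_i)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    denom = np.maximum(x, y).sum()
    return np.minimum(x, y).sum() / denom if denom > 0 else 1.0

model_importance = [0.40, 0.30, 0.20, 0.10]   # e.g. from a fitted model
expert_importance = [0.35, 0.25, 0.30, 0.10]  # e.g. elicited from guidelines
print(weighted_jaccard(model_importance, expert_importance))  # ~0.82
```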