Concept-based Explainable Malignancy Scoring on Pulmonary Nodules in CT Images
- URL: http://arxiv.org/abs/2405.17483v1
- Date: Fri, 24 May 2024 13:36:44 GMT
- Title: Concept-based Explainable Malignancy Scoring on Pulmonary Nodules in CT Images
- Authors: Rinat I. Dumaev, Sergei A. Molodyakov, Lev V. Utkin
- Abstract summary: An interpretable model based on generalized additive models and concept-based learning is proposed.
The model detects a set of clinically significant attributes in addition to the final regression score and learns the association between the lung nodule attributes and the final diagnosis decision.
- Score: 2.2120851074630177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To increase the transparency of modern computer-aided diagnosis (CAD) systems for assessing the malignancy of lung nodules, an interpretable model based on generalized additive models and concept-based learning is proposed. The model detects a set of clinically significant attributes in addition to the final malignancy regression score and learns both the association between the lung nodule attributes and the final diagnosis decision and their contributions to that decision. The proposed concept-based learning framework provides human-readable explanations in terms of different concepts (numerical and categorical), their values, and their contribution to the final prediction. Numerical experiments with the LIDC-IDRI dataset demonstrate that the diagnosis results obtained using the proposed model, which explicitly explores internal relationships, are in line with similar patterns observed in clinical practice. Additionally, the proposed model shows competitive classification and nodule attribute scoring performance, highlighting its potential for effective decision-making in lung nodule diagnosis.
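As a hypothetical illustration (not the authors' released code), the concept-plus-GAM idea described in the abstract can be sketched as follows: the model first predicts clinically meaningful nodule attributes, then combines them additively so each attribute's contribution to the malignancy score is explicit. The concept names follow the LIDC-IDRI attribute annotations; the shape functions and weights below are toy assumptions.

```python
# Hypothetical sketch of a concept-based additive (GAM-style) scorer.
# Concept names follow LIDC-IDRI nodule attributes; the shape functions
# are toy assumptions, not the authors' learned model.

CONCEPTS = ["subtlety", "calcification", "sphericity", "margin",
            "lobulation", "spiculation", "texture"]

def additive_malignancy_score(concept_values, shape_fns, bias=0.0):
    """GAM-style score: bias + sum_k f_k(c_k).

    concept_values: dict mapping concept name -> predicted attribute value
    shape_fns: dict mapping concept name -> 1-D shape function f_k
    Returns (score, per-concept contributions), so each concept's effect
    on the final prediction is directly human-readable.
    """
    contributions = {k: shape_fns[k](v) for k, v in concept_values.items()}
    return bias + sum(contributions.values()), contributions

# Toy shape functions: e.g. stronger spiculation raises the score,
# while calcification lowers it (calcified nodules are usually benign).
shape_fns = {k: (lambda v: 0.1 * v) for k in CONCEPTS}
shape_fns["spiculation"] = lambda v: 0.4 * v
shape_fns["calcification"] = lambda v: -0.2 * v

values = {k: 2.0 for k in CONCEPTS}
score, contribs = additive_malignancy_score(values, shape_fns, bias=1.0)
print(round(score, 2), max(contribs, key=contribs.get))  # → 2.4 spiculation
```

Because the score is a sum of per-concept terms, the explanation "spiculation contributed +0.8 to the prediction" falls out of the model structure itself rather than from a post-hoc attribution method.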
Related papers
- Methodological Explainability Evaluation of an Interpretable Deep Learning Model for Post-Hepatectomy Liver Failure Prediction Incorporating Counterfactual Explanations and Layerwise Relevance Propagation: A Prospective In Silico Trial [13.171582596404313]
We developed a variational autoencoder-multilayer perceptron (VAE-MLP) model for preoperative PHLF prediction.
This model integrated counterfactuals and layerwise relevance propagation (LRP) to provide insights into its decision-making mechanism.
Results from the three-track in silico clinical trial showed that clinicians' prediction accuracy and confidence increased when AI explanations were provided.
arXiv Detail & Related papers (2024-08-07T13:47:32Z) - Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach [0.0]
We use human-in-the-loop techniques and medical guidelines as a source of domain knowledge to establish the importance of the different features relevant to establishing a pancreatic cancer treatment.
We propose the use of similarity measures such as the weighted Jaccard Similarity coefficient to facilitate interpretation of explanatory results.
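The weighted Jaccard similarity coefficient mentioned above compares two non-negative weight vectors (e.g. model-derived vs. guideline-derived feature importances) as the ratio of element-wise minima to element-wise maxima. A minimal sketch, with toy importance values assumed for illustration:

```python
def weighted_jaccard(x, y):
    """Weighted Jaccard coefficient: sum_i min(x_i, y_i) / sum_i max(x_i, y_i).

    x, y: equal-length sequences of non-negative weights.
    Returns 1.0 for two all-zero vectors (they are trivially identical).
    """
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den if den else 1.0

# Toy example: model-derived vs. guideline-derived feature importances.
model_importance = [0.5, 0.3, 0.2]
guideline_importance = [0.4, 0.4, 0.2]
print(weighted_jaccard(model_importance, guideline_importance))  # ≈ 0.818
```

A value near 1 indicates that the model's explanation weights features similarly to the medical guidelines; a value near 0 indicates disagreement.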
arXiv Detail & Related papers (2024-03-28T20:11:34Z) - Histopathologic Cancer Detection [0.0]
This work uses the PatchCamelyon benchmark dataset and trains multilayer-perceptron and convolutional models on it, evaluating performance in terms of precision, recall, F1 score, accuracy, and AUC.
It also introduces ResNet50 and InceptionNet models with data augmentation, where ResNet50 beats the state-of-the-art model.
arXiv Detail & Related papers (2023-11-13T19:51:46Z) - Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision [12.306688233127312]
We introduce a human-in-the-loop framework in the model training process.
Our method can automatically discover confounding factors.
It is capable of learning confounding concepts using easily obtained concept exemplars.
arXiv Detail & Related papers (2023-03-02T01:02:18Z) - Faithful learning with sure data for lung nodule diagnosis [34.55176532924471]
We propose a collaborative learning framework to facilitate sure nodule classification.
A loss function is designed to learn reliable features by introducing interpretability constraints regulated with nodule segmentation maps.
arXiv Detail & Related papers (2022-02-25T06:33:11Z) - Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model whose backbone is a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - What Do You See in this Patient? Behavioral Testing of Clinical NLP Models [69.09570726777817]
We introduce an extendable testing framework that evaluates the behavior of clinical outcome models regarding changes of the input.
We show that model behavior varies drastically even when fine-tuned on the same data and that allegedly best-performing models have not always learned the most medically plausible patterns.
arXiv Detail & Related papers (2021-11-30T15:52:04Z) - BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z) - A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict CC diagnosis of HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z) - Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.