Towards Trust of Explainable AI in Thyroid Nodule Diagnosis
- URL: http://arxiv.org/abs/2303.04731v1
- Date: Wed, 8 Mar 2023 17:18:13 GMT
- Title: Towards Trust of Explainable AI in Thyroid Nodule Diagnosis
- Authors: Truong Thanh Hung Nguyen, Van Binh Truong, Vo Thanh Khang Nguyen, Quoc Hung Cao, Quoc Khanh Nguyen
- Abstract summary: We apply state-of-the-art eXplainable artificial intelligence (XAI) methods to explain the prediction of the black-box AI models in the thyroid nodule diagnosis application.
We propose new statistic-based XAI methods, namely Kernel Density Estimation and Density map, to explain cases in which no nodule is detected.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to explain the prediction of deep learning models to end-users is
an important feature to leverage the power of artificial intelligence (AI) for
the medical decision-making process, which is usually considered
non-transparent and challenging to comprehend. In this paper, we apply
state-of-the-art eXplainable artificial intelligence (XAI) methods to explain
the prediction of the black-box AI models in the thyroid nodule diagnosis
application. We propose new statistic-based XAI methods, namely Kernel Density
Estimation and Density map, to explain cases in which no nodule is detected. The
XAI methods are compared qualitatively and quantitatively, and the comparison
serves as feedback for improving data quality and model performance. Finally, we
conduct a survey to assess doctors' and patients' trust in the XAI explanations
of the model's decisions on thyroid nodule images.
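The listing does not include code, so the following is only a minimal sketch of how a statistic-based density-map explanation for a "no nodule detected" outcome could work with 2-D Kernel Density Estimation. The reference set of past nodule centers, the normalized coordinates, and every name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def nodule_density_map(reference_centers, image_shape):
    """Estimate where nodules typically occur via 2-D kernel density estimation.

    reference_centers: (N, 2) array of normalized (x, y) nodule centers
                       collected from detections on a reference set (assumed input).
    image_shape: (height, width) of the ultrasound image to explain.
    """
    kde = gaussian_kde(reference_centers.T)        # fit the KDE on past nodule locations
    h, w = image_shape
    # Evaluate the density over a normalized pixel grid covering the image.
    ys, xs = np.mgrid[0:1:complex(0, h), 0:1:complex(0, w)]
    grid = np.vstack([xs.ravel(), ys.ravel()])
    density = kde(grid).reshape(h, w)
    return density / density.max()                 # scale to [0, 1] for overlaying

# Toy stand-in for centers of previously detected nodules (hypothetical data).
rng = np.random.default_rng(0)
reference_centers = rng.uniform(0.2, 0.8, size=(200, 2))
density_map = nodule_density_map(reference_centers, image_shape=(256, 256))
```

Overlaying such a density map on a scan reported as negative points the reader to the regions where nodules are usually found, so the "no nodule" prediction can be checked visually instead of being accepted without explanation.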
Related papers
- Uncovering Knowledge Gaps in Radiology Report Generation Models through Knowledge Graphs [18.025481751074214]
We introduce a system, named ReXKG, which extracts structured information from processed reports to construct a radiology knowledge graph.
We conduct an in-depth comparative analysis of AI-generated and human-written radiology reports, assessing the performance of both specialist and generalist models.
arXiv Detail & Related papers (2024-08-26T16:28:56Z) - Robustness of Explainable Artificial Intelligence in Industrial Process Modelling [43.388607981317016]
We evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis.
We show the differences between XAI methods in their ability to correctly predict the true sensitivity of the modeled industrial process.
arXiv Detail & Related papers (2024-07-12T09:46:26Z) - A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
The current advances in artificial intelligence (AI) models enable automatic gait analysis for NDs identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z) - Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and Integration of Convolutional Neural Networks and Explainable AI [0.0]
The study presents an integrated framework combining Convolutional Neural Networks (CNNs) and Explainable Artificial Intelligence (XAI) for the enhanced diagnosis of breast cancer.
The methodology encompasses an elaborate data preprocessing pipeline and advanced data augmentation techniques to counteract dataset limitations.
A focal point of our study is the evaluation of XAI's effectiveness in interpreting model predictions.
arXiv Detail & Related papers (2024-04-05T05:00:21Z) - An Explainable AI Framework for Artificial Intelligence of Medical
Things [2.7774194651211217]
We leverage a custom XAI framework, incorporating techniques such as Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM); a minimal Grad-CAM sketch is given after this list.
The proposed framework enhances the effectiveness of strategic healthcare methods and aims to instill trust and promote understanding in AI-driven medical applications.
We apply the XAI framework to brain tumor detection as a use case, demonstrating accurate and transparent diagnosis.
arXiv Detail & Related papers (2024-03-07T01:08:41Z) - Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report
Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z) - Deciphering knee osteoarthritis diagnostic features with explainable
artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z) - GENIE-NF-AI: Identifying Neurofibromatosis Tumors using Liquid Neural
Network (LTC) trained on AACR GENIE Datasets [0.0]
We propose an interpretable AI approach to diagnose patients with neurofibromatosis.
Our proposed approach outperformed existing models with 99.86% accuracy.
arXiv Detail & Related papers (2023-04-26T10:28:59Z) - Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can
Existing Algorithms Fulfill Clinical Requirements? [42.75635888823057]
A heatmap is a form of explanation that highlights the features important for an AI model's prediction.
It is unknown how well heatmaps perform on explaining decisions on multi-modal medical images.
We propose the modality-specific feature importance (MSFI) metric to tackle this clinically important but technically ignored problem.
arXiv Detail & Related papers (2022-03-12T17:18:16Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
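As a hedged illustration of the heatmap-style explanations referenced in the entries above (for example, Grad-CAM in the explainable AI framework for the Artificial Intelligence of Medical Things), here is a minimal PyTorch sketch of Grad-CAM for a generic CNN classifier. The ResNet-50 backbone, the choice of target layer, and the random input are placeholders rather than the models used in any of the listed papers.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx=None):
    """Compute a Grad-CAM heatmap for `image`, a 1xCxHxW tensor."""
    activations, gradients = [], []

    # Hooks capture the target layer's forward activations and the gradient
    # of the class score with respect to those activations.
    fwd = target_layer.register_forward_hook(
        lambda module, inputs, output: activations.append(output))
    bwd = target_layer.register_full_backward_hook(
        lambda module, grad_in, grad_out: gradients.append(grad_out[0]))

    model.eval()
    scores = model(image)                                  # 1 x num_classes
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()
    fwd.remove()
    bwd.remove()

    acts, grads = activations[0], gradients[0]             # 1 x K x h x w
    weights = grads.mean(dim=(2, 3), keepdim=True)         # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach(), class_idx

# Placeholder backbone and input; a real use would load the trained diagnostic model.
model = models.resnet50(weights=None)
heatmap, predicted_class = grad_cam(model, model.layer4[-1], torch.randn(1, 3, 224, 224))
```

The heatmap weights the target layer's activations by the pooled gradients of the predicted class score, which is what makes the resulting map class-discriminative and suitable for overlaying on the input image.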