Analysis of Explainable Artificial Intelligence Methods on Medical Image
Classification
- URL: http://arxiv.org/abs/2212.10565v1
- Date: Sat, 10 Dec 2022 06:17:43 GMT
- Title: Analysis of Explainable Artificial Intelligence Methods on Medical Image
Classification
- Authors: Vinay Jogani, Joy Purohit, Ishaan Shivhare and Seema C Shrawne
- Abstract summary: The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems.
Medical image classification systems are being adopted due to their high accuracy and near parity with human physicians in many tasks.
Research techniques used to gain insight into these black-box models belong to the field of explainable artificial intelligence (XAI).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of deep learning in computer vision tasks such as image
classification has led to a rapid increase in the performance of such systems.
Due to this substantial increase in the utility of these systems, the use of
artificial intelligence in many critical tasks has exploded. In the medical
domain, medical image classification systems are being adopted due to their
high accuracy and near parity with human physicians in many tasks. However,
these artificial intelligence systems are extremely complex and are considered
black boxes by scientists, due to the difficulty in interpreting what exactly
led to the predictions made by these models. When these systems are being used
to assist high-stakes decision-making, it is extremely important to be able to
understand, verify and justify the conclusions reached by the model. The
research techniques used to gain insight into these black-box models belong to
the field of explainable artificial intelligence (XAI). In this paper, we
evaluated three different XAI methods across two convolutional neural network
models trained to classify lung cancer from histopathological images. We
visualized the outputs and analyzed the performance of these methods, in order
to better understand how to apply explainable artificial intelligence in the
medical domain.
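
As a concrete illustration of how a saliency-based XAI method can be applied to a convolutional histopathology classifier, the sketch below implements Grad-CAM using PyTorch forward/backward hooks. The abstract does not name the three XAI methods or the two CNN architectures the authors evaluated, so the ResNet-18 backbone, the three-class output, and all identifiers here are assumptions made for illustration only, not the paper's setup.

```python
# Minimal Grad-CAM sketch (PyTorch). Everything below -- the ResNet-18 backbone,
# the 3-class output, and the variable names -- is an illustrative assumption,
# not the authors' actual configuration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None, num_classes=3)  # stand-in for a trained lung-tissue classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()          # feature maps of the last conv block

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()    # gradients w.r.t. those feature maps

model.layer4.register_forward_hook(fwd_hook)        # layer4 = last conv block of ResNet-18
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image):
    """Return an HxW heat map in [0, 1] for the predicted class of a 1x3xHxW tensor."""
    logits = model(image)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()
    acts = activations["value"]                      # [1, C, h, w]
    grads = gradients["value"]                       # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel weights: global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

# Usage (hypothetical): heat = grad_cam(preprocessed_patch); overlay heat on the input patch.
```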
Related papers
- Automated Retinal Image Analysis and Medical Report Generation through Deep Learning [3.4447129363520337]
The increasing prevalence of retinal diseases poses a significant challenge to the healthcare system.
Traditional methods of generating medical reports from retinal images rely on manual interpretation.
This thesis investigates the potential of Artificial Intelligence to automate medical report generation for retinal images.
arXiv Detail & Related papers (2024-08-14T07:47:25Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Explainable AI for Bioinformatics: Methods, Tools, and Applications [1.6855835471222005]
Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models.
In this paper, we discuss the importance of explainability with a focus on bioinformatics.
arXiv Detail & Related papers (2022-12-25T21:00:36Z)
- An Interactive Interpretability System for Breast Cancer Screening with Deep Learning [11.28741778902131]
We propose an interactive system to take advantage of state-of-the-art interpretability techniques to assist radiologists with breast cancer screening.
Our system integrates a deep learning model into the radiologists' workflow and provides novel interactions to promote understanding of the model's decision-making process.
arXiv Detail & Related papers (2022-09-30T02:19:49Z)
- Diagnosis of Paratuberculosis in Histopathological Images Based on Explainable Artificial Intelligence and Deep Learning [0.0]
This study examines a new and original dataset using a deep learning algorithm and visualizes the output with gradient-weighted class activation mapping (Grad-CAM).
Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested.
The research results greatly help pathologists in the diagnosis of paratuberculosis.
arXiv Detail & Related papers (2022-08-02T18:05:26Z)
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond [3.4031539425106683]
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made.
Many machine learning algorithms cannot show how and why a decision has been made.
XAI becomes more and more crucial for deep learning powered applications, especially for medical and healthcare studies.
arXiv Detail & Related papers (2021-02-03T10:56:58Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach performs significantly better regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Learning Binary Semantic Embedding for Histology Image Classification and Retrieval [56.34863511025423]
We propose a novel method for Learning Binary Semantic Embedding (LBSE).
Based on the efficient and effective embedding, classification and retrieval are performed to provide interpretable computer-assisted diagnosis for histology images.
Experiments conducted on three benchmark datasets validate the superiority of LBSE under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:36:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.