Diagnosis of Paratuberculosis in Histopathological Images Based on
Explainable Artificial Intelligence and Deep Learning
- URL: http://arxiv.org/abs/2208.01674v1
- Date: Tue, 2 Aug 2022 18:05:26 GMT
- Title: Diagnosis of Paratuberculosis in Histopathological Images Based on
Explainable Artificial Intelligence and Deep Learning
- Authors: Tuncay Yiğit, Nilgün Şengöz, Özlem Özmen, Jude Hemanth, Ali Hakan Işık
- Abstract summary: This study examines a new and original dataset with a deep learning algorithm and visualizes the output with gradient-weighted class activation mapping (Grad-CAM).
Both the decision-making processes and the explanations were verified, and the accuracy of the output was tested.
The research results greatly help pathologists in the diagnosis of paratuberculosis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence holds great promise in medical imaging, especially
histopathological imaging. However, artificial intelligence algorithms cannot
fully explain their decision-making process. This situation has brought the
explainability problem of artificial intelligence applications, i.e., the black
box problem, to the agenda: an algorithm simply responds to the given images
without stating its reasons. To overcome this problem and improve
explainability, explainable artificial intelligence (XAI) has come to the fore
and piqued the interest of many researchers. Against this backdrop, this study
examines a new and original dataset with a deep learning algorithm and
visualizes the output with gradient-weighted class activation mapping
(Grad-CAM), one of the XAI techniques. Afterwards, a detailed questionnaire
survey was conducted with pathologists on these images. Both the
decision-making processes and the explanations were verified, and the accuracy
of the output was tested. The research results greatly help pathologists in the
diagnosis of paratuberculosis.
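The abstract describes two technical steps: a convolutional classifier trained on the histopathology dataset and Grad-CAM heat maps computed from its last convolutional layer. The listing does not give the authors' exact network, preprocessing, or code, so the sketch below is a minimal, generic Grad-CAM pass in PyTorch; the pretrained ResNet-18, the choice of hooked layer, and the random stand-in input are assumptions for illustration only.

```python
# Minimal Grad-CAM sketch in PyTorch. The paper's own network, preprocessing,
# and dataset are not given in this listing; a pretrained ResNet-18, its last
# residual block, and a random stand-in tensor are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; Grad-CAM weights its spatial feature maps.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

# Stand-in input; in practice this would be a preprocessed histopathology tile.
x = torch.randn(1, 3, 224, 224)
logits = model(x)
class_idx = logits.argmax(dim=1).item()

model.zero_grad()
logits[0, class_idx].backward()

# Channel weights are the globally averaged gradients; the CAM is their
# weighted, ReLU-rectified sum over channels, upsampled to the input size.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * activations["value"]).sum(dim=1))     # (1, h, w)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # [0, 1] heatmap
print(cam.shape)                                              # (1, 1, 224, 224)
```

In practice, the normalized heat map is overlaid on the input tile so that a pathologist can see which regions drove the classification, which is the step the questionnaire survey in the paper evaluates.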
Related papers
- Adversarial Neural Networks in Medical Imaging Advancements and Challenges in Semantic Segmentation [6.88255677115486]
Recent advancements in artificial intelligence (AI) have precipitated a paradigm shift in medical imaging.
This paper systematically investigates the integration of deep learning -- a principal branch of AI -- into the semantic segmentation of brain images.
The paper highlights adversarial neural networks, a novel AI approach that not only automates but also refines the semantic segmentation process.
arXiv Detail & Related papers (2024-10-17T00:05:05Z) - A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
Current advances in artificial intelligence (AI) models enable automatic gait analysis for ND identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z) - Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models; a minimal input-gradient sketch appears after this list.
We introduce both human and quantitative evaluations to measure algorithm performance.
arXiv Detail & Related papers (2024-03-15T15:49:31Z) - The Brain Tumor Segmentation (BraTS) Challenge: Local Synthesis of Healthy Brain Tissue via Inpainting [50.01582455004711]
For brain tumor patients, the image acquisition time series typically starts with an already pathological scan.
Many algorithms are designed to analyze healthy brains and provide no guarantee for images featuring lesions.
Examples include, but are not limited to, algorithms for brain anatomy parcellation, tissue segmentation, and brain extraction.
Here, the participants explore inpainting techniques to synthesize healthy brain scans from lesioned ones.
arXiv Detail & Related papers (2023-05-15T20:17:03Z) - eXplainable Artificial Intelligence on Medical Images: A Survey [0.0]
A recent field in machine learning is explainable artificial intelligence, also known as XAI, which aims to explain the results of such black-box models.
This survey analyses several recent XAI studies applied to medical diagnosis research, providing some explainability of machine learning results across several different diseases.
arXiv Detail & Related papers (2023-05-12T14:25:42Z) - Analysis of Explainable Artificial Intelligence Methods on Medical Image
Classification [0.0]
The use of deep learning in computer vision tasks such as image classification has led to a rapid increase in the performance of such systems.
Medical image classification systems are being adopted due to their high accuracy and near parity with human physicians in many tasks.
The techniques used to gain insight into these black-box models belong to the field of explainable artificial intelligence (XAI).
arXiv Detail & Related papers (2022-12-10T06:17:43Z) - Explainable Deep Learning Methods in Medical Image Classification: A
Survey [0.0]
State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are rarely adopted in clinical practice, mainly due to their lack of interpretability.
The black-box nature of deep learning models has raised the need for strategies to explain their decision process.
arXiv Detail & Related papers (2022-05-10T09:28:14Z) - Unsupervised deep learning techniques for powdery mildew recognition
based on multispectral imaging [63.62764375279861]
This paper presents a deep learning approach to automatically recognize powdery mildew on cucumber leaves.
We focus on unsupervised deep learning techniques applied to multispectral imaging data.
We propose the use of autoencoder architectures to investigate two strategies for disease detection.
arXiv Detail & Related papers (2021-12-20T13:29:13Z) - Machine Learning Methods for Histopathological Image Analysis: A Review [62.14548392474976]
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis.
One of the ways of accelerating such an analysis is to use computer-aided diagnosis (CAD) systems.
arXiv Detail & Related papers (2021-02-07T19:12:32Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Explaining Clinical Decision Support Systems in Medical Imaging using
Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on cycle-consistent activation maximization (CycleGAN-based), which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
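As a companion to the gradient-based feature attribution entry above (referenced from that item), the following is a minimal input-gradient saliency sketch, one of the simplest attribution methods of that family. The ResNet-18 classifier and the random stand-in image are assumptions; the surveyed papers cover a much broader range of methods than this single example.

```python
# Minimal input-gradient saliency sketch (a basic gradient-based attribution);
# the ResNet-18 classifier and random stand-in image are assumptions, not the
# surveyed papers' implementations.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in preprocessed image
logits = model(x)
score = logits[0, logits.argmax(dim=1).item()]

model.zero_grad()
score.backward()

# Attribution per pixel: magnitude of the class-score gradient w.r.t. the input,
# reduced over the color channels and normalized for display.
saliency = x.grad.abs().max(dim=1).values             # (1, 224, 224)
saliency = saliency / (saliency.max() + 1e-8)
print(saliency.shape)
```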