Explainable deep learning models in medical image analysis
- URL: http://arxiv.org/abs/2005.13799v1
- Date: Thu, 28 May 2020 06:31:05 GMT
- Title: Explainable deep learning models in medical image analysis
- Authors: Amitojdeep Singh, Sourya Sengupta, Vasudevan Lakshminarayanan
- Abstract summary: Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even beaten human experts on some of them.
Recent explainability studies aim to show the features that influence the decision of a model the most.
A review of the current applications of explainable deep learning for different medical imaging tasks is presented here.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods have been very effective for a variety of medical
diagnostic tasks and have even beaten human experts on some of them. However,
the black-box nature of the algorithms has restricted clinical use. Recent
explainability studies aim to show the features that influence the decision of
a model the most. The majority of literature reviews of this area have focused
on taxonomy, ethics, and the need for explanations. A review of the current
applications of explainable deep learning for different medical imaging tasks
is presented here. The various approaches, challenges for clinical deployment,
and the areas requiring further research are discussed from the practical
standpoint of a deep learning researcher designing a system for clinical
end-users.
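To make the attribution idea in the abstract concrete, below is a minimal, illustrative sketch of one of the simplest such techniques, input-gradient (vanilla) saliency, in PyTorch. The ResNet-18 backbone and random input are placeholders standing in for a trained medical image classifier and a scan; they are not part of the review itself.

    import torch
    import torchvision.models as models

    # Placeholder classifier; a real system would load trained weights.
    model = models.resnet18(weights=None)
    model.eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in scan
    logits = model(image)
    top_class = logits.argmax(dim=1).item()

    # Gradient of the winning logit w.r.t. the input pixels: large
    # magnitudes mark the pixels that most influence the decision.
    logits[0, top_class].backward()
    saliency = image.grad.abs().max(dim=1).values  # (1, 224, 224) heatmap

The resulting heatmap is typically overlaid on the input image so a clinician can see which regions drove the prediction.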
Related papers
- How Deep is your Guess? A Fresh Perspective on Deep Learning for Medical Time-Series Imputation [6.547981908229007]
We introduce a novel classification framework for time-series imputation using deep learning.
By identifying conceptual gaps in the literature and existing reviews, we devise a taxonomy grounded on the inductive bias of neural imputation frameworks.
arXiv Detail & Related papers (2024-07-11T12:33:28Z)
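As a hedged illustration of the kind of neural imputation framework such a taxonomy would classify, here is a minimal masked-autoencoder sketch in PyTorch; the architecture, shapes, and names are invented for exposition and are not the paper's framework.

    import torch
    import torch.nn as nn

    class ImputationAE(nn.Module):
        # Toy autoencoder that reconstructs artificially masked time steps.
        def __init__(self, seq_len=48, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(seq_len * 2, hidden), nn.ReLU(),
                nn.Linear(hidden, seq_len),
            )

        def forward(self, values, mask):
            # Concatenating the missingness mask lets the model tell
            # "observed zero" apart from "missing".
            x = torch.cat([values * mask, mask], dim=-1)
            return self.net(x)

    model = ImputationAE()
    values = torch.randn(8, 48)               # batch of univariate series
    mask = (torch.rand(8, 48) > 0.2).float()  # 1 = observed, 0 = missing
    recon = model(values, mask)
    loss = ((recon - values) ** 2 * mask).mean()  # fit observed entries only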
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES-20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
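The fusion-plus-auxiliary-task pattern the entry above describes can be sketched as follows; this is a hypothetical PyTorch outline with invented layer sizes, not the paper's architecture.

    import torch
    import torch.nn as nn

    class MultimodalAuxNet(nn.Module):
        def __init__(self, meta_dim=10, n_classes=6):
            super().__init__()
            self.encoder = nn.Sequential(  # image branch
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            self.meta = nn.Sequential(nn.Linear(meta_dim, 32), nn.ReLU())
            self.classifier = nn.Linear(16 * 4 * 4 + 32, n_classes)
            # Auxiliary head: predict a 2x super-resolved image.
            self.sr_head = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear"),
                nn.Conv2d(3, 3, 3, padding=1),
            )

        def forward(self, image, metadata):
            feats = torch.cat([self.encoder(image), self.meta(metadata)], dim=1)
            return self.classifier(feats), self.sr_head(image)

    model = MultimodalAuxNet()
    logits, sr = model(torch.rand(2, 3, 64, 64), torch.rand(2, 10))
    # Joint objective: classification plus a weighted auxiliary term
    # (random tensors stand in for labels and high-resolution targets).
    loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1])) \
         + 0.1 * nn.functional.mse_loss(sr, torch.rand(2, 3, 128, 128))

The auxiliary loss regularizes the shared encoder toward features that preserve fine image detail.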
- Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research [1.6574413179773761]
Our study uses counterfactual explanations to explore the applicability of "what if?" scenarios in medical research.
Our aim is to expand our understanding of magnetic resonance imaging (MRI) features used for diagnosing pediatric posterior fossa brain tumors.
arXiv Detail & Related papers (2023-07-05T09:14:09Z)
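A minimal "what if?" search in the spirit of counterfactual explanations can be written as gradient descent on the input; this sketch uses a placeholder ResNet-18 and target class, and is a generic Wachter-style recipe rather than the paper's method.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()  # placeholder classifier
    x = torch.rand(1, 3, 224, 224)                # original input
    target = torch.tensor([1])                    # desired counterfactual class

    cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([cf], lr=0.01)
    for _ in range(50):
        opt.zero_grad()
        # Push the prediction toward the target while staying close
        # to the original input.
        loss = torch.nn.functional.cross_entropy(model(cf), target) \
             + 0.1 * (cf - x).abs().mean()
        loss.backward()
        opt.step()
    # The difference cf - x highlights the minimal change that would
    # alter the model's decision.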
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
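The core prototype mechanism behind methods like ProtoPatient can be sketched in a few lines; the dimensions and encoder are placeholders, and the paper's label-wise attention is omitted for brevity.

    import torch
    import torch.nn as nn

    class ProtoClassifier(nn.Module):
        def __init__(self, dim=128, n_classes=5):
            super().__init__()
            self.prototypes = nn.Parameter(torch.randn(n_classes, dim))

        def forward(self, embedding):
            # Negative distance to each prototype acts as the class logit.
            return -torch.cdist(embedding, self.prototypes)

    encoder_output = torch.randn(4, 128)  # stand-in for a text encoder
    logits = ProtoClassifier()(encoder_output)
    pred = logits.argmax(dim=1)
    # The nearest prototype is itself the explanation: "this patient
    # looks like that (prototypical) patient".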
- Explainable Deep Learning Methods in Medical Image Classification: A Survey [0.0]
State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are rarely adopted in clinical practice, mainly due to their lack of interpretability.
The black-box nature of deep learning models has raised the need for strategies that explain their decision processes.
arXiv Detail & Related papers (2022-05-10T09:28:14Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
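The general pattern of pairing a diagnosis head with clinician-readable descriptor heads can be outlined as below; the layer sizes and descriptor vocabularies are invented, so treat this as a sketch of the idea rather than BI-RADS-Net itself.

    import torch
    import torch.nn as nn

    class ExplainableMultitask(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.malignancy = nn.Linear(8, 2)  # benign vs. malignant
            self.shape = nn.Linear(8, 3)       # e.g., oval/round/irregular
            self.margin = nn.Linear(8, 5)      # e.g., circumscribed, ...

        def forward(self, x):
            h = self.encoder(x)
            return self.malignancy(h), self.shape(h), self.margin(h)

    ultrasound = torch.rand(2, 1, 128, 128)  # placeholder scans
    diagnosis, shape, margin = ExplainableMultitask()(ultrasound)
    # Reporting predicted shape/margin terms next to the diagnosis
    # mirrors how clinicians justify BI-RADS assessments.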
- Recent advances and clinical applications of deep learning in medical image analysis [7.132678647070632]
We reviewed and summarized more than 200 recently published papers to provide a comprehensive overview of applying deep learning methods in various medical image analysis tasks.
In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical images.
arXiv Detail & Related papers (2021-05-27T18:05:12Z)
- Machine Learning Methods for Histopathological Image Analysis: A Review [62.14548392474976]
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis.
One of the ways of accelerating such an analysis is to use computer-aided diagnosis (CAD) systems.
arXiv Detail & Related papers (2021-02-07T19:12:32Z)
- Deep Learning for Medical Anomaly Detection -- A Survey [38.32234937094937]
This survey provides a thorough theoretical analysis of popular deep learning techniques in medical anomaly detection.
We contribute a coherent and systematic review of state-of-the-art techniques, comparing and contrasting their architectural differences as well as training algorithms.
In addition, we outline the key limitations of existing deep medical anomaly detection techniques and propose key research directions for further investigation.
arXiv Detail & Related papers (2020-12-04T02:09:37Z)
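One of the most popular recipes such surveys cover is reconstruction-based anomaly detection; the following hedged sketch trains a toy autoencoder on "normal" data and flags high-error inputs, with all sizes and the threshold chosen purely for illustration.

    import torch
    import torch.nn as nn

    ae = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))
    normal = torch.rand(32, 784)  # stand-in for healthy images (flattened)

    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(normal), normal)
        loss.backward()
        opt.step()

    test = torch.rand(5, 784)
    errors = ((ae(test) - test) ** 2).mean(dim=1)
    # Crude cutoff: anything far above the typical error is anomalous.
    is_anomalous = errors > errors.mean() + 2 * errors.std()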
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
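Plain activation maximization, the ancestor of the cycle-consistent variant above, optimizes an input so that one class logit grows; this placeholder sketch shows the unconstrained version the paper improves upon.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()  # placeholder classifier
    x = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=0.05)
    for _ in range(100):
        opt.zero_grad()
        # Minimizing the negative logit maximizes class-0 activation;
        # the small norm penalty keeps the image bounded.
        loss = -model(x)[0, 0] + 1e-4 * x.norm()
        loss.backward()
        opt.step()
    # The paper constrains this search with a CycleGAN so visualizations
    # stay on the manifold of realistic medical images.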
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide image segmentation that exploits a multiple instance learning scheme to train the models.
The proposed framework has been evaluated on multi-location and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
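Bare-bones multiple instance learning for weakly supervised WSI analysis can be sketched as scoring patches and max-pooling to a slide-level label; the patch scorer below is an invented toy, not the paper's model.

    import torch
    import torch.nn as nn

    patch_scorer = nn.Sequential(
        nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )

    patches = torch.rand(100, 3, 64, 64)       # one slide's patches
    scores = patch_scorer(patches).squeeze(1)  # per-patch tumor evidence
    bag_logit = scores.max()                   # slide-level prediction
    slide_label = torch.tensor(1.0)            # tumor present / absent
    loss = nn.functional.binary_cross_entropy_with_logits(bag_logit, slide_label)
    # At inference, the per-patch scores themselves form a coarse
    # segmentation map over the slide.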
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.