Explainable Transformer Prototypes for Medical Diagnoses
- URL: http://arxiv.org/abs/2403.06961v1
- Date: Mon, 11 Mar 2024 17:46:21 GMT
- Title: Explainable Transformer Prototypes for Medical Diagnoses
- Authors: Ugur Demir, Debesh Jha, Zheyuan Zhang, Elif Keles, Bradley Allen,
Aggelos K. Katsaggelos, Ulas Bagci
- Abstract summary: Self-attention feature of transformers contributes towards identifying crucial regions during the classification process.
Our research develops a unique attention block that underscores the correlation between 'regions' rather than 'pixels'.
A combined quantitative and qualitative methodological approach was used to demonstrate the effectiveness of the proposed method on a large-scale NIH chest X-ray dataset.
- Score: 7.680878119988482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deployments of artificial intelligence in medical diagnostics mandate not
just accuracy and efficacy but also trust, emphasizing the need for
explainability in machine decisions. The recent trend in automated medical
image diagnostics leans towards the deployment of Transformer-based
architectures, credited to their impressive capabilities. Since the
self-attention feature of transformers helps identify crucial regions during
the classification process, it enhances the trustworthiness of these methods.
However, the intricacies of these attention mechanisms may fall short of
effectively pinpointing the regions of interest that directly influence AI
decisions. Our research develops a unique attention block that underscores the
correlation between 'regions' rather than 'pixels'.
To address this challenge, we introduce an innovative system grounded in
prototype learning, featuring an advanced self-attention mechanism that goes
beyond conventional ad-hoc visual explanation techniques by offering
comprehensible visual insights. A combined quantitative and qualitative
methodological approach was used to demonstrate the effectiveness of the
proposed method on the large-scale NIH chest X-ray dataset. Experimental
results showed that the proposed method offers a promising direction for
explainability, which can lead to more trustworthy systems and facilitate
easier and more rapid adoption of such technology into routine clinical
practice. The code is available at www.github.com/NUBagcilab/r2r_proto.
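The linked repository holds the authoritative implementation; as a rough mental model of the 'region-to-region' idea, the hypothetical sketch below softly groups ViT patch tokens into a small set of learned prototype regions and then computes self-attention between those regions rather than between individual patches. Every name here (RegionPrototypeAttention, num_prototypes, the soft-assignment step) is an illustrative assumption, not the authors' design.

```python
import torch
import torch.nn as nn

class RegionPrototypeAttention(nn.Module):
    """Hypothetical sketch: attention between learned region prototypes
    ('regions') instead of between individual patch tokens ('pixels')."""

    def __init__(self, dim: int, num_prototypes: int = 16):
        super().__init__()
        # Learnable prototype vectors, one per candidate region.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, patch_tokens: torch.Tensor):
        # patch_tokens: (B, N, dim) patch embeddings from a ViT backbone.
        B = patch_tokens.size(0)
        protos = self.prototypes.unsqueeze(0).expand(B, -1, -1)

        # Softly assign each patch to prototypes: (B, P, N).
        assign = torch.softmax(protos @ patch_tokens.transpose(1, 2), dim=-1)
        regions = assign @ patch_tokens   # region summaries: (B, P, dim)

        # Self-attention over region summaries: region-to-region correlation.
        out, region_attn = self.attn(regions, regions, regions)
        return out, assign, region_attn


# Usage: pool region features for classification; 'assign' maps each
# region back onto the patch grid, giving a visual explanation.
tokens = torch.randn(2, 196, 128)            # e.g. a 14x14 ViT patch grid
out, assign, region_attn = RegionPrototypeAttention(dim=128)(tokens)
features = out.mean(dim=1)                   # feed to a classifier head
```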
Related papers
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
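As a hedged illustration of the dual-domain idea (not the paper's actual architecture), the sketch below stacks an MRI slice with the log-magnitude of its 2-D FFT so a CNN sees both image-domain and frequency-domain (k-space-like) features; the channel layout and toy network are assumptions.

```python
import torch
import torch.nn as nn

def dual_domain_input(img: torch.Tensor) -> torch.Tensor:
    """Stack an image with the log-magnitude of its 2-D FFT.
    img: (B, 1, H, W) grayscale slice (illustrative layout)."""
    kspace = torch.fft.fft2(img)             # complex frequency domain
    mag = torch.log1p(torch.abs(kspace))     # compress dynamic range
    return torch.cat([img, mag], dim=1)      # (B, 2, H, W)

# Any 2-channel CNN can consume this; a tiny stand-in classifier:
cnn = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
logits = cnn(dual_domain_input(torch.randn(4, 1, 64, 64)))
```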
arXiv Detail & Related papers (2024-09-20T15:43:26Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer Learning Method [0.0]
This research paper focuses on Acute Lymphoblastic Leukemia (ALL), a form of blood cancer prevalent in children and teenagers.
It proposes an automated detection approach using computer-aided diagnostic (CAD) models, leveraging deep learning techniques.
The proposed method achieved an impressive 98.38% accuracy, outperforming other tested models.
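The summary names transfer learning but not the exact setup; a generic fine-tuning sketch (the backbone, class count, and freezing policy are assumptions, not the paper's configuration) looks like this:

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and replace the classifier head for a
# binary ALL-vs-healthy decision (illustrative choice of backbone).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                      # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable head
# Train only model.fc (optionally unfreezing later layers) on the
# blood-smear images, then evaluate on a held-out split.
```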
arXiv Detail & Related papers (2023-12-01T10:37:02Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE.
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
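FAR training is specific to the paper; as a hedged stand-in, a standard adversarial-training step (FGSM on continuous inputs such as token embeddings) conveys the general recipe:

```python
import torch

def adversarial_training_step(model, loss_fn, x, y, eps=0.01):
    """Generic FGSM adversarial-training step on continuous inputs
    (e.g. token embeddings); not the paper's FAR training."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).detach()     # worst-case perturbation
    # Optimize on clean and adversarial examples together.
    total = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    total.backward()
    return total.item()
```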
arXiv Detail & Related papers (2023-07-05T08:11:40Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Parameter-Efficient Transformer with Hybrid Axial-Attention for Medical Image Segmentation [10.441315305453504]
We propose a parameter-efficient transformer to explore intrinsic inductive bias via position information for medical image segmentation.
Building on this, we present a novel Hybrid Axial-Attention (HAA) that incorporates spatial pixel-wise information and relative position information as inductive biases.
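Axial attention itself is a standard building block; the sketch below shows self-attention along a single spatial axis with a learned relative-position bias, which conveys the flavor of position-aware axial attention but is not the paper's HAA module.

```python
import torch
import torch.nn as nn

class AxialAttention1D(nn.Module):
    """Generic sketch: attention along one axis of a feature map with a
    learned relative-position bias (an illustration, not the paper's HAA)."""

    def __init__(self, dim: int, axis_len: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # One learnable bias per relative offset, shared across heads.
        self.rel_bias = nn.Parameter(torch.zeros(2 * axis_len - 1))
        idx = torch.arange(axis_len)
        self.register_buffer("rel_idx", idx[None, :] - idx[:, None] + axis_len - 1)

    def forward(self, x):                        # x: (B, axis_len, dim)
        bias = self.rel_bias[self.rel_idx]       # (L, L) additive bias
        out, _ = self.attn(x, x, x, attn_mask=bias)
        return out

# For a (B, C, H, W) map: fold W into the batch and attend over rows,
# then fold H and attend over columns, for H+W rather than H*W cost.
```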
arXiv Detail & Related papers (2022-11-17T13:54:55Z)
- An Interactive Interpretability System for Breast Cancer Screening with Deep Learning [11.28741778902131]
We propose an interactive system to take advantage of state-of-the-art interpretability techniques to assist radiologists with breast cancer screening.
Our system integrates a deep learning model into the radiologists' workflow and provides novel interactions to promote understanding of the model's decision-making process.
arXiv Detail & Related papers (2022-09-30T02:19:49Z)
- Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for COVID-19 Screening With Chest Radiography [70.37371604119826]
Building trustworthy AI models is important, especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which have been shown to be prone to over-caution and overconfidence in making decisions.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
arXiv Detail & Related papers (2022-07-19T14:55:42Z)
- Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes for a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine the inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on cycle-consistent activation maximization, which generates high-quality visualizations of classifier decisions even on smaller datasets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.