Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for
COVID-19 Screening With Chest Radiography
- URL: http://arxiv.org/abs/2207.09312v1
- Date: Tue, 19 Jul 2022 14:55:42 GMT
- Authors: Kai Ma, Pengcheng Xi, Karim Habashy, Ashkan Ebadi, Stéphane
Tremblay, Alexander Wong
- Abstract summary: Building AI models with trustworthiness is important especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which have been shown to be prone to over-caution and overconfidence in making decisions.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
- Score: 70.37371604119826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building AI models with trustworthiness is important especially in regulated
areas such as healthcare. In tackling COVID-19, previous work uses
convolutional neural networks as the backbone architecture, which have been shown
to be prone to over-caution and overconfidence in making decisions, rendering them
less trustworthy -- a crucial flaw in the context of medical imaging. In this
study, we propose a feature learning approach using Vision Transformers, which
use an attention-based mechanism, and examine the representation learning
capability of Transformers as a new backbone architecture for medical imaging.
Through the task of classifying COVID-19 chest radiographs, we investigate
whether generalization capabilities benefit solely from Vision Transformers'
architectural advances. Quantitative and qualitative evaluations are conducted
on the trustworthiness of the models, through the use of "trust score"
computation and a visual explainability technique. We conclude that the
attention-based feature learning approach is promising in building trustworthy
deep learning models for healthcare.
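The "trust score" mentioned above is commonly computed (following Jiang et al., 2018) as the ratio between a test sample's distance to the nearest training sample of any *other* class and its distance to the nearest training sample of the *predicted* class, measured in feature space. A minimal sketch, assuming a simple nearest-neighbor variant over extracted features (the function name and exact distance choice are illustrative, not the paper's implementation):

```python
import numpy as np

def trust_score(train_feats, train_labels, test_feat, pred_label):
    """Ratio of distance-to-other-classes over distance-to-predicted-class.
    Higher values mean the prediction is better supported by the feature space."""
    # nearest training sample of the predicted class
    d_pred = min(np.linalg.norm(test_feat - f)
                 for f, y in zip(train_feats, train_labels) if y == pred_label)
    # nearest training sample of any other class
    d_other = min(np.linalg.norm(test_feat - f)
                  for f, y in zip(train_feats, train_labels) if y != pred_label)
    return d_other / (d_pred + 1e-12)  # epsilon guards against division by zero
```

A score well above 1 indicates the sample sits closer to its predicted class than to any competing class; scores below 1 flag predictions that the feature geometry does not support.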
Related papers
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
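Analyzing a scan "across both image and frequency domains" typically means feeding the network a k-space-like view alongside the raw pixels. A minimal sketch of one common way to do this, stacking the log-magnitude of the 2-D FFT as an extra input channel (the helper name and normalization are assumptions, not the paper's method):

```python
import numpy as np

def add_kspace_channel(img):
    """Stack an image with the log-magnitude of its 2-D FFT (a k-space-like view)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))  # center low frequencies
    log_mag = np.log1p(np.abs(spectrum))          # compress the dynamic range

    def norm(x):
        # min-max normalize each channel to [0, 1]
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return np.stack([norm(img.astype(float)), norm(log_mag)], axis=0)
```

The resulting (2, H, W) array can be passed to a CNN whose first convolution accepts two input channels.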
arXiv Detail & Related papers (2024-09-20T15:43:26Z) - Explainable Transformer Prototypes for Medical Diagnoses [7.680878119988482]
Self-attention feature of transformers contributes towards identifying crucial regions during the classification process.
Our research endeavors to innovate a unique attention block that underscores the correlation between 'regions' rather than 'pixels'.
A combined quantitative and qualitative methodological approach was used to demonstrate the effectiveness of the proposed method on a large-scale NIH chest X-ray dataset.
arXiv Detail & Related papers (2024-03-11T17:46:21Z) - Adversarial-Robust Transfer Learning for Medical Imaging via Domain
Assimilation [17.46080957271494]
The scarcity of publicly available medical images has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images.
A significant domain discrepancy exists between natural and medical images, which causes AI models to exhibit heightened vulnerability to adversarial attacks.
This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion.
arXiv Detail & Related papers (2024-02-25T06:39:15Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
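Image-text contrastive learning of the kind MLIP builds on is usually trained with a symmetric InfoNCE objective over matched image/text embedding pairs. A minimal sketch of that generic objective (CLIP-style; the function name and temperature value are illustrative, and this omits MLIP's divergence encoder and knowledge guidance):

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image/text pairs sit on the diagonal."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix

    def ce(l):
        # cross-entropy with targets on the diagonal, numerically stabilized
        l = l - l.max(axis=1, keepdims=True)
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # average image-to-text and text-to-image directions
    return (ce(logits) + ce(logits.T)) / 2
```

Perfectly aligned pairs drive the loss toward zero; mismatched pairings raise it, which is what pulls matching image and text embeddings together during training.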
arXiv Detail & Related papers (2024-02-03T05:48:50Z) - Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
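The concept-bottleneck pipeline above scores an image against a fixed set of text concepts, then classifies from those interpretable concept scores. A minimal sketch of that two-stage prediction, assuming precomputed embeddings from some vision-language model (the function name and linear classifier are illustrative, not the paper's exact design):

```python
import numpy as np

def concept_bottleneck_predict(img_emb, concept_embs, class_weights):
    """Map an image to concept scores, then to class logits via a linear layer."""
    # cosine similarity between the image and each concept embedding
    img = img_emb / np.linalg.norm(img_emb)
    cons = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    concept_scores = cons @ img            # shape: (num_concepts,)
    # the interpretable bottleneck: classes are linear in concept scores
    logits = class_weights @ concept_scores
    return concept_scores, logits
```

Because the final decision is linear in the concept scores, each prediction can be explained by pointing at the concepts that contributed most.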
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - A Trustworthy Framework for Medical Image Analysis with Deep Learning [71.48204494889505]
TRUDLMIA is a trustworthy deep learning framework for medical image analysis.
It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
arXiv Detail & Related papers (2022-12-06T05:30:22Z) - Understanding The Robustness in Vision Transformers [140.1090560977082]
Self-attention may promote robustness through improved mid-level representations.
We propose a family of fully attentional networks (FANs) that strengthen this capability.
Our model achieves a state-of-the-art 87.1% accuracy and 35.8% mCE on ImageNet-1k and ImageNet-C, respectively, with 76.8M parameters.
arXiv Detail & Related papers (2022-04-26T17:16:32Z) - Performance or Trust? Why Not Both. Deep AUC Maximization with
Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
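AUC maximization is typically trained through a differentiable pairwise surrogate that penalizes any negative sample scored close to or above a positive one. A minimal sketch of a generic squared-hinge surrogate (this is an illustrative stand-in, not the specific min-max loss used in the paper):

```python
import numpy as np

def auc_surrogate_loss(scores_pos, scores_neg, margin=1.0):
    """Pairwise squared-hinge surrogate for AUC over all positive/negative pairs."""
    # difference of every positive score against every negative score
    diffs = scores_pos[:, None] - scores_neg[None, :]
    # penalize pairs where the positive fails to beat the negative by the margin
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)
```

When every positive outscores every negative by at least the margin, the loss is zero, which corresponds to a perfect AUC of 1 on the training pairs.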
arXiv Detail & Related papers (2021-12-14T21:16:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.