CrossEAI: Using Explainable AI to generate better bounding boxes for
Chest X-ray images
- URL: http://arxiv.org/abs/2310.19835v1
- Date: Sun, 29 Oct 2023 17:48:39 GMT
- Title: CrossEAI: Using Explainable AI to generate better bounding boxes for
Chest X-ray images
- Authors: Jinze Zhao
- Abstract summary: In medical imaging diagnosis, disease classification usually achieves high accuracy, but generated bounding boxes have much lower Intersection over Union (IoU).
Previous work shows that bounding boxes generated by these methods are usually larger than the ground truth and largely cover non-disease areas.
This paper utilizes the advantages of post-hoc explainable AI methods to generate bounding boxes for chest x-ray image diagnosis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability is critical for deep learning applications in healthcare which
are mandated to provide interpretations to both patients and doctors according
to legal regulations and responsibilities. Explainable AI methods, such as
feature importance using integrated gradients, model approximation using LIME,
or neuron activation and layer conductance, are used to provide interpretations
for certain health risk predictions. In medical imaging diagnosis, disease
classification usually achieves high accuracy, but generated bounding boxes
have much lower Intersection over Union (IoU). Different methods with
self-supervised or semi-supervised learning strategies have been proposed, but
few improvements have been identified for bounding box generation. Previous
work shows that bounding boxes generated by these methods are usually larger
than the ground truth and largely cover non-disease areas. This paper utilizes
the advantages of post-hoc explainable AI methods to generate bounding boxes
for chest x-ray image diagnosis. In this work, we propose CrossEAI, which combines
heatmap and gradient map to generate more targeted bounding boxes. By using
weighted average of Guided Backpropagation and Grad-CAM++, we are able to
generate bounding boxes which are closer to the ground truth. We evaluate our
model on a chest x-ray dataset. The performance improves significantly over the
state-of-the-art model in the same setting, with a $9\%$ improvement on average
across all diseases over all IoU thresholds. Moreover, as a model that does not
use any ground truth bounding box information for training, we achieve the same
general performance as the model that uses $80\%$ of the ground truth bounding
box information for training.
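The fusion step described above can be sketched roughly as follows. The mixing weight `alpha`, the threshold values, and the synthetic maps are illustrative placeholders only; in CrossEAI the Grad-CAM++ and Guided Backpropagation maps would come from a trained classifier, and the paper's actual weighting and thresholding schemes may differ.

```python
import numpy as np

def combine_heatmaps(grad_campp, guided_bp, alpha=0.5):
    """Weighted average of two saliency maps, each min-max normalized first.
    `alpha` is an illustrative mixing weight, not a value from the paper."""
    def normalize(m):
        m = m - m.min()
        peak = m.max()
        return m / peak if peak > 0 else m
    return alpha * normalize(grad_campp) + (1 - alpha) * normalize(guided_bp)

def heatmap_to_bbox(heatmap, threshold=0.5):
    """Keep pixels above `threshold` * peak and return the tightest
    (x_min, y_min, x_max, y_max) box around them, or None if empty."""
    mask = heatmap >= threshold * heatmap.max()
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def iou(a, b):
    """Intersection over Union of two inclusive pixel boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]) + 1)
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]) + 1)
    inter = ix * iy
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)

# Synthetic stand-ins for real saliency maps: a broad Grad-CAM++-style blob
# and a tighter Guided Backpropagation-style response inside it.
cam = np.zeros((64, 64)); cam[20:30, 15:35] = 1.0
gbp = np.zeros((64, 64)); gbp[22:28, 18:30] = 1.0
fused = combine_heatmaps(cam, gbp)

loose = heatmap_to_bbox(fused, threshold=0.5)  # covers the broad blob
tight = heatmap_to_bbox(fused, threshold=0.6)  # only where both maps agree
print(loose, tight, round(iou(loose, tight), 2))
```

The point of the fusion is visible even in this toy setup: raising the threshold on the fused map keeps only regions where both the class-discriminative map and the fine-grained gradient map agree, which is how a tighter, more targeted box than either map alone can be obtained.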
Related papers
- Class Attention to Regions of Lesion for Imbalanced Medical Image
Recognition [59.28732531600606]
We propose a framework named Class Attention to REgions of the lesion (CARE) to handle data imbalance issues.
The CARE framework needs bounding boxes to represent the lesion regions of rare diseases.
Results show that the CARE variants with automated bounding box generation are comparable to the original CARE framework.
arXiv Detail & Related papers (2023-07-19T15:19:02Z) - Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z) - GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease
Localization in X-ray images [35.18562405272593]
Cross-region and cross-image relationship, as contextual and compensating information, is vital to obtain more consistent and integral regions.
We propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.
By means of this, our approach achieves the state-of-the-art result on NIH chest X-ray dataset for weakly-supervised disease localization.
arXiv Detail & Related papers (2021-07-14T01:27:07Z) - Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z) - Explaining COVID-19 and Thoracic Pathology Model Predictions by
Identifying Informative Input Features [47.45835732009979]
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
Features attribution methods identify the importance of input features for the output prediction.
We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics, and human-independent feature importance metrics on NIH Chest X-ray8 and BrixIA datasets.
arXiv Detail & Related papers (2021-04-01T11:42:39Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Evaluating the Clinical Realism of Synthetic Chest X-Rays Generated
Using Progressively Growing GANs [0.0]
Chest x-rays are a vital tool in the workup of many patients.
There is an ever pressing need for greater quantities of labelled data to develop new diagnostic tools.
Previous work has sought to address these concerns by creating class-specific GANs that synthesise images to augment training data.
arXiv Detail & Related papers (2020-10-07T11:47:22Z) - Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises the challenges.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
arXiv Detail & Related papers (2020-06-06T06:48:40Z) - Exploration of Interpretability Techniques for Deep COVID-19
Classification using Chest X-ray Images [10.01138352319106]
Five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble have been used in this paper to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images.
The mean Micro-F1 score of the models for COVID-19 classifications ranges from 0.66 to 0.875, and is 0.89 for the Ensemble of the network models.
arXiv Detail & Related papers (2020-06-03T22:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.