Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays
- URL: http://arxiv.org/abs/2207.04394v3
- Date: Thu, 14 Jul 2022 09:16:33 GMT
- Title: Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays
- Authors: Yan Han, Gregory Holste, Ying Ding, Ahmed Tewfik, Yifan Peng, and
Zhangyang Wang
- Abstract summary: Radiomics-Guided Transformer (RGT) fuses *global* image information with *local* knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
- Score: 65.88435151891369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Before the recent success of deep learning methods for automated medical
image analysis, practitioners used handcrafted radiomic features to
quantitatively describe local patches of medical images. However, extracting
discriminative radiomic features relies on accurate pathology localization,
which is difficult to acquire in real-world settings. Despite advances in
disease classification and localization from chest X-rays, many approaches fail
to incorporate clinically-informed domain knowledge. For these reasons, we
propose a Radiomics-Guided Transformer (RGT) that fuses \textit{global} image
information with \textit{local} knowledge-guided radiomics information to
provide accurate cardiopulmonary pathology localization and classification
\textit{without any bounding box annotations}. RGT consists of an image
Transformer branch, a radiomics Transformer branch, and fusion layers that
aggregate image and radiomic information. Using the learned self-attention of
its image branch, RGT extracts a bounding box for which to compute radiomic
features, which are further processed by the radiomics branch; learned image
and radiomic features are then fused and mutually interact via cross-attention
layers. Thus, RGT utilizes a novel end-to-end feedback loop that can bootstrap
accurate pathology localization only using image-level disease labels.
Experiments on the NIH ChestXRay dataset demonstrate that RGT outperforms prior
works in weakly supervised disease localization (by an average margin of 3.6\%
over various intersection-over-union thresholds) and classification (by 1.1\%
in average area under the receiver operating characteristic curve). We publicly
release our codes and pre-trained models at
\url{https://github.com/VITA-Group/chext}.
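The feedback loop described in the abstract has two key mechanical pieces: deriving a bounding box from the image branch's self-attention, and fusing the two branches via cross-attention. The following is a minimal NumPy sketch of those two steps, not the authors' implementation; all function names, thresholds, and tensor shapes are illustrative assumptions.

```python
import numpy as np

def attention_to_bbox(attn_map, threshold=0.6):
    """Binarize a 2-D self-attention map and return the tight bounding box
    (x_min, y_min, x_max, y_max) around above-threshold activations."""
    mask = attn_map >= threshold * attn_map.max()
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def cross_attention(queries, keys_values):
    """Single-head scaled dot-product cross-attention: tokens from one
    branch (queries) attend over tokens from the other branch."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys_values

# Toy 14x14 attention map with one active region
attn = np.zeros((14, 14))
attn[4:9, 5:11] = 1.0
print(attention_to_bbox(attn))  # -> (5, 4, 10, 8)

rng = np.random.default_rng(0)
img_tokens = rng.standard_normal((196, 64))  # image-branch tokens
rad_tokens = rng.standard_normal((10, 64))   # radiomics-branch tokens
fused = cross_attention(img_tokens, rad_tokens)
print(fused.shape)  # (196, 64)
```

In the paper's pipeline the box would be used to crop the X-ray and compute radiomic features (the radiomics branch's input), so gradients from the fused prediction can bootstrap better localization from image-level labels alone.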
Related papers
- DeepLOC: Deep Learning-based Bone Pathology Localization and
Classification in Wrist X-ray Images [1.45543311565555]
This paper presents a novel approach for bone pathology localization and classification in wrist X-ray images.
The proposed methodology addresses two critical challenges in wrist X-ray analysis: accurate localization of bone pathologies and precise classification of abnormalities.
arXiv Detail & Related papers (2023-08-24T12:06:10Z)
- Act Like a Radiologist: Radiology Report Generation across Anatomical
Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- Localization supervision of chest x-ray classifiers using label-specific
eye-tracking annotation [4.8035104863603575]
Eye-tracking (ET) data can be collected in a non-intrusive way during the clinical workflow of a radiologist.
We use ET data recorded from radiologists while dictating CXR reports to train CNNs.
We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of abnormalities.
arXiv Detail & Related papers (2022-07-20T09:26:29Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Using Radiomics as Prior Knowledge for Thorax Disease Classification and
Localization in Chest X-rays [14.679677447702653]
We develop an end-to-end framework, ChexRadiNet, that can utilize the radiomics features to improve the abnormality classification performance.
We evaluate the ChexRadiNet framework using three public datasets: NIH ChestX-ray, CheXpert, and MIMIC-CXR.
arXiv Detail & Related papers (2020-11-25T04:16:38Z) - Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report
Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.