Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop
- URL: http://arxiv.org/abs/2104.04968v1
- Date: Sun, 11 Apr 2021 09:16:29 GMT
- Title: Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop
- Authors: Yan Han, Chongyan Chen, Ahmed Tewfik, Benjamin Glicksberg, Ying Ding,
Yifan Peng, Zhangyang Wang
- Abstract summary: We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
- Score: 63.81818077092879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building a highly accurate predictive model for abnormality classification and localization in chest X-rays usually requires
a large number of manually annotated labels and pixel regions (bounding boxes)
of abnormalities. However, it is expensive to acquire such annotations,
especially the bounding boxes. Recently, contrastive learning has shown strong
promise in leveraging unlabeled natural images to produce highly generalizable
and discriminative features. However, extending its power to the medical image
domain is under-explored and highly non-trivial, since medical images are much
less amenable to data augmentations. In contrast, their domain knowledge, as
well as multi-modality information, is often crucial. To bridge this gap, we
propose an end-to-end semi-supervised cross-modal contrastive learning
framework that simultaneously performs disease classification and localization
tasks. The key knob of our framework is a unique positive sampling approach
tailored for the medical images, by seamlessly integrating radiomic features as
an auxiliary modality. Specifically, we first apply an image encoder to
classify the chest X-rays and to generate the image features. We next leverage
Grad-CAM to highlight the crucial (abnormal) regions for chest X-rays (even
when unannotated), from which we extract radiomic features. The radiomic
features are then passed through another dedicated encoder to act as the
positive sample for the image features generated from the same chest X-ray. In
this way, our framework constitutes a feedback loop for image and radiomic
modality features to mutually reinforce each other. Their contrasting yields
cross-modality representations that are both robust and interpretable.
Extensive experiments on the NIH Chest X-ray dataset demonstrate that our
approach outperforms existing baselines in both classification and localization
tasks.
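The core mechanism described above, in which the radiomic features extracted from an X-ray serve as the positive sample for that same X-ray's image features, can be sketched as a standard InfoNCE-style contrastive loss. This is a minimal illustration under assumed details, not the authors' implementation: the feature vectors, batch pairing, and temperature value are placeholders, and the real framework operates on encoder outputs rather than raw lists.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cross_modal_info_nce(image_feats, radiomic_feats, temperature=0.1):
    """InfoNCE over a batch of paired features: the radiomic vector
    extracted from X-ray i is the positive for image vector i, while the
    radiomic vectors of every other X-ray in the batch act as negatives."""
    losses = []
    for i, img in enumerate(image_feats):
        logits = [cosine(img, rad) / temperature for rad in radiomic_feats]
        m = max(logits)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_denom - logits[i])  # -log softmax at the positive
    return sum(losses) / len(losses)
```

Perfectly aligned image/radiomic pairs drive this loss toward zero, while mismatched pairs keep it high; minimizing it is what pushes the two encoders to agree on representations of the same X-ray, forming the feedback loop the abstract describes.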
Related papers
- Introducing Shape Prior Module in Diffusion Model for Medical Image
Segmentation [7.7545714516743045]
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM).
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
arXiv Detail & Related papers (2023-09-12T03:05:00Z)
- Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report.
We propose the Contrastive Attention (CA) model, which compares the current input image with normal images to distill the contrastive information.
We achieve state-of-the-art results on two public datasets.
arXiv Detail & Related papers (2021-06-13T11:20:31Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Using Radiomics as Prior Knowledge for Thorax Disease Classification and
Localization in Chest X-rays [14.679677447702653]
We develop an end-to-end framework, ChexRadiNet, that can utilize the radiomics features to improve the abnormality classification performance.
We evaluate the ChexRadiNet framework using three public datasets: NIH ChestX-ray, CheXpert, and MIMIC-CXR.
arXiv Detail & Related papers (2020-11-25T04:16:38Z)
- Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from its training source starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray
Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.