Using Radiomics as Prior Knowledge for Thorax Disease Classification and
Localization in Chest X-rays
- URL: http://arxiv.org/abs/2011.12506v3
- Date: Fri, 9 Jul 2021 20:29:44 GMT
- Title: Using Radiomics as Prior Knowledge for Thorax Disease Classification and
Localization in Chest X-rays
- Authors: Yan Han, Chongyan Chen, Liyan Tang, Mingquan Lin, Ajay Jaiswal, Song
Wang, Ahmed Tewfik, George Shih, Ying Ding, Yifan Peng
- Abstract summary: We develop an end-to-end framework, ChexRadiNet, that can utilize the radiomics features to improve the abnormality classification performance.
We evaluate the ChexRadiNet framework using three public datasets: NIH ChestX-ray, CheXpert, and MIMIC-CXR.
- Score: 14.679677447702653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chest X-rays have become one of the most common medical examinations
because of their noninvasiveness. The number of chest X-ray images has
skyrocketed, but reading them is still performed manually by radiologists, which
creates heavy workloads and delays. Traditionally, radiomics, a subfield of
radiology that can extract a large number of quantitative features from medical
images, demonstrated its potential to facilitate medical imaging diagnosis
before the deep learning era. In this paper, we develop an end-to-end framework,
ChexRadiNet, that utilizes radiomics features to improve abnormality
classification performance. Specifically, ChexRadiNet first applies a
lightweight yet efficient triplet-attention mechanism to classify the chest
X-rays and highlight the abnormal regions. Then it uses the generated class
activation map to extract radiomic features, which further guides our model to
learn more robust image features. After a number of iterations and with the
help of radiomic features, our framework can converge to more accurate image
regions. We evaluate the ChexRadiNet framework using three public datasets: NIH
ChestX-ray, CheXpert, and MIMIC-CXR. We find that ChexRadiNet outperforms the
state-of-the-art on both disease detection (0.843 AUC) and localization
(0.679 at T(IoU) = 0.1). We will make the code publicly available at
https://github.com/bionlplab/lung_disease_detection_amia2021, with the hope
that this method can facilitate the development of automatic systems with a
higher-level understanding of the radiological world.
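
The iterative loop the abstract describes (classify with attention, take the class activation map, extract radiomic features from the highlighted region, and feed them back to guide training) can be sketched roughly as follows. All names here are hypothetical illustrations, not the authors' implementation: the triplet-attention classifier is replaced by a toy stand-in, and a real pipeline would compute radiomics with a dedicated library such as pyradiomics.

```python
import numpy as np

def triplet_attention(feat):
    # Hypothetical stand-in for the paper's lightweight triplet-attention
    # classifier: returns per-class scores and a class activation map (CAM).
    cam = feat.mean(axis=0)            # (H, W) saliency proxy
    scores = feat.mean(axis=(1, 2))    # one score per channel/"class"
    return scores, cam

def extract_radiomics(image, cam, threshold=0.5):
    # Crude radiomic proxies (mean intensity, variance, region area) computed
    # inside the CAM-highlighted region; a real system would extract dozens
    # of first-order and texture features here.
    mask = cam >= threshold * cam.max()
    region = image[mask]
    return np.array([region.mean(), region.var(), mask.mean()])

# One iteration of the classify -> CAM -> radiomics loop.
rng = np.random.default_rng(0)
feat = rng.random((3, 8, 8))   # toy feature map (C, H, W)
image = rng.random((8, 8))     # toy chest X-ray
scores, cam = triplet_attention(feat)
radiomic = extract_radiomics(image, cam)
```

In the full framework, `radiomic` would re-enter training as a guidance signal so that, over iterations, the model converges to more accurate image regions.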
Related papers
- Computer-Aided Diagnosis of Thoracic Diseases in Chest X-rays using hybrid CNN-Transformer Architecture [1.0878040851637998]
An automated computer-aided diagnosis system can interpret chest X-rays to augment radiologists by providing actionable insights.
In this study, we applied a novel architecture augmenting the DenseNet121 Convolutional Neural Network (CNN) with a multi-head self-attention mechanism.
Experimental results show that augmenting CNN with self-attention has potential in diagnosing different thoracic diseases from chest X-rays.
arXiv Detail & Related papers (2024-04-18T01:46:31Z)
- Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis [88.86777314004044]
We propose a 3D Gaussian splatting-based framework, namely X-Gaussian, for X-ray novel view visualization.
Experiments show that our X-Gaussian outperforms state-of-the-art methods by 6.5 dB while requiring less than 15% of their training time and offering over 73x faster inference.
arXiv Detail & Related papers (2024-03-07T00:12:08Z)
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- Artificial Intelligence for Automatic Detection and Classification Disease on the X-Ray Images [0.0]
This work presents rapid detection of diseases in the lung using the efficient Deep learning pre-trained RepVGG algorithm.
We apply artificial intelligence to automatically detect and highlight affected areas of the lungs.
arXiv Detail & Related papers (2022-11-14T03:51:12Z)
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
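
The cross-modal scheme above (radiomic features of the same X-ray serving as the positive sample for the image features) can be illustrated with a minimal numerical sketch. This assumes an InfoNCE-style objective, which the summary does not specify; the encoders, dimensions, and temperature are hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(img_feat, pos_feat, neg_feats, temp=0.1):
    # InfoNCE-style loss: the radiomic embedding of the SAME X-ray is the
    # positive; embeddings from other images act as negatives.
    sims = [cosine(img_feat, pos_feat)] + [cosine(img_feat, n) for n in neg_feats]
    logits = np.array(sims) / temp
    logits = logits - logits.max()        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])              # low when the positive ranks first

rng = np.random.default_rng(1)
img = rng.normal(size=16)                 # image-encoder output (toy)
pos = img + 0.1 * rng.normal(size=16)     # radiomic view of the same X-ray
negs = [rng.normal(size=16) for _ in range(4)]  # other patients' features
loss = contrastive_loss(img, pos, negs)
```

Because the radiomic view is close to its paired image embedding, the loss is small; swapping in an unrelated vector as the "positive" drives it up, which is exactly the pressure that aligns the two modalities.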
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Pneumonia Detection on Chest X-ray using Radiomic Features and Contrastive Learning [26.031452674698787]
We propose a novel framework that leverages radiomics features and contrastive learning to detect pneumonia in chest X-ray.
Experiments on the RSNA Pneumonia Detection Challenge dataset show that our model achieves superior results to several state-of-the-art models.
arXiv Detail & Related papers (2021-01-12T02:52:24Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model performing well when tested on the same dataset as training data starts to perform poorly when it is tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.