Generative Residual Attention Network for Disease Detection
- URL: http://arxiv.org/abs/2110.12984v1
- Date: Mon, 25 Oct 2021 14:15:57 GMT
- Title: Generative Residual Attention Network for Disease Detection
- Authors: Euyoung Kim and Soochahn Lee and Kyoung Mu Lee
- Abstract summary: We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
- Score: 51.60842580044539
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate identification and localization of abnormalities from radiology
images serve as a critical role in computer-aided diagnosis (CAD) systems.
Building a highly generalizable system usually requires a large amount of data
with high-quality annotations, including disease-specific global and
localization information. However, in medical images, only a limited number of
high-quality images and annotations are available due to annotation expenses.
In this paper, we explore this problem by presenting a novel approach for
disease generation in X-rays using conditional generative adversarial
learning. Specifically, given a chest X-ray image from a source domain, we
generate a corresponding radiology image in a target domain while preserving
the identity of the patient. We then use the generated X-ray image in the
target domain to augment our training to improve the detection performance. We
also present a unified framework that simultaneously performs disease
generation and localization. We evaluate the proposed approach on the X-ray
image dataset provided by the Radiological Society of North America (RSNA),
surpassing the state-of-the-art baseline detection algorithms.
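The augmentation step described in the abstract can be sketched as pairing each source-domain X-ray with a generated target-domain counterpart and adding both to the training set. This is a minimal sketch only: `generate_target_domain` is a hypothetical stand-in for the paper's conditional GAN generator (here it merely adds noise so the example runs), and the labels and shapes are assumed for illustration.

```python
import numpy as np

def generate_target_domain(image, target_label, rng):
    """Hypothetical stand-in for the conditional generator G(x, c): the real
    model synthesizes a target-domain (e.g. diseased) version of the source
    image while preserving patient identity. Here we only add small noise
    so the sketch is runnable."""
    perturbation = rng.normal(0.0, 0.01, size=image.shape)
    return np.clip(image + perturbation, 0.0, 1.0)

def augment_training_set(images, labels, target_label, rng):
    """Append one generated target-domain image per real image."""
    aug_images, aug_labels = list(images), list(labels)
    for img in images:
        aug_images.append(generate_target_domain(img, target_label, rng))
        aug_labels.append(target_label)
    return np.stack(aug_images), np.array(aug_labels)

rng = np.random.default_rng(0)
source_images = rng.random((4, 64, 64))   # 4 toy "normal" chest X-rays
source_labels = np.zeros(4, dtype=int)    # 0 = source domain (normal)
X, y = augment_training_set(source_images, source_labels,
                            target_label=1, rng=rng)  # 1 = target (diseased)
```

The doubled set `(X, y)` would then be fed to the detector, which is where the reported performance gain comes from.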
Related papers
- Generation of Radiology Findings in Chest X-Ray by Leveraging
Collaborative Knowledge [6.792487817626456]
The cognitive task of interpreting medical images remains the most critical and often time-consuming step in the radiology workflow.
This work focuses on reducing the workload of radiologists who spend most of their time either writing or narrating the Findings.
Unlike past research, which addresses radiology report generation as a single-step image captioning task, we have further taken into consideration the complexity of interpreting CXR images.
arXiv Detail & Related papers (2023-06-18T00:51:28Z)
- Unsupervised Iterative U-Net with an Internal Guidance Layer for Vertebrae Contrast Enhancement in Chest X-Ray Images [1.521162809610347]
We propose a novel and robust approach to improve the quality of X-ray images by iteratively training a deep neural network.
Our framework includes an embedded internal guidance layer that enhances the fine structures of spinal vertebrae in chest X-ray images.
Experimental results demonstrate that our proposed method surpasses existing detail enhancement methods in terms of BRISQUE scores.
arXiv Detail & Related papers (2023-06-06T19:36:11Z)
- Local Contrastive Learning for Medical Image Recognition [0.0]
Local Region Contrastive Learning (LRCLR) is a flexible fine-tuning framework that adds layers for significant image region selection and cross-modality interaction.
Our results on an external validation set of chest x-rays suggest that LRCLR identifies significant local image regions and provides meaningful interpretation against radiology text.
arXiv Detail & Related papers (2023-03-24T17:04:26Z)
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Weakly Supervised Thoracic Disease Localization via Disease Masks [29.065791290544983]
Weakly supervised localization methods have been proposed that use only image-level annotation.
We propose a spatial attention method using disease masks that describe the areas where diseases mainly occur.
We show that the proposed method results in superior localization performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-01-25T06:52:57Z)
- Computer-aided abnormality detection in chest radiographs in a clinical setting via domain-adaptation [0.23624125155742057]
Deep learning (DL) models are being deployed at medical centers to aid radiologists for diagnosis of lung conditions from chest radiographs.
These pre-trained DL models generalize poorly in clinical settings because of differences in data distribution between publicly available and privately held radiographs.
In this work, we introduce a domain-shift detection and removal method to overcome this problem.
arXiv Detail & Related papers (2020-12-19T01:01:48Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model performing well when tested on the same dataset as training data starts to perform poorly when it is tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
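The adversarial strategy in the last entry, forcing a network to learn a source-invariant representation, is commonly realized with a gradient-reversal step. The toy numpy sketch below is an assumed illustration, not that paper's architecture: `W` is a linear domain classifier standing in for a deep discriminator, and the reversed gradient is what would be backpropagated into the feature extractor.

```python
import numpy as np

def grad_reversal_update(feats, W, domains, lr=0.1, lam=1.0):
    """One adversarial step (toy linear stand-in for the real network):
    the domain classifier W descends its logistic loss, while the feature
    extractor receives the *reversed* gradient (-lam), pushing features
    toward being indistinguishable across source datasets."""
    logits = feats @ W
    probs = 1.0 / (1.0 + np.exp(-logits))                    # sigmoid
    grad_W = feats.T @ (probs - domains) / len(domains)      # classifier grad
    W_new = W - lr * grad_W                                  # classifier step
    grad_feats = -lam * np.outer(probs - domains, W) / len(domains)
    return W_new, grad_feats

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))           # 4 samples, 3-dim features
W = np.zeros(3)                           # untrained domain classifier
domains = np.array([0.0, 1.0, 0.0, 1.0])  # which dataset each sample is from
W_new, g = grad_reversal_update(feats, W, domains)
```

With `W = 0` the classifier is uninformative, so the reversed gradient `g` is zero; once `W` starts separating domains, `g` becomes nonzero and erodes exactly the feature directions the classifier relies on.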
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.