GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease
Localization in X-ray images
- URL: http://arxiv.org/abs/2107.06442v1
- Date: Wed, 14 Jul 2021 01:27:07 GMT
- Title: GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease
Localization in X-ray images
- Authors: Baolian Qi, Gangming Zhao, Xin Wei, Chaowei Fang, Chengwei Pan,
Jinpeng Li, Huiguang He, and Licheng Jiao
- Abstract summary: Cross-region and cross-image relationship, as contextual and compensating information, is vital to obtain more consistent and integral regions.
We propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.
As a result, our approach achieves state-of-the-art results on the NIH chest X-ray dataset for weakly-supervised disease localization.
- Score: 35.18562405272593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Locating diseases in chest X-ray images with few careful annotations saves
large human effort. Recent works approached this task with innovative
weakly-supervised algorithms such as multi-instance learning (MIL) and class
activation maps (CAM), however, these methods often yield inaccurate or
incomplete regions. One reason is the neglect of the pathological
implications hidden in the relationship across anatomical regions within each
image and the relationship across images. In this paper, we argue that the
cross-region and cross-image relationship, as contextual and compensating
information, is vital to obtain more consistent and integral regions. To model
the relationship, we propose the Graph Regularized Embedding Network (GREN),
which leverages the intra-image and inter-image information to locate diseases
on chest X-ray images. GREN uses a pre-trained U-Net to segment the lung lobes,
and then models the intra-image relationship between the lung lobes using an
intra-image graph to compare different regions. Meanwhile, the relationship
between in-batch images is modeled by an inter-image graph to compare multiple
images. This process mimics the training and decision-making process of a
radiologist: comparing multiple regions and images for diagnosis. In order for
the deep embedding layers of the neural network to retain structural
information (important in the localization task), we use the Hash coding and
Hamming distance to compute the graphs, which are used as regularizers to
facilitate training. In this way, our approach achieves state-of-the-art
results on the NIH chest X-ray dataset for weakly-supervised disease
localization. Our code is accessible online.
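The hash-coding and Hamming-distance step described above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the sign-based hash, the distance threshold, and all function names are assumptions, and the squared-distance penalty stands in for whatever graph regularizer the paper actually uses.

```python
import numpy as np

def sign_hash(embeddings):
    """Binarize real-valued region/image embeddings into hash codes.

    A simple sign-based hash is assumed; GREN's exact hashing
    scheme may differ.
    """
    return (embeddings > 0).astype(np.uint8)

def hamming_distance_matrix(codes):
    """Pairwise Hamming distances between binary hash codes."""
    # XOR over the code dimension, then count differing bits.
    return (codes[:, None, :] ^ codes[None, :, :]).sum(axis=-1)

def graph_regularizer(embeddings, adjacency):
    """Laplacian-style penalty: embeddings of connected nodes
    (lung-lobe regions, or in-batch images) are pulled together."""
    n = embeddings.shape[0]
    penalty = 0.0
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                penalty += np.sum((embeddings[i] - embeddings[j]) ** 2)
    return penalty / max(int(adjacency.sum()), 1)

# Toy usage: four embeddings of dimension 8 (regions or images).
rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 8))
codes = sign_hash(emb)
dist = hamming_distance_matrix(codes)
# Connect nodes whose hash codes are close (threshold is assumed).
adj = (dist <= 3) & ~np.eye(4, dtype=bool)
reg = graph_regularizer(emb, adj)
```

The same construction serves both graphs: applied to lobe embeddings within one image it gives the intra-image graph, and applied to image embeddings within a batch it gives the inter-image graph.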
Related papers
- Class Attention to Regions of Lesion for Imbalanced Medical Image
Recognition [59.28732531600606]
We propose a framework named Class Attention to REgions of the lesion (CARE) to handle data imbalance issues.
The CARE framework needs bounding boxes to represent the lesion regions of rare diseases.
Results show that the CARE variants with automated bounding box generation are comparable to the original CARE framework.
arXiv Detail & Related papers (2023-07-19T15:19:02Z) - CheXRelNet: An Anatomy-Aware Model for Tracking Longitudinal
Relationships between Chest X-Rays [2.9212099078191764]
We propose CheXRelNet, a neural model that can track longitudinal pathology change relations between two Chest X-rays.
CheXRelNet incorporates local and global visual features, utilizes inter-image and intra-image anatomical information, and learns dependencies between anatomical region attributes.
arXiv Detail & Related papers (2022-08-08T02:22:09Z) - Context-aware Self-supervised Learning for Medical Images Using Graph
Neural Network [24.890564475121238]
We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
The structure of the graph is informed by anatomical correspondences between each patient and an anatomical atlas.
arXiv Detail & Related papers (2022-07-06T20:30:12Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Cross Chest Graph for Disease Diagnosis with Structural Relational
Reasoning [2.7148274921314615]
Locating lesions is important in the computer-aided diagnosis of X-ray images.
General weakly-supervised methods have failed to consider the characteristics of X-ray images.
We propose the Cross-chest Graph (CCG), which improves the performance of automatic lesion detection.
arXiv Detail & Related papers (2021-01-22T08:24:04Z) - Context Matters: Graph-based Self-supervised Representation Learning for
Medical Images [21.23065972218941]
We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
Our model can identify clinically relevant regions in the images.
arXiv Detail & Related papers (2020-12-11T16:26:07Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray
Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z) - Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete
Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.