AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-ray
- URL: http://arxiv.org/abs/2105.09937v1
- Date: Thu, 20 May 2021 17:58:02 GMT
- Title: AnaXNet: Anatomy Aware Multi-label Finding Classification in Chest X-ray
- Authors: Nkechinyere N. Agu, Joy T. Wu, Hanqing Chao, Ismini Lourentzou, Arjun
Sharma, Mehdi Moradi, Pingkun Yan, James Hendler
- Abstract summary: We propose a novel multi-label chest X-ray classification model that accurately classifies the image finding and also localizes the findings to their correct anatomical regions.
The latter utilizes graph convolutional networks, which enable our model to learn not only the label dependency but also the relationship between the anatomical regions in the chest X-ray.
- Score: 9.087789790647786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radiologists usually observe anatomical regions of chest X-ray images as well
as the overall image before making a decision. However, most existing deep
learning models only look at the entire X-ray image for classification, failing
to utilize important anatomical information. In this paper, we propose a novel
multi-label chest X-ray classification model that accurately classifies the
image finding and also localizes the findings to their correct anatomical
regions. Specifically, our model consists of two modules, the detection module
and the anatomical dependency module. The latter utilizes graph convolutional
networks, which enable our model to learn not only the label dependency but
also the relationship between the anatomical regions in the chest X-ray. We
further utilize a method to efficiently create an adjacency matrix for the
anatomical regions using the correlation of the label across the different
regions. Detailed experiments and analysis of our results show the
effectiveness of our method when compared to the current state-of-the-art
multi-label chest X-ray image classification methods while also providing
accurate location information.
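The abstract's idea of building an adjacency matrix for the anatomical regions from label correlations can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the function name, array layout, conditional-probability formulation, and threshold value are all assumptions.

```python
import numpy as np

def build_adjacency(labels: np.ndarray, tau: float = 0.4) -> np.ndarray:
    """Sketch: derive a region-to-region adjacency matrix from how often
    findings co-occur across anatomical regions.

    labels: (num_images, num_regions, num_findings) binary array of
            per-region finding annotations.
    tau:    co-occurrence threshold (an assumed hyperparameter).
    """
    # A region is "active" in an image if it has at least one finding.
    present = labels.max(axis=2)                    # (N, R)
    co = present.T @ present                        # (R, R) co-occurrence counts
    occ = np.clip(present.sum(axis=0), 1, None)     # per-region activation counts
    p = co / occ[:, None]                           # row i: P(region j active | region i active)
    adj = (p >= tau).astype(float)                  # binarize by threshold
    np.fill_diagonal(adj, 1.0)                      # self-loops for the GCN
    return adj
```

The resulting matrix could then serve as the graph structure for the anatomical dependency module's graph convolutions; note that conditioning on each row's region makes the matrix asymmetric, which is why thresholding is applied per direction.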
Related papers
- Region-based Contrastive Pretraining for Medical Image Retrieval with
Anatomic Query [56.54255735943497]
We introduce RegionMIR, a novel region-based contrastive pretraining framework for Medical Image Retrieval with anatomic queries.
arXiv Detail & Related papers (2023-05-09T16:46:33Z)
- Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods [2.080328156648695]
We develop an anatomically penalized convolutional multi-stage solution on the National Institutes of Health (NIH) data set.
Our method provides a natural way to limit the folding percentage of the warp field to 1/6 of that of the state of the art.
We statistically evaluate the benefits of our method and highlight the limits of currently used metrics for registration of chest X-rays.
arXiv Detail & Related papers (2023-01-23T09:42:49Z)
- Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study [75.05049024176584]
We present a benchmark study of the long-tailed learning problem in the specific domain of thorax diseases on chest X-rays.
We focus on learning from naturally distributed chest X-ray data, optimizing classification accuracy over not only the common "head" classes, but also the rare yet critical "tail" classes.
The benchmark consists of two chest X-ray datasets for 19- and 20-way thorax disease classification, containing classes with as many as 53,000 and as few as 7 labeled training images.
arXiv Detail & Related papers (2022-08-29T04:34:15Z)
- CheXRelNet: An Anatomy-Aware Model for Tracking Longitudinal Relationships between Chest X-Rays [2.9212099078191764]
We propose CheXRelNet, a neural model that can track longitudinal pathology change relations between two Chest X-rays.
CheXRelNet incorporates local and global visual features, utilizes inter-image and intra-image anatomical information, and learns dependencies between anatomical region attributes.
arXiv Detail & Related papers (2022-08-08T02:22:09Z)
- Interpretation of Chest x-rays affected by bullets using deep transfer learning [0.8189696720657246]
Deep learning in radiology provides the opportunity to classify, detect and segment different diseases automatically.
In the proposed study, we worked on a non-trivial aspect of medical imaging where we classified and localized chest X-rays affected by bullets.
This is the first study on the detection and classification of radiographs affected by bullets using deep learning.
arXiv Detail & Related papers (2022-03-25T05:53:45Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns, and at inference it can identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease Localization in X-ray images [35.18562405272593]
Cross-region and cross-image relationship, as contextual and compensating information, is vital to obtain more consistent and integral regions.
We propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.
By means of this, our approach achieves the state-of-the-art result on NIH chest X-ray dataset for weakly-supervised disease localization.
arXiv Detail & Related papers (2021-07-14T01:27:07Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
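A generic cross-modal contrastive objective of the kind described above, where image features and radiomic features from the same X-ray form positive pairs and all other in-batch pairings serve as negatives, can be sketched with an InfoNCE-style loss. This is a hedged sketch of the general technique, not the paper's exact loss, feedback loop, or architecture; the temperature value is an assumption.

```python
import numpy as np

def info_nce(img_feats: np.ndarray, rad_feats: np.ndarray,
             temperature: float = 0.1) -> float:
    """InfoNCE-style cross-modal contrastive loss sketch.

    img_feats, rad_feats: (batch, dim) arrays; row i of each is assumed
    to come from the same chest X-ray (the positive pair).
    """
    # L2-normalize so dot products are cosine similarities.
    a = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    b = rad_feats / np.linalg.norm(rad_feats, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature                 # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimize their negative log-probability.
    return float(-np.mean(np.diag(log_prob)))
```

Minimizing this loss pulls each image embedding toward the radiomic embedding of the same X-ray while pushing it away from the radiomic embeddings of the other images in the batch.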
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
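The K-nearest neighbor smoothing component named above can be illustrated with a minimal sketch: blend each image's predicted label distribution with the mean prediction of its nearest neighbors in feature space. The function name, cosine-similarity metric, and mixing weight are illustrative assumptions, not details from the paper.

```python
import numpy as np

def knn_smooth(feats: np.ndarray, preds: np.ndarray,
               k: int = 3, alpha: float = 0.5) -> np.ndarray:
    """Sketch of KNN-based prediction smoothing.

    feats: (num_images, dim) feature vectors.
    preds: (num_images, num_classes) per-image predicted probabilities.
    Returns a convex blend of each prediction with its k neighbors' mean.
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T                      # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)     # exclude each image from its own neighbors
    idx = np.argsort(-sim, axis=1)[:, :k]   # indices of k most similar images
    neigh = preds[idx].mean(axis=1)    # mean neighbor prediction per image
    return alpha * preds + (1.0 - alpha) * neigh
```

Because the output is a convex combination, smoothed probabilities stay in the valid range, and visually similar X-rays are nudged toward consistent disease predictions.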
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model performing well when tested on the same dataset as training data starts to perform poorly when it is tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.