Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest
X-rays
- URL: http://arxiv.org/abs/2206.12704v1
- Date: Sat, 25 Jun 2022 18:33:27 GMT
- Title: Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest
X-rays
- Authors: Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Kayhan
Batmanghelich
- Abstract summary: We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address weak annotation issues.
Our framework consists of a cascade of two networks, one responsible for identifying anatomical abnormalities and the second responsible for pathological observations.
Our results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization.
- Score: 17.15666977702355
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Creating a large-scale dataset of abnormality annotation on medical images is
a labor-intensive and costly task. Leveraging weak supervision from readily
available data such as radiology reports can compensate for the lack of
large-scale data for anomaly detection methods. However, most of the current
methods only
use image-level pathological observations, failing to utilize the relevant
anatomy mentions in reports. Furthermore, Natural Language Processing
(NLP)-mined weak labels are noisy due to label sparsity and linguistic
ambiguity. We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address
these issues of weak annotation. Our framework consists of a cascade of two
networks, one responsible for identifying anatomical abnormalities and the
second responsible for pathological observations. The critical component in our
framework is an anatomy-guided attention module that aids the downstream
observation network in focusing on the relevant anatomical regions generated by
the anatomy network. We use Positive Unlabeled (PU) learning to account for the
fact that lack of mention does not necessarily mean a negative label. Our
quantitative and qualitative results on the MIMIC-CXR dataset demonstrate the
effectiveness of AGXNet in disease and anatomical abnormality localization.
Experiments on the NIH Chest X-ray dataset show that the learned feature
representations are transferable and achieve state-of-the-art performance in
disease classification and competitive disease localization results. Our code
is available at https://github.com/batmanlab/AGXNet.
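To make the architecture concrete, the following is a minimal PyTorch sketch of the two-network cascade, the anatomy-guided attention, and a PU-style loss. It is not the authors' implementation (see the linked repository for that): the ResNet-18 backbones, the sigmoid attention map taken from the anatomy network's activation maps, the residual feature re-weighting, the placeholder label counts, and the use of the non-negative PU risk estimator of Kiryo et al. (2017) are all illustrative assumptions.

```python
# Minimal sketch of a two-stage, anatomy-guided cascade (NOT the official AGXNet code;
# see https://github.com/batmanlab/AGXNet for the authors' implementation).
import torch
import torch.nn as nn
import torchvision.models as models


class AnatomyNet(nn.Module):
    """First stage: predicts anatomical-abnormality labels and exposes a spatial attention map."""

    def __init__(self, num_anatomy_labels):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # B x 512 x h x w
        self.classifier = nn.Conv2d(512, num_anatomy_labels, kernel_size=1)

    def forward(self, x):
        feat = self.features(x)
        cam = self.classifier(feat)                          # per-anatomy activation maps
        logits = cam.mean(dim=(2, 3))                        # image-level logits via global pooling
        attn = torch.sigmoid(cam).amax(dim=1, keepdim=True)  # single-channel attention map
        return logits, attn


class ObservationNet(nn.Module):
    """Second stage: predicts pathological observations, guided by the anatomy attention map."""

    def __init__(self, num_observation_labels):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Conv2d(512, num_observation_labels, kernel_size=1)

    def forward(self, x, attn):
        feat = self.features(x)
        attn = nn.functional.interpolate(attn, size=feat.shape[-2:],
                                         mode="bilinear", align_corners=False)
        feat = feat * (1.0 + attn)                           # emphasize anatomy-flagged regions
        return self.classifier(feat).mean(dim=(2, 3))


def nnpu_risk(logits, pos_mask, prior):
    """Non-negative PU risk (Kiryo et al., 2017) for one binary finding:
    treats 'not mentioned' as unlabeled rather than negative."""
    loss = nn.functional.softplus                            # logistic loss
    pos, unl = logits[pos_mask], logits[~pos_mask]
    r_pos = loss(-pos).mean() if pos.numel() else logits.new_zeros(())
    r_pos_as_neg = loss(pos).mean() if pos.numel() else logits.new_zeros(())
    r_unl_as_neg = loss(unl).mean() if unl.numel() else logits.new_zeros(())
    return prior * r_pos + torch.clamp(r_unl_as_neg - prior * r_pos_as_neg, min=0.0)


if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)
    anat_net = AnatomyNet(num_anatomy_labels=46)             # placeholder label counts
    obs_net = ObservationNet(num_observation_labels=14)
    anat_logits, attn = anat_net(x)
    obs_logits = obs_net(x, attn)
    mentioned = torch.tensor([1, 0]).bool()                  # 1 = mentioned positive, 0 = unmentioned
    loss = nnpu_risk(obs_logits[:, 0], mentioned, prior=0.1)
    print(anat_logits.shape, obs_logits.shape, loss.item())
```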
Related papers
- How Does Pruning Impact Long-Tailed Multi-Label Medical Image
Classifiers? [49.35105290167996]
Pruning has emerged as a powerful technique for compressing deep neural networks, reducing memory usage and inference time without significantly affecting overall performance.
This work represents a first step toward understanding the impact of pruning on model behavior in deep long-tailed, multi-label medical image classification.
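For concreteness, the sketch below applies the kind of pruning studied here, global L1-magnitude pruning, using PyTorch's torch.nn.utils.prune utilities; the ResNet-18 stand-in and the 50% sparsity level are arbitrary illustrative choices, not the paper's experimental setup.

```python
# Illustrative global magnitude pruning with torch.nn.utils.prune
# (an example of the technique under study, not the paper's setup).
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision.models as models

model = models.resnet18(weights=None)   # stand-in classifier

# Collect all conv/linear weights and prune the 50% smallest-magnitude entries globally.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.5)

# Make the pruning permanent (folds the masks into the weights).
for module, name in to_prune:
    prune.remove(module, name)

total = sum(m.weight.numel() for m, _ in to_prune)
zeros = sum(int((m.weight == 0).sum()) for m, _ in to_prune)
print(f"global weight sparsity: {zeros / total:.2%}")
```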
arXiv Detail & Related papers (2023-08-17T20:40:30Z)
- Class Attention to Regions of Lesion for Imbalanced Medical Image
Recognition [59.28732531600606]
We propose a framework named Class Attention to REgions of the lesion (CARE) to handle data imbalance issues.
The CARE framework needs bounding boxes to represent the lesion regions of rare diseases.
Results show that the CARE variants with automated bounding box generation are comparable to the original CARE framework.
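As a hypothetical illustration of how box annotations can steer class attention toward lesion regions, the sketch below rescales a box to feature-map resolution and penalizes attention mass that falls outside it; CARE's actual attention mechanism is not reproduced here.

```python
# Generic box-guided attention penalty (illustration only, not the CARE implementation).
import torch

def box_to_mask(box, feat_hw, img_hw):
    """Map a pixel-space box (x1, y1, x2, y2) to a binary mask at feature-map resolution."""
    H, W = feat_hw
    h_img, w_img = img_hw
    x1, y1, x2, y2 = box
    r1, r2 = int(y1 / h_img * H), max(int(y2 / h_img * H), int(y1 / h_img * H) + 1)
    c1, c2 = int(x1 / w_img * W), max(int(x2 / w_img * W), int(x1 / w_img * W) + 1)
    mask = torch.zeros(H, W)
    mask[r1:r2, c1:c2] = 1.0
    return mask

def outside_box_penalty(attn_map, box_mask, eps=1e-6):
    """Fraction of (normalized) attention mass that falls outside the lesion box."""
    attn = attn_map / (attn_map.sum() + eps)
    return (attn * (1.0 - box_mask)).sum()

attn = torch.rand(7, 7)                                    # e.g. a class activation map
mask = box_to_mask((40, 60, 120, 160), feat_hw=(7, 7), img_hw=(224, 224))
print(outside_box_penalty(attn, mask).item())              # add this term to the training loss
```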
arXiv Detail & Related papers (2023-07-19T15:19:02Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns and, at inference time, identify anomalies (unseen/modified patterns) in the image.
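A toy sketch of the general in-painting idea: mask a patch, reconstruct it, and treat a large reconstruction residual as evidence of an anomaly. The tiny encoder-decoder and patch loop below are stand-ins for illustration only; SQUID's space-aware memory queues are not modeled.

```python
# Generic in-painting-based anomaly scoring (illustrates the broad idea, not SQUID itself).
import torch
import torch.nn as nn

class TinyInpainter(nn.Module):
    """Stand-in encoder-decoder that reconstructs a masked radiograph."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def anomaly_map(model, image, patch=32):
    """Mask each patch, reconstruct it, and use the residual as an anomaly score."""
    score = torch.zeros_like(image)
    _, _, H, W = image.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            masked = image.clone()
            masked[..., y:y + patch, x:x + patch] = 0.0
            recon = model(masked)
            score[..., y:y + patch, x:x + patch] = (
                recon[..., y:y + patch, x:x + patch] - image[..., y:y + patch, x:x + patch]
            ).abs()
    return score  # high residual = pattern the model could not "in-paint" = possible anomaly

model = TinyInpainter().eval()
with torch.no_grad():
    print(anomaly_map(model, torch.rand(1, 1, 64, 64)).mean().item())
```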
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Multi-Label Generalized Zero Shot Learning for the Classification of
Disease in Chest Radiographs [0.7734726150561088]
We propose a zero shot learning network that can simultaneously predict multiple seen and unseen diseases in chest X-ray images.
The network is end-to-end trainable and requires no independent pre-training for the offline feature extractor.
Our network outperforms two strong baselines in terms of recall, precision, F1 score, and area under the receiver operating characteristic curve.
arXiv Detail & Related papers (2021-07-14T09:04:20Z)
- OXnet: Omni-supervised Thoracic Disease Detection from Chest X-rays [7.810011959069686]
OXnet is the first deep omni-supervised thoracic disease detection network.
It uses as much available supervision as possible for CXR diagnosis.
It outperforms competitive methods by significant margins.
arXiv Detail & Related papers (2021-04-07T16:12:31Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
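One plausible, generic reading of K-nearest neighbor smoothing is to average each image's predicted probabilities with those of its nearest neighbors in feature space; the sketch below implements that interpretation with scikit-learn on random placeholder embeddings, and may differ from the paper's exact KNNS formulation.

```python
# Generic K-nearest-neighbor smoothing of per-image predictions in feature space
# (an illustration of the smoothing idea, not the paper's exact method).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 128))     # image embeddings from a trained CNN (placeholder)
probs = rng.uniform(size=(100, 14))        # per-image disease probabilities (placeholder)

k = 5
nn_index = NearestNeighbors(n_neighbors=k + 1).fit(features)
_, idx = nn_index.kneighbors(features)     # idx[:, 0] is the sample itself

# Replace each prediction with the average over the sample and its k nearest neighbors.
smoothed = probs[idx].mean(axis=1)
print(smoothed.shape)                      # (100, 14)
```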
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen.
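The uncertainty-to-graph idea can be sketched generically: keep low-entropy pixels as labeled seeds and re-label the uncertain ones by propagation over a feature graph. In the sketch below, scikit-learn's LabelSpreading stands in for the paper's graph convolutional network, and all inputs are random placeholders.

```python
# Uncertainty-driven refinement sketch: confident pixels become labeled seeds,
# uncertain pixels are refined by graph-based semi-supervised learning.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[1.0, 1.0], size=400)          # softmax outputs for 400 pixels
feats = rng.normal(size=(400, 8))                          # per-pixel features (placeholder)

entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
labels = probs.argmax(axis=1)
labels[entropy > 0.5] = -1                                 # -1 marks "uncertain / unlabeled"

refined = LabelSpreading(kernel="knn", n_neighbors=10).fit(feats, labels)
print(refined.transduction_[:10])                          # refined labels for all pixels
```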
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray
Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
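A common backbone for this kind of weak supervision is a class activation map (CAM): classify with global average pooling, then re-weight the feature channels by the target class's classifier weights to obtain a coarse localization map. The sketch below shows that generic recipe with a ResNet-18 stand-in and a hypothetical pneumothorax-vs-normal head; it is not necessarily this paper's exact method.

```python
# Standard CAM-style localization from image-level labels only (generic illustration).
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights=None)
features = nn.Sequential(*list(backbone.children())[:-2])   # B x 512 x 7 x 7 for 224 input
fc = nn.Linear(512, 2)                                       # hypothetical pneumothorax vs. normal head

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    fmap = features(x)                                       # 1 x 512 x 7 x 7
    logits = fc(fmap.mean(dim=(2, 3)))                       # image-level prediction (GAP + FC)
    # CAM: weight each feature channel by its contribution to the target class.
    cam = torch.einsum("c,chw->hw", fc.weight[1], fmap[0])
    cam = torch.relu(cam)
    cam = cam / (cam.max() + 1e-6)                           # coarse localization heat map
print(logits.shape, cam.shape)                               # torch.Size([1, 2]) torch.Size([7, 7])
```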
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
- Localization of Critical Findings in Chest X-Ray without Local
Annotations Using Multi-Instance Learning [0.0]
Deep learning models commonly suffer from a lack of explainability.
Deep learning models require locally annotated training data in the form of pixel-level labels or bounding box coordinates.
In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning.
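In a typical multi-instance learning formulation, the image is a bag of patches, each patch receives a score, and max pooling over patch scores yields the image-level prediction, so the patch scores themselves serve as a localization map. The sketch below illustrates that generic recipe, not this paper's specific model.

```python
# Minimal MIL formulation: patch scores + max pooling to an image-level logit
# (generic illustration, not this paper's architecture).
import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                      # one score per spatial cell ("instance")
        )

    def forward(self, x):
        scores = self.net(x)                          # B x 1 x H' x W' patch-level evidence
        bag_logit = scores.amax(dim=(2, 3))           # MIL max pooling -> image-level logit
        return bag_logit, scores

model = PatchScorer()
bag_logit, patch_scores = model(torch.randn(2, 1, 128, 128))
# Train with image-level labels via BCEWithLogitsLoss on bag_logit; at test time,
# patch_scores localizes the finding without any box or pixel annotations.
print(bag_logit.shape, patch_scores.shape)            # torch.Size([2, 1]) torch.Size([2, 1, 32, 32])
```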
arXiv Detail & Related papers (2020-01-23T21:29:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.