Anatomy X-Net: A Semi-Supervised Anatomy Aware Convolutional Neural
Network for Thoracic Disease Classification
- URL: http://arxiv.org/abs/2106.05915v1
- Date: Thu, 10 Jun 2021 17:01:23 GMT
- Title: Anatomy X-Net: A Semi-Supervised Anatomy Aware Convolutional Neural
Network for Thoracic Disease Classification
- Authors: Uday Kamal, Mohammad Zunaed, Nusrat Binta Nizam, Taufiq Hasan
- Abstract summary: This work proposes an anatomy-aware attention-based architecture named Anatomy X-Net.
It prioritizes spatial features guided by pre-identified anatomy regions.
Our proposed method sets a new state of the art on the official NIH test set with an AUC score of 0.8439.
- Score: 3.888080947524813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thoracic disease detection from chest radiographs using deep learning methods
has been an active area of research in the last decade. Most previous methods
attempt to focus on the diseased organs of the image by identifying spatial
regions responsible for significant contributions to the model's prediction. In
contrast, expert radiologists first locate the prominent anatomical structures
before determining if those regions are anomalous. Therefore, integrating
anatomical knowledge within deep learning models could bring substantial
improvement in automatic disease classification. This work proposes an
anatomy-aware attention-based architecture named Anatomy X-Net that
prioritizes spatial features guided by pre-identified anatomy regions.
We leverage a semi-supervised learning method using the JSRT dataset containing
organ-level annotation to obtain the anatomical segmentation masks (for lungs
and heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses
the pre-trained DenseNet-121 as the backbone network with two corresponding
structured modules, the Anatomy Aware Attention (AAA) and Probabilistic
Weighted Average Pooling (PWAP), in a cohesive framework for anatomical
attention learning. Our proposed method sets a new state of the art
on the official NIH test set with an AUC score of 0.8439, demonstrating the
efficacy of utilizing anatomy segmentation knowledge to improve thoracic
disease classification. Furthermore, Anatomy X-Net yields an average AUC of
0.9020 on the Stanford CheXpert dataset, improving on existing methods and
demonstrating the generalizability of the proposed framework.
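The abstract names two modules, Anatomy Aware Attention (AAA) and Probabilistic Weighted Average Pooling (PWAP), but does not give their formulations here. As a rough illustration only: assuming PWAP amounts to softmax-normalizing a map of spatial attention logits into a probability map and using it to pool each channel of the backbone's feature map (the function names, shapes, and this interpretation are all assumptions, not the paper's exact definition), a minimal sketch is:

```python
import math

def softmax2d(scores):
    """Flatten an H x W map of logits and normalize it into a
    spatial probability map (numerically stabilized by the max)."""
    flat = [s for row in scores for s in row]
    m = max(flat)
    exps = [math.exp(s - m) for s in flat]
    z = sum(exps)
    h, w = len(scores), len(scores[0])
    return [[exps[i * w + j] / z for j in range(w)] for i in range(h)]

def pwap(features, scores):
    """Hypothetical Probabilistic Weighted Average Pooling.

    features: list of C spatial maps, each H x W (e.g. DenseNet-121
              backbone activations).
    scores:   H x W attention logits (in Anatomy X-Net these would be
              guided by the anatomy segmentation masks).
    Returns a C-dimensional pooled feature vector: each channel is
    averaged with the shared spatial probability map as weights."""
    probs = softmax2d(scores)
    h, w = len(scores), len(scores[0])
    return [sum(fmap[i][j] * probs[i][j]
                for i in range(h) for j in range(w))
            for fmap in features]
```

With uniform logits this reduces to ordinary global average pooling; sharply peaked logits instead select the feature value at the attended location, which is the behavior an anatomy-guided attention map would exploit.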
Related papers
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks, with a margin of up to 3.3% over strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z) - Teaching AI the Anatomy Behind the Scan: Addressing Anatomical Flaws in Medical Image Segmentation with Learnable Prior [34.54360931760496]
Key anatomical features, such as the number of organs, their shapes and relative positions, are crucial for building a robust multi-organ segmentation model.
We introduce a novel architecture called the Anatomy-Informed Network (AIC-Net).
AIC-Net incorporates a learnable input termed "Anatomical Prior", which can be adapted to patient-specific anatomy.
arXiv Detail & Related papers (2024-03-27T10:46:24Z) - AG-CRC: Anatomy-Guided Colorectal Cancer Segmentation in CT with
Imperfect Anatomical Knowledge [9.961742312147674]
We develop a novel Anatomy-Guided segmentation framework to exploit the auto-generated organ masks.
We extensively evaluate the proposed method on two CRC segmentation datasets.
arXiv Detail & Related papers (2023-10-07T03:22:06Z) - Towards Unifying Anatomy Segmentation: Automated Generation of a
Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z) - Region-based Contrastive Pretraining for Medical Image Retrieval with
Anatomic Query [56.54255735943497]
We introduce a novel Region-based contrastive pretraining for Medical Image Retrieval (RegionMIR)
arXiv Detail & Related papers (2023-05-09T16:46:33Z) - Med-Query: Steerable Parsing of 9-DoF Medical Anatomies with Query
Embedding [15.98677736544302]
We propose a steerable, robust, and efficient computing framework for detection, identification, and segmentation of anatomies in 3D medical data.
Considering complicated shapes, sizes and orientations of anatomies, we present the nine degrees-of-freedom (9-DoF) pose estimation solution in full 3D space.
We have validated the proposed method on three medical imaging parsing tasks of ribs, spine, and abdominal organs.
arXiv Detail & Related papers (2022-12-05T04:04:21Z) - ThoraX-PriorNet: A Novel Attention-Based Architecture Using Anatomical
Prior Probability Maps for Thoracic Disease Classification [2.0319363307774476]
It is known that different thoracic disease lesions are more likely to occur in specific anatomical regions compared to others.
This article aims to incorporate this disease and region-dependent prior probability distribution within a deep learning framework.
arXiv Detail & Related papers (2022-10-06T15:38:02Z) - Seeking Common Ground While Reserving Differences: Multiple Anatomy
Collaborative Framework for Undersampled MRI Reconstruction [49.16058553281751]
We present a novel deep MRI reconstruction framework with both anatomy-shared and anatomy-specific parameterized learners.
Experiments on brain, knee and cardiac MRI datasets demonstrate that three of these learners are able to enhance reconstruction performance via multiple anatomy collaborative learning.
arXiv Detail & Related papers (2022-06-15T08:19:07Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing lung region in CXR images and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns and, at inference, identify anomalies (unseen/modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z) - DeepStationing: Thoracic Lymph Node Station Parsing in CT Scans using
Anatomical Context Encoding and Key Organ Auto-Search [13.642187665173427]
Lymph node station (LNS) delineation from computed tomography (CT) scans is an indispensable step in radiation oncology workflow.
Previous works exploit anatomical priors to infer LNS based on predefined ad-hoc margins.
We formulate it as a deep spatial and contextual parsing problem via encoded anatomical organs.
arXiv Detail & Related papers (2021-09-20T02:32:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.