Weakly Supervised 3D Classification of Chest CT using Aggregated
Multi-Resolution Deep Segmentation Features
- URL: http://arxiv.org/abs/2011.00149v1
- Date: Sat, 31 Oct 2020 00:16:53 GMT
- Authors: Anindo Saha, Fakrul I. Tushar, Khrystyna Faryna, Vincent M.
D'Anniballe, Rui Hou, Maciej A. Mazurowski, Geoffrey D. Rubin, Joseph Y. Lo
- Abstract summary: Weakly supervised disease classification of CT imaging suffers from poor localization owing to case-level annotations.
We propose a medical classifier that leverages semantic structural concepts learned via multi-resolution segmentation feature maps.
- Score: 5.938730586521215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Weakly supervised disease classification of CT imaging suffers from poor
localization owing to case-level annotations, where even a positive scan can
hold hundreds to thousands of negative slices along multiple planes.
Furthermore, although deep learning segmentation and classification models
extract distinctly unique combinations of anatomical features from the same
target class(es), they are typically seen as two independent processes in a
computer-aided diagnosis (CAD) pipeline, with little to no feature reuse. In
this research, we propose a medical classifier that leverages the semantic
structural concepts learned via multi-resolution segmentation feature maps, to
guide weakly supervised 3D classification of chest CT volumes. Additionally, a
comparative analysis is drawn across two different types of feature aggregation
to explore the vast possibilities surrounding feature fusion. Using a dataset
of 1593 scans labeled on a case-level basis via a rule-based model, we train a
dual-stage convolutional neural network (CNN) to perform organ segmentation and
binary classification of four representative diseases (emphysema,
pneumonia/atelectasis, mass and nodules) in lungs. The baseline model, with
separate stages for segmentation and classification, results in an AUC of 0.791.
Using identical hyperparameters, the connected architecture using static and
dynamic feature aggregation improves performance to AUCs of 0.832 and 0.851,
respectively. This study advances the field in two key ways. First, case-level
report data is used to weakly supervise a 3D CT classifier of multiple,
simultaneous diseases for an organ. Second, segmentation and classification
models are connected with two different feature aggregation strategies to
enhance the classification performance.
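The abstract names static and dynamic feature aggregation but does not define them; a minimal sketch of what the two fusion styles could look like for pooled multi-resolution segmentation features (all shapes, gate values, and function names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-resolution segmentation feature maps for one CT volume,
# each already pooled to a fixed-length descriptor (channel counts are illustrative).
feature_maps = [rng.standard_normal((c,)) for c in (32, 64, 128)]

def static_aggregate(feats):
    """Static fusion: a fixed concatenation of all resolution levels."""
    return np.concatenate(feats)

def dynamic_aggregate(feats, gate_weights):
    """Dynamic fusion: each resolution level is scaled by a gate before
    concatenation. The gates here are placeholder constants; in training
    they would be produced by a small network and learned end to end."""
    gates = np.exp(gate_weights) / np.exp(gate_weights).sum()  # softmax
    return np.concatenate([g * f for g, f in zip(gates, feats)])

static_vec = static_aggregate(feature_maps)
dynamic_vec = dynamic_aggregate(feature_maps, gate_weights=np.array([0.2, 0.5, 0.3]))

assert static_vec.shape == (32 + 64 + 128,)
assert dynamic_vec.shape == static_vec.shape
```

Either fused vector would then feed the classification head; the difference is only whether the per-resolution weighting is fixed or learned.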
Related papers
- Multi-Modality Multi-Scale Cardiovascular Disease Subtypes Classification Using Raman Image and Medical History [2.9315342447802317]
We propose a multi-modality multi-scale model called M3S, which is a novel deep learning method with two core modules to address these issues.
First, we convert RS data to images at various resolutions via the Gramian angular field (GAF) to magnify subtle differences, and a two-branch structure is leveraged to obtain discriminative embeddings.
Second, a probability matrix and a weight matrix are used to enhance the classification capacity by combining the RS and medical history data.
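The Gramian angular field mentioned above has a standard closed form; a minimal sketch of the summation variant (the input series and image size are illustrative, and this is not necessarily the exact variant used in M3S):

```python
import numpy as np

def gramian_angular_field(x):
    """Summation GAF: rescale the series to [-1, 1], take phi = arccos(x),
    then build G[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

# Toy 1D "spectrum" of length 64 turned into a 64x64 image.
gaf = gramian_angular_field(np.sin(np.linspace(0, np.pi, 64)))
assert gaf.shape == (64, 64)
assert np.allclose(gaf, gaf.T)  # the summation GAF is symmetric
```

Rendering the same series at several lengths before this transform is one way to obtain the multi-resolution images the summary describes.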
arXiv Detail & Related papers (2023-04-18T22:09:16Z)
- CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection [36.08551407926805]
We propose the CLIP-Driven Universal Model, which incorporates text embedding learned from Contrastive Language-Image Pre-training to segmentation models.
The proposed model is developed from an assembly of 14 datasets, using a total of 3,410 CT scans for training and then evaluated on 6,162 external CT scans from 3 additional datasets.
arXiv Detail & Related papers (2023-01-02T18:07:44Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations, which do not capture multiple receptive-field sizes over the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- Improving Classification Model Performance on Chest X-Rays through Lung Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model whose backbone is a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps estimate intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- A persistent homology-based topological loss for CNN-based multi-class segmentation of CMR [5.898114915426535]
Multi-class segmentation of cardiac magnetic resonance (CMR) images seeks a separation of data into anatomical components with known structure and configuration.
Most popular CNN-based methods are optimised using pixel-wise loss functions, ignorant of the spatially extended features that characterise anatomy.
We extend these approaches to the task of multi-class segmentation by building an enriched topological description of all class labels and class label pairs.
arXiv Detail & Related papers (2021-07-27T09:21:38Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Classifying Breast Histopathology Images with a Ductal Instance-Oriented Pipeline [10.605775819074886]
The duct-level segmenter identifies each individual duct inside a microscopic image.
It then extracts tissue-level information from the identified ductal instances.
The proposed DIOP takes only a few seconds to run at inference time.
arXiv Detail & Related papers (2020-12-11T05:43:12Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
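The kernel-sharing idea in the last entry (reusing all convolutional parameters across CT and MRI while keeping normalization modality-specific) can be illustrated with a toy example; every shape, value, and function here is a simplified assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# One convolution kernel shared by both modalities.
shared_kernel = rng.standard_normal((3, 3))

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation (no padding)."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def modality_forward(image, kernel, mean, std):
    """Shared kernel, but each modality normalizes with its own statistics."""
    normalized = (image - mean) / std
    return conv2d_valid(normalized, kernel)

ct_scan = rng.standard_normal((16, 16)) * 400.0        # Hounsfield-like range
mri_scan = rng.standard_normal((16, 16)) * 50.0 + 100  # arbitrary MRI-like range

ct_out = modality_forward(ct_scan, shared_kernel, ct_scan.mean(), ct_scan.std())
mri_out = modality_forward(mri_scan, shared_kernel, mri_scan.mean(), mri_scan.std())
assert ct_out.shape == mri_out.shape == (14, 14)
```

Per-modality normalization is what lets a single set of kernels process intensity distributions as different as CT and MRI.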
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.