Multiple resolution residual network for automatic thoracic
organs-at-risk segmentation from CT
- URL: http://arxiv.org/abs/2005.13690v2
- Date: Sun, 31 May 2020 22:50:43 GMT
- Title: Multiple resolution residual network for automatic thoracic
organs-at-risk segmentation from CT
- Authors: Hyemin Um, Jue Jiang, Maria Thor, Andreas Rimner, Leo Luo, Joseph O.
Deasy, and Harini Veeraraghavan
- Abstract summary: We implement and evaluate a multiple resolution residual network (MRRN) for multiple normal organs-at-risk (OAR) segmentation from computed tomography (CT) images.
Our approach simultaneously combines feature streams computed at multiple image resolutions and feature levels through residual connections.
We trained our approach using 206 thoracic CT scans of lung cancer patients with 35 scans held out for validation to segment the left and right lungs, heart, esophagus, and spinal cord.
- Score: 2.9023633922848586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We implemented and evaluated a multiple resolution residual network (MRRN)
for multiple normal organs-at-risk (OAR) segmentation from computed tomography
(CT) images for thoracic radiotherapy treatment (RT) planning. Our approach
simultaneously combines feature streams computed at multiple image resolutions
and feature levels through residual connections. The feature streams at each
level are updated as the images are passed through various feature levels. We
trained our approach using 206 thoracic CT scans of lung cancer patients with
35 scans held out for validation to segment the left and right lungs, heart,
esophagus, and spinal cord. This approach was tested on 60 CT scans from the
open-source AAPM Thoracic Auto-Segmentation Challenge dataset. Performance was
measured using the Dice Similarity Coefficient (DSC). Our approach outperformed
the best-performing method in the grand challenge for hard-to-segment
structures like the esophagus and achieved comparable results for all other
structures. Median DSC using our method was 0.97 (interquartile range [IQR]:
0.97-0.98) for the left and right lungs, 0.93 (IQR: 0.93-0.95) for the heart,
0.78 (IQR: 0.76-0.80) for the esophagus, and 0.88 (IQR: 0.86-0.89) for the
spinal cord.
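The cross-resolution residual fusion described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the shapes, average pooling, and nearest-neighbour upsampling are all assumptions made for clarity.

```python
import numpy as np

def downsample2x(x):
    """2x2 average pooling to form a half-resolution feature stream."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour upsampling back to full resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
full_res = rng.random((8, 8))      # full-resolution feature map
half_res = downsample2x(full_res)  # parallel half-resolution stream

# Residual connection across resolutions: the full-resolution stream is
# updated by adding back the upsampled coarse stream.
fused = full_res + upsample2x(half_res)
```

In the actual network this fusion is repeated at every feature level, so each resolution stream is refined as the image passes through the network.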
Related papers
- Multi-Layer Feature Fusion with Cross-Channel Attention-Based U-Net for Kidney Tumor Segmentation [0.0]
U-Net based deep learning techniques are emerging as a promising approach for automated medical image segmentation.
We present an improved U-Net based model for end-to-end automated semantic segmentation of CT scan images to identify renal tumors.
arXiv Detail & Related papers (2024-10-20T19:02:41Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
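The Dice similarity coefficient reported in this entry (and as the DSC figures in the main abstract above) is a standard overlap measure, 2|A∩B| / (|A| + |B|) over binary masks. A minimal reference implementation:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```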
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
The aim was to investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- A Deep Learning Based Workflow for Detection of Lung Nodules With Chest Radiograph [0.0]
We built a segmentation model to identify lung areas from CXRs, and sliced them into 16 patches.
These labeled patches were then used to fine-tune a deep neural network (DNN) model, classifying the patches as positive or negative.
arXiv Detail & Related papers (2021-12-19T16:19:46Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
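The underlying idea, using a marked region as a template whose convolution response is high wherever the image resembles it, can be caricatured as follows. This is a toy sketch under stated assumptions (a naive valid-mode cross-correlation and a tiny synthetic image), not the authors' method:

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Naive valid-mode cross-correlation of image with kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[1:3, 1:3] = 1.0                  # a bright "abnormal" region
marker_patch = image[1:3, 1:3].copy()  # user-marked patch becomes the kernel

# The response map peaks at the location most similar to the marked patch.
response = correlate2d_valid(image, marker_patch)
```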
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Rapid quantification of COVID-19 pneumonia burden from computed tomography with convolutional LSTM networks [1.0072268949897432]
We propose a new fully automated deep learning framework for rapid quantification and differentiation between lung lesions in COVID-19 pneumonia.
The performance of the method was evaluated on CT data sets from 197 patients with positive reverse transcription polymerase chain reaction test result for SARS-CoV-2.
arXiv Detail & Related papers (2021-03-31T22:09:14Z)
- Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation Using Physics-Based Data Augmentation [4.3971310109651665]
In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are only used for patient setup during radiotherapy.
Treatment planning is done once at the beginning of the treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OARs) structures.
If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment and for deriving biomarkers for treatment response.
arXiv Detail & Related papers (2021-03-09T19:51:44Z)
- Automated Identification of Thoracic Pathology from Chest Radiographs with Enhanced Training Pipeline [0.0]
We use the currently largest publicly available dataset, ChestX-ray14, of 112,120 chest radiographs of 30,805 patients.
Each image was annotated with either a 'NoFinding' class, or one or more of 14 thoracic pathology labels.
We encoded labels as binary vectors using k-hot encoding.
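K-hot (multi-hot) encoding simply marks each present label with a 1 in a fixed-length binary vector; 'NoFinding' maps to all zeros. A minimal sketch, with a shortened placeholder class list rather than the full set of 14 pathology labels:

```python
# Four placeholder classes for illustration; the real label space has 14.
CLASSES = ["Atelectasis", "Cardiomegaly", "Effusion", "Mass"]

def k_hot(findings, classes=CLASSES):
    """Encode a set of findings as a binary (k-hot) vector."""
    return [1 if c in findings else 0 for c in classes]

print(k_hot({"Cardiomegaly", "Mass"}))  # [0, 1, 0, 1]
print(k_hot(set()))                     # [0, 0, 0, 0]  ('NoFinding')
```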
arXiv Detail & Related papers (2020-06-11T20:43:09Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 in patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation [95.57532063232198]
Coronavirus disease 2019 (COVID-19) has caused a pandemic affecting over 200 countries.
To control the infection, identifying and separating the infected people is the most crucial step.
This paper develops a novel Joint Classification and Segmentation (JCS) system to perform real-time and explainable COVID-19 chest CT diagnosis.
arXiv Detail & Related papers (2020-04-15T12:30:40Z)
- Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States.
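Combined severity measures of this kind are typically ratios over segmentation masks. A generic illustration with made-up toy masks (the variable names and thresholds here are assumptions, not the paper's exact definitions):

```python
import numpy as np

# Toy binary masks on a 4x4 grid standing in for a segmented CT volume.
lung = np.zeros((4, 4), dtype=bool)
lung[:, :3] = True            # 12 lung voxels
lesion = np.zeros((4, 4), dtype=bool)
lesion[:2, :2] = True         # 4 lesion voxels
high_opacity = np.zeros((4, 4), dtype=bool)
high_opacity[0, :2] = True    # 2 high-opacity voxels

lesion &= lung  # only count lesions inside the lungs

# Extent of abnormalities: share of lung voxels that are lesion.
extent = lesion.sum() / lung.sum()                               # 4/12 ≈ 0.333
# Presence of high opacities: share of lesion voxels that are high-opacity.
opacity_share = (high_opacity & lesion).sum() / max(lesion.sum(), 1)  # 0.5
```

The same per-voxel counting would be applied lobe by lobe to obtain lobe-level involvement scores.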
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.