Automated segmentation of 3-D body composition on computed tomography
- URL: http://arxiv.org/abs/2112.08968v1
- Date: Thu, 16 Dec 2021 15:38:27 GMT
- Title: Automated segmentation of 3-D body composition on computed tomography
- Authors: Lucy Pu, Syed F. Ashraf, Naciye S Gezer, Iclal Ocak, Rajeev Dhupar
- Abstract summary: Five different body compositions were manually annotated (VAT, SAT, IMAT, SM, and bone).
A ten-fold cross-validation method was used to develop and validate the performance of several convolutional neural networks (CNNs).
Among the three CNN models, UNet demonstrated the best overall performance in jointly segmenting the five body compositions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: To develop and validate a computer tool for automatic and
simultaneous segmentation of body composition depicted on computed tomography
(CT) scans for the following tissues: visceral adipose (VAT), subcutaneous
adipose (SAT), intermuscular adipose (IMAT), skeletal muscle (SM), and bone.
Approach: A cohort of 100 CT scans acquired from The Cancer Imaging Archive
(TCIA) was used - 50 whole-body positron emission tomography (PET)-CTs, 25
chest, and 25 abdominal. Five different body compositions were manually
annotated (VAT, SAT, IMAT, SM, and bone). A training-while-annotating strategy
was used for efficiency. The UNet model was trained using the already annotated
cases. Then, this model was used to enable semi-automatic annotation for the
remaining cases. The 10-fold cross-validation method was used to develop and
validate the performance of several convolutional neural networks (CNNs),
including UNet, Recurrent Residual UNet (R2Unet), and UNet++. A 3-D patch
sampling operation was used when training the CNN models. The separately trained
CNN models were also tested to determine whether they could outperform joint
segmentation of the five body compositions. A paired-samples t-test was used to
test for statistical significance.
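As a rough illustration of the 3-D patch sampling step, the sketch below draws random sub-volumes from a CT volume and its label mask. The (64, 64, 64) patch size, array shapes, and the `sample_patches` helper are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of random 3-D patch sampling from a CT volume and its label
# mask. Patch size and sample counts are illustrative assumptions only.
import numpy as np

def sample_patches(volume, labels, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Draw random 3-D patches from a (D, H, W) volume and matching label mask."""
    rng = rng or np.random.default_rng()
    pd, ph, pw = patch_size
    d, h, w = volume.shape
    patches, patch_labels = [], []
    for _ in range(n_patches):
        z = rng.integers(0, d - pd + 1)
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
        patch_labels.append(labels[z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches), np.stack(patch_labels)

# Synthetic stand-ins for a resampled CT scan and its 5-tissue annotation.
vol = np.random.rand(128, 256, 256).astype(np.float32)
msk = np.random.randint(0, 6, size=vol.shape, dtype=np.uint8)  # 0 = background
x_batch, y_batch = sample_patches(vol, msk)
print(x_batch.shape, y_batch.shape)  # (8, 64, 64, 64) for both
```

In practice, the 10-fold split over the 100 scans could be formed at the scan level (for example with scikit-learn's `KFold`) before any patches are drawn, so that patches from one scan never appear in both the training and validation folds.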
Results: Among the three CNN models, UNet demonstrated the best overall
performance in jointly segmenting the five body compositions with a Dice
coefficient of 0.840 ± 0.091, 0.908 ± 0.067, 0.603 ± 0.084, 0.889 ± 0.027, and
0.884 ± 0.031, and a Jaccard index of 0.734 ± 0.119, 0.837 ± 0.096,
0.437 ± 0.082, 0.800 ± 0.042, and 0.793 ± 0.049, respectively, for VAT, SAT,
IMAT, SM, and bone.
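For reference, the Dice coefficient and Jaccard index reported above can be computed per tissue from a predicted and a reference label map, as in the hedged sketch below; the integer label encoding (1=VAT through 5=bone) and the synthetic volumes are assumptions for illustration only.

```python
# Hedged sketch of per-tissue Dice coefficient and Jaccard index computation.
# The label encoding (1=VAT, 2=SAT, 3=IMAT, 4=SM, 5=bone) is assumed.
import numpy as np

def dice_jaccard(pred, ref, label):
    """Dice = 2|A∩B|/(|A|+|B|); Jaccard = |A∩B|/|A∪B| for a single label."""
    a = pred == label
    b = ref == label
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum()) if (a.sum() + b.sum()) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

# Synthetic prediction/reference volumes; real inputs would be CNN outputs
# and manual annotations on the CT grid.
pred = np.random.randint(0, 6, size=(64, 64, 64))
ref = np.random.randint(0, 6, size=(64, 64, 64))
for lbl, name in enumerate(["VAT", "SAT", "IMAT", "SM", "bone"], start=1):
    d, j = dice_jaccard(pred, ref, lbl)
    print(f"{name}: Dice={d:.3f}, Jaccard={j:.3f}")
```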
Conclusion: There were no significant differences among the CNN models in
segmenting body composition, but jointly segmenting body compositions achieved
a better performance than segmenting them separately.
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- MedSegMamba: 3D CNN-Mamba Hybrid Architecture for Brain Segmentation [15.514511820130474]
We develop a 3D patch-based hybrid CNN-Mamba model for subcortical brain segmentation.
Our model's performance was validated against several benchmarks.
arXiv Detail & Related papers (2024-09-12T02:19:19Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Brain Tumor Radiogenomic Classification [1.8276368987462532]
The RSNA-MICCAI brain tumor radiogenomic classification challenge aimed to predict MGMT biomarker status in glioblastoma through binary classification.
The dataset is split into three main cohorts: a training set and a validation set, which were used during training, and a test set, which was used only during final evaluation.
arXiv Detail & Related papers (2024-01-11T10:30:09Z)
- VertDetect: Fully End-to-End 3D Vertebral Instance Segmentation Model [0.0]
This paper proposes VertDetect, a fully automated end-to-end 3D vertebral instance segmentation Convolutional Neural Network (CNN) model.
The utilization of a shared CNN backbone provides the detection and segmentation branches of the network with feature maps containing both spinal and vertebral level information.
This model achieved state-of-the-art performance for an end-to-end architecture, whose design facilitates the extraction of features that can be subsequently used for downstream tasks.
arXiv Detail & Related papers (2023-11-16T15:29:21Z)
- Attention and Pooling based Sigmoid Colon Segmentation in 3D CT images [11.861208424384046]
Segmentation of the sigmoid colon is a crucial aspect of treating diverticulitis.
This research presents a novel deep learning architecture for segmenting the sigmoid colon from Computed Tomography (CT) images.
arXiv Detail & Related papers (2023-09-25T04:52:46Z)
- TotalSegmentator: robust segmentation of 104 anatomical structures in CT images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z)
- Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
A deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged between 49 and 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fractures with the largest and richest dataset to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- A Deep Learning-Based Approach to Extracting Periosteal and Endosteal Contours of Proximal Femur in Quantitative CT Images [25.76523855274612]
A three-dimensional (3D) end-to-end fully convolutional neural network was developed for our segmentation task.
Two models with the same network structures were trained and they achieved a dice similarity coefficient (DSC) of 97.87% and 96.49% for the periosteal and endosteal contours, respectively.
The approach demonstrated strong potential for clinical use, including hip fracture risk prediction and finite element analysis.
arXiv Detail & Related papers (2021-02-03T10:23:41Z)
- MSED: a multi-modal sleep event detection model for clinical sleep analysis [62.997667081978825]
We designed a single deep neural network architecture to jointly detect sleep events in a polysomnogram.
The performance of the model was quantified by F1, precision, and recall scores, and by correlating index values to clinical values.
arXiv Detail & Related papers (2021-01-07T13:08:44Z)