Deep Learning Segmentation of Ascites on Abdominal CT Scans for Automatic Volume Quantification
- URL: http://arxiv.org/abs/2406.15979v1
- Date: Sun, 23 Jun 2024 01:32:53 GMT
- Title: Deep Learning Segmentation of Ascites on Abdominal CT Scans for Automatic Volume Quantification
- Authors: Benjamin Hou, Sung-Won Lee, Jung-Min Lee, Christopher Koh, Jing Xiao, Perry J. Pickhardt, Ronald M. Summers
- Abstract summary: This retrospective study included contrast-enhanced and non-contrast abdominal-pelvic CT scans of patients with cirrhotic ascites and patients with ovarian cancer.
The model was trained on The Cancer Genome Atlas Ovarian Cancer dataset (mean age, 60 years +/- 11 [s.d.]; 143 female).
Its performance was measured by the Dice coefficient, standard deviations, and 95% confidence intervals, focusing on ascites volume in the peritoneal cavity.
- Score: 12.25110399510034
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Purpose: To evaluate the performance of an automated deep learning method in detecting ascites and subsequently quantifying its volume in patients with liver cirrhosis and ovarian cancer. Materials and Methods: This retrospective study included contrast-enhanced and non-contrast abdominal-pelvic CT scans of patients with cirrhotic ascites and patients with ovarian cancer from two institutions, National Institutes of Health (NIH) and University of Wisconsin (UofW). The model, trained on The Cancer Genome Atlas Ovarian Cancer dataset (mean age, 60 years +/- 11 [s.d.]; 143 female), was tested on two internal (NIH-LC and NIH-OV) and one external dataset (UofW-LC). Its performance was measured by the Dice coefficient, standard deviations, and 95% confidence intervals, focusing on ascites volume in the peritoneal cavity. Results: On NIH-LC (25 patients; mean age, 59 years +/- 14 [s.d.]; 14 male) and NIH-OV (166 patients; mean age, 65 years +/- 9 [s.d.]; all female), the model achieved Dice scores of 0.855 +/- 0.061 (CI: 0.831-0.878) and 0.826 +/- 0.153 (CI: 0.764-0.887), with median volume estimation errors of 19.6% (IQR: 13.2-29.0) and 5.3% (IQR: 2.4-9.7) respectively. On UofW-LC (124 patients; mean age, 46 years +/- 12 [s.d.]; 73 female), the model had a Dice score of 0.830 +/- 0.107 (CI: 0.798-0.863) and median volume estimation error of 9.7% (IQR: 4.5-15.1). The model showed strong agreement with expert assessments, with r^2 values of 0.79, 0.98, and 0.97 across the test sets. Conclusion: The proposed deep learning method performed well in segmenting and quantifying the volume of ascites in concordance with expert radiologist assessments.
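The two headline metrics in the abstract, the Dice coefficient and the percent volume estimation error, can be sketched in a few lines of NumPy. This is an illustrative re-implementation under stated assumptions (binary voxel masks and a known per-voxel volume), not the authors' code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def volume_error_pct(pred: np.ndarray, truth: np.ndarray,
                     voxel_volume_ml: float) -> float:
    """Percent error of predicted ascites volume vs. the reference volume."""
    v_pred = pred.astype(bool).sum() * voxel_volume_ml
    v_true = truth.astype(bool).sum() * voxel_volume_ml
    return abs(v_pred - v_true) / v_true * 100.0

# Toy 3D masks: the reference is a 4x4x4 block; the prediction misses one slice.
truth = np.zeros((8, 8, 8), dtype=np.uint8)
truth[2:6, 2:6, 2:6] = 1
pred = np.zeros_like(truth)
pred[2:6, 2:6, 2:5] = 1
print(round(dice_coefficient(pred, truth), 3))        # 0.857
print(round(volume_error_pct(pred, truth, 0.001), 1)) # 25.0
```

In practice the per-voxel volume comes from the CT header (pixel spacing times slice thickness); the 0.001 mL here is just a placeholder value.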
Related papers
- Explainable Admission-Level Predictive Modeling for Prolonged Hospital Stay in Elderly Populations: Challenges in Low- and Middle-Income Countries [65.4286079244589]
Prolonged length of stay (pLoS) is a significant factor associated with the risk of adverse in-hospital events.
We develop and explain a predictive model for pLoS using admission-level patient and hospital administrative data.
arXiv Detail & Related papers (2026-01-07T23:35:24Z)
- Curriculum Learning with Synthetic Data for Enhanced Pulmonary Nodule Detection in Chest Radiographs [0.0]
This study evaluates whether integrating curriculum learning with synthetic augmentation can enhance the detection of difficult pulmonary nodules.
A Faster R-CNN with a Feature Pyramid Network (FPN) backbone was trained on a hybrid dataset.
arXiv Detail & Related papers (2025-10-09T02:06:13Z)
- Multi-Centre Validation of a Deep Learning Model for Scoliosis Assessment [0.0]
We conducted a retrospective, multi-centre evaluation of a fully automated deep learning software (Carebot AI Bones, Spine Measurement functionality; Carebot s.r.o.) on 103 standing anteroposterior whole-spine radiographs collected from ten hospitals.
Two musculoskeletal radiologists independently measured each study and served as reference readers.
arXiv Detail & Related papers (2025-07-18T17:21:53Z)
- Deep learning-based auto-contouring of organs/structures-at-risk for pediatric upper abdominal radiotherapy [0.0]
The aim was to develop a CT-based multi-organ segmentation model for delineating organs-at-risk (OARs) in pediatric upper abdominal tumors.
Performance was assessed with the Dice Similarity Coefficient (DSC), 95% Hausdorff Distance (HD95), and mean surface distance (MSD).
Model-PMC-UMCU achieved mean DSC values above 0.95 for five of nine OARs, while spleen and heart ranged between 0.90 and 0.95.
The stomach-bowel and pancreas exhibited DSC values below 0.90.
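The boundary metrics used in this entry, HD95 and mean surface distance (MSD), can be sketched with SciPy's morphology and distance-transform routines. This is an illustrative implementation under stated assumptions (binary NumPy masks, a given voxel spacing), not the paper's code:

```python
import numpy as np
from scipy import ndimage

def _surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from each surface voxel of mask `a` to the surface of mask `b`."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)  # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)
    # Euclidean distance map whose zeros lie on b's surface.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between two binary masks."""
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return float(np.percentile(d, 95))

def msd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Mean symmetric surface distance between two binary masks."""
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return float(d.mean())
```

Passing the CT voxel spacing via `sampling` makes both metrics come out in millimetres rather than voxels.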
arXiv Detail & Related papers (2024-11-01T13:54:31Z)
- Deep Radiomics Detection of Clinically Significant Prostate Cancer on Multicenter MRI: Initial Comparison to PI-RADS Assessment [0.0]
This study analyzed biparametric (T2W and DW) prostate MRI sequences of 615 patients (mean age, 63.1 +/- 7 years) from four datasets acquired between 2010 and 2020.
Deep radiomics machine learning model achieved comparable performance to PI-RADS assessment in csPCa detection at the patient-level but not at the lesion-level.
arXiv Detail & Related papers (2024-10-21T17:41:58Z)
- Incorporating Anatomical Awareness for Enhanced Generalizability and Progression Prediction in Deep Learning-Based Radiographic Sacroiliitis Detection [0.8248058061511542]
The aim of this study was to examine whether incorporating anatomical awareness into a deep learning model can improve generalizability and enable prediction of disease progression.
The performance of the models was compared using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity.
arXiv Detail & Related papers (2024-05-12T20:02:25Z)
- Detection of subclinical atherosclerosis by image-based deep learning on chest x-ray [86.38767955626179]
A deep-learning algorithm to predict the coronary artery calcium (CAC) score was developed on 460 chest x-rays.
The diagnostic accuracy of the AICAC model assessed by the area under the curve (AUC) was the primary outcome.
arXiv Detail & Related papers (2024-03-27T16:56:14Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
The aim was to investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- DeepCOVID-Fuse: A Multi-modality Deep Learning Model Fusing Chest X-Radiographs and Clinical Variables to Predict COVID-19 Risk Levels [8.593516170110203]
DeepCOVID-Fuse is a deep learning fusion model to predict risk levels in coronavirus patients.
The accuracy of DeepCOVID-Fuse trained on CXRs and clinical variables is 0.658, with an AUC of 0.842.
arXiv Detail & Related papers (2023-01-20T20:54:25Z)
- A Generalizable Artificial Intelligence Model for COVID-19 Classification Task Using Chest X-ray Radiographs: Evaluated Over Four Clinical Datasets with 15,097 Patients [6.209420804714487]
The generalizability of the trained model was retrospectively evaluated using four different real-world clinical datasets.
The AI model trained using a single-source clinical dataset achieved an AUC of 0.82 when applied to the internal temporal test set.
An AUC of 0.79 was achieved when applied to a multi-institutional COVID-19 dataset collected by the Medical Imaging and Data Resource Center.
arXiv Detail & Related papers (2022-10-04T04:12:13Z)
- Deep learning-based COVID-19 pneumonia classification using chest CT images: model generalizability [54.86482395312936]
Deep learning (DL) classification models were trained to identify COVID-19-positive patients on 3D computed tomography (CT) datasets from different countries.
We trained nine identical DL-based classification models by using combinations of the datasets with a 72% train, 8% validation, and 20% test data split.
Models trained on multiple datasets and evaluated on a test set drawn from one of their training datasets performed better.
arXiv Detail & Related papers (2021-02-18T21:14:52Z)
- Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
- Severity Assessment of Coronavirus Disease 2019 (COVID-19) Using Quantitative Features from Chest CT Images [54.919022945740515]
The aim of this study is to realize automatic severity assessment (non-severe or severe) of COVID-19 based on chest CT images.
A random forest (RF) model is trained to assess the severity (non-severe or severe) based on quantitative features.
Several quantitative features, which have the potential to reflect the severity of COVID-19, were revealed.
arXiv Detail & Related papers (2020-03-26T15:49:32Z)
- Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.