Developing a Machine Learning-Based Clinical Decision Support Tool for
Uterine Tumor Imaging
- URL: http://arxiv.org/abs/2308.10372v1
- Date: Sun, 20 Aug 2023 21:46:05 GMT
- Title: Developing a Machine Learning-Based Clinical Decision Support Tool for
Uterine Tumor Imaging
- Authors: Darryl E. Wright, Adriana V. Gregory, Deema Anaam, Sepideh Yadollahi,
Sumana Ramanathan, Kafayat A. Oyemade, Reem Alsibai, Heather Holmes, Harrison
Gottlich, Cherie-Akilah G. Browne, Sarah L. Cohen Rassier, Isabel Green,
Elizabeth A. Stewart, Hiroaki Takahashi, Bohyun Kim, Shannon
Laughlin-Tommaso, Timothy L. Kline
- Abstract summary: Uterine leiomyosarcoma (LMS) is a rare but aggressive malignancy.
It is difficult to differentiate LMS from degenerated leiomyoma (LM), a prevalent but benign condition.
We curated a data set of 115 axial T2-weighted MRI images from 110 patients with UTs that included five different tumor types.
We applied nnU-Net and explored the effect of training set size on performance by randomly generating subsets with 25, 45, 65 and 85 training set images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uterine leiomyosarcoma (LMS) is a rare but aggressive malignancy. On imaging,
it is difficult to differentiate LMS from, for example, degenerated leiomyoma
(LM), a prevalent but benign condition. We curated a data set of 115 axial
T2-weighted MRI images from 110 patients (mean [range] age=45 [17-81] years)
with UTs that included five different tumor types. These data were randomly
split stratifying on tumor volume into training (n=85) and test sets (n=30). An
independent second reader (reader 2) provided manual segmentations for all test
set images. To automate segmentation, we applied nnU-Net and explored the
effect of training set size on performance by randomly generating subsets with
25, 45, 65 and 85 training set images. We evaluated the ability of radiomic
features to distinguish between types of UT individually and when combined
through feature selection and machine learning. Using the entire training set
the mean [95% CI] fibroid DSC was measured as 0.87 [0.59-1.00] and the
agreement between the two readers was 0.89 [0.77-1.0] on the test set. When
classifying degenerated LM from LMS, we achieve a test set F1-score of 0.80.
Classifying UTs based on radiomic features, we identify classifiers achieving
F1-scores of 0.53 [0.45, 0.61] and 0.80 [0.80, 0.80] on the test set for the
benign versus malignant and degenerated LM versus LMS tasks. We show that it
is possible to develop an automated method for 3D segmentation of the uterus
and UT that is close to human-level performance with fewer than 150 annotated
images. For distinguishing UT types, while we train models that merit further
investigation with additional data, reliable automatic differentiation of UTs
remains a challenge.
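The segmentation and agreement numbers above (e.g. a mean [95% CI] fibroid DSC of 0.87 [0.59-1.00]) rest on two standard computations: the Dice similarity coefficient between a predicted and a reference mask, and a confidence interval over per-case scores. A minimal sketch of both is below; this is an illustration, not the authors' code, and it assumes binary masks stored as NumPy arrays and a percentile bootstrap for the interval (the paper does not state how its CIs were computed):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / denom if denom else 1.0

def bootstrap_ci(scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-case scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(means, alpha / 2), np.quantile(means, 1 - alpha / 2)
```

The same `dice` function can score either a model against a reader (segmentation performance) or reader 1 against reader 2 (inter-reader agreement), which is how the 0.87 and 0.89 figures above are directly comparable.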
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of chromosome arms 1p/19q is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Quantifying uncertainty in lung cancer segmentation with foundation models applied to mixed-domain datasets [6.712251433139412]
Medical image foundation models have shown the ability to segment organs and tumors with minimal fine-tuning.
These models are typically evaluated on task-specific in-distribution (ID) datasets.
We introduced a comprehensive set of computationally fast metrics to evaluate the performance of multiple foundation models trained with self-supervised learning (SSL).
SMIT produced the highest F1-score (LRAD: 0.60, 5Rater: 0.64) and the lowest entropy (LRAD: 0.06, 5Rater: 0.12), indicating a higher tumor detection rate and more confident segmentations.
arXiv Detail & Related papers (2024-03-19T19:36:48Z) - Training and Comparison of nnU-Net and DeepMedic Methods for
Autosegmentation of Pediatric Brain Tumors [0.08519384144663283]
Two deep learning-based 3D segmentation models, DeepMedic and nnU-Net, were compared.
The nnU-Net model trained on pediatric-specific data is superior to DeepMedic for whole-tumor and subregion segmentation of pediatric brain tumors.
arXiv Detail & Related papers (2024-01-16T14:44:06Z) - Self-supervised contrastive learning of echocardiogram videos enables
label-efficient cardiac disease diagnosis [48.64462717254158]
We developed a self-supervised contrastive learning approach, EchoCLR, tailored to echocardiogram videos.
When fine-tuned on small portions of labeled data, EchoCLR pretraining significantly improved classification performance for left ventricular hypertrophy (LVH) and aortic stenosis (AS).
EchoCLR is unique in its ability to learn representations of medical videos and demonstrates that SSL can enable label-efficient disease classification from small, labeled datasets.
arXiv Detail & Related papers (2022-07-23T19:17:26Z) - Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Deep Learning for fully automatic detection, segmentation, and Gleason
Grade estimation of prostate cancer in multiparametric Magnetic Resonance
Images [0.731365367571807]
This paper proposes a fully automatic system based on Deep Learning that takes a prostate mpMRI from a PCa-suspect patient.
It locates PCa lesions, segments them, and predicts their most likely Gleason grade group (GGG).
The code for the ProstateX-trained system has been made openly available at https://github.com/OscarPellicer/prostate_lesion_detection.
arXiv Detail & Related papers (2021-03-23T16:08:43Z) - Comparison of different CNNs for breast tumor classification from
ultrasound images [12.98780709853981]
Classifying benign and malignant tumors from ultrasound (US) imaging is a crucial but challenging task.
We compared different Convolutional Neural Networks (CNNs) and transfer learning methods for the task of automated breast tumor classification.
The best performance was obtained by fine-tuning VGG-16, with an accuracy of 0.919 and an AUC of 0.934.
arXiv Detail & Related papers (2020-12-28T22:54:08Z) - Automatic classification of multiple catheters in neonatal radiographs
with deep learning [2.256008196530956]
We develop and evaluate a deep learning algorithm to classify multiple catheters on neonatal chest and abdominal radiographs.
A convolutional neural network (CNN) was trained using a dataset of 777 neonatal chest and abdominal radiographs.
arXiv Detail & Related papers (2020-11-14T21:27:21Z) - Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net
neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff 95% of 20.4, 6.7 and 19.5mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.