Convolutional Neural Networks for Segmentation of Malignant Pleural
Mesothelioma: Analysis of Probability Map Thresholds (CALGB 30901, Alliance)
- URL: http://arxiv.org/abs/2312.00223v1
- Date: Thu, 30 Nov 2023 22:07:07 GMT
- Title: Convolutional Neural Networks for Segmentation of Malignant Pleural
Mesothelioma: Analysis of Probability Map Thresholds (CALGB 30901, Alliance)
- Authors: Mena Shenouda, Eyjólfur Gudmundsson, Feng Li, Christopher M. Straus,
Hedy L. Kindler, Arkadiusz Z. Dudek, Thomas Stinchcombe, Xiaofei Wang, Adam
Starkey, Samuel G. Armato III
- Abstract summary: Automated segmentation methods using deep learning can be employed to acquire tumor volume.
The purpose of this study was to evaluate the impact of probability map threshold on MPM tumor delineations generated using a convolutional neural network (CNN).
CNN annotations consistently yielded smaller tumor volumes than radiologist contours.
- Score: 3.5543234184232566
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Malignant pleural mesothelioma (MPM) is the most common form of mesothelioma.
To assess response to treatment, tumor measurements are acquired and evaluated
based on a patient's longitudinal computed tomography (CT) scans. Tumor volume,
however, is the more accurate metric for assessing tumor burden and response.
Automated segmentation methods using deep learning can be employed to acquire
volume, which otherwise is a tedious task performed manually. The deep
learning-based tumor volume and contours can then be compared with a standard
reference to assess the robustness of the automated segmentations. The purpose
of this study was to evaluate the impact of probability map threshold on MPM
tumor delineations generated using a convolutional neural network (CNN).
Eighty-eight CT scans from 21 MPM patients were segmented by a VGG16/U-Net CNN.
A radiologist modified the contours generated at a 0.5 probability threshold.
Percent difference of tumor volume and overlap using the Dice Similarity
Coefficient (DSC) were compared between the standard reference provided by the
radiologist and CNN outputs for thresholds ranging from 0.001 to 0.9. CNN
annotations consistently yielded smaller tumor volumes than radiologist
contours. Reducing the probability threshold from 0.5 to 0.1 decreased the
absolute percent volume difference, on average, from 43.96% to 24.18%. Median
and mean DSC ranged from 0.58 to 0.60, with a peak at a threshold of 0.5; no
distinct threshold was found for percent volume difference. No single output
threshold in the CNN probability maps was optimal for both tumor volume and
DSC. This work underscores the need to assess tumor volume and spatial overlap
when evaluating CNN performance. While automated segmentations may yield tumor
volumes comparable to those of the reference standard, the spatial region
delineated by the CNN at a specific threshold is equally important.
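The threshold analysis above can be illustrated with a minimal sketch, assuming a NumPy probability volume; this is not the authors' code, and the array names, placeholder data, voxel volume, and threshold list are illustrative. It binarizes the CNN probability map at each threshold and scores the result against a reference mask with the two metrics used in the study: the Dice Similarity Coefficient and the absolute percent volume difference.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def abs_percent_volume_difference(pred: np.ndarray, ref: np.ndarray,
                                  voxel_volume_mm3: float) -> float:
    """Absolute percent difference in segmented volume relative to the reference."""
    pred_vol = pred.sum() * voxel_volume_mm3
    ref_vol = ref.sum() * voxel_volume_mm3
    return abs(pred_vol - ref_vol) / ref_vol * 100.0

# Placeholder data standing in for a CNN probability map and a
# radiologist-approved reference mask (both hypothetical).
prob_map = np.random.rand(32, 128, 128)   # CNN output in [0, 1]
ref_mask = prob_map > 0.7                 # stand-in reference segmentation

for threshold in (0.001, 0.1, 0.25, 0.5, 0.75, 0.9):
    pred_mask = prob_map >= threshold     # binarize the probability map
    dsc = dice_coefficient(pred_mask, ref_mask)
    pvd = abs_percent_volume_difference(pred_mask, ref_mask, voxel_volume_mm3=1.0)
    print(f"threshold={threshold:>5}: DSC={dsc:.3f}, |volume diff|={pvd:.1f}%")
```

Lowering the threshold admits more voxels and therefore grows the predicted volume, which is consistent with the abstract's observation that moving from 0.5 to 0.1 shrinks the volume discrepancy even though DSC peaks at 0.5.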
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p)
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Training and Comparison of nnU-Net and DeepMedic Methods for
Autosegmentation of Pediatric Brain Tumors [0.08519384144663283]
Two deep learning-based 3D segmentation models, DeepMedic and nnU-Net, were compared.
Pediatric-specific data trained nnU-Net model is superior to DeepMedic for whole tumor and subregion segmentation of pediatric brain tumors.
arXiv Detail & Related papers (2024-01-16T14:44:06Z) - Segmentation of glioblastomas in early post-operative multi-modal MRI
with deep neural networks [33.51490233427579]
Two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task.
The best performance achieved was a 61% Dice score, and the best classification performance was about 80% balanced accuracy.
The predicted segmentations can be used to accurately classify the patients into those with residual tumor and those with gross total resection.
arXiv Detail & Related papers (2023-04-18T10:14:45Z) - Multi-class Brain Tumor Segmentation using Graph Attention Network [3.3635982995145994]
This work introduces an efficient brain tumor segmentation model by exploiting advancements in MRI and graph neural networks (GNNs).
The model represents the volumetric MRI as a region adjacency graph (RAG) and learns to identify the type of tumors through a graph attention network (GAT).
arXiv Detail & Related papers (2023-02-11T04:30:40Z) - Investigating certain choices of CNN configurations for brain lesion
segmentation [5.148195106469231]
Deep learning models, in particular CNNs, have been a methodology of choice in many applications of medical image analysis including brain tumor segmentation.
We investigated the main design aspects of CNN models for the specific task of MRI-based brain tumor segmentation.
arXiv Detail & Related papers (2022-12-02T15:24:44Z) - Improving Deep Learning Models for Pediatric Low-Grade Glioma Tumors
Molecular Subtype Identification Using 3D Probability Distributions of Tumor
Location [0.0]
CNN models for pLGG subtype identification rely on tumor segmentation.
We propose to augment the CNN models using tumor location probability in MRI data.
We achieved statistically significant improvements by incorporating tumor location into the CNN models.
arXiv Detail & Related papers (2022-10-13T18:30:11Z) - CNN-based fully automatic wrist cartilage volume quantification in MR
Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z) - Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Multi-Scale Input Strategies for Medulloblastoma Tumor Classification
using Deep Transfer Learning [59.30734371401316]
Medulloblastoma is the most common malignant brain cancer among children.
CNN has shown promising results for MB subtype classification.
We study the impact of tile size and input strategy.
arXiv Detail & Related papers (2021-09-14T09:42:37Z) - Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale
Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)