MSTT-199: MRI Dataset for Musculoskeletal Soft Tissue Tumor Segmentation
- URL: http://arxiv.org/abs/2409.03110v1
- Date: Wed, 4 Sep 2024 22:33:17 GMT
- Title: MSTT-199: MRI Dataset for Musculoskeletal Soft Tissue Tumor Segmentation
- Authors: Tahsin Reasat, Stephen Chenard, Akhil Rekulapelli, Nicholas Chadwick, Joanna Shechtel, Katherine van Schaik, David S. Smith, Joshua Lawrenz,
- Abstract summary: We describe the collection of an MR imaging dataset of 199 musculoskeletal soft tissue tumors from 199 patients.
We trained segmentation models on this dataset and then benchmarked them on a publicly available dataset.
Our model achieved a state-of-the-art Dice score of 0.79 out of the box, without any fine-tuning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate musculoskeletal soft tissue tumor segmentation is vital for assessing tumor size, location, diagnosis, and response to treatment, thereby influencing patient outcomes. However, segmentation of these tumors requires clinical expertise, and an automated segmentation model would save valuable time for both clinicians and patients. Training an automatic model requires a large dataset of annotated images. In this work, we describe the collection of an MR imaging dataset of 199 musculoskeletal soft tissue tumors from 199 patients. We trained segmentation models on this dataset and then benchmarked them on a publicly available dataset. Our model achieved a state-of-the-art Dice score of 0.79 out of the box without any fine-tuning, which shows the diversity and utility of our curated dataset. We analyzed the model predictions and found that performance suffered on fibrous and vascular tumors due to their diverse anatomical locations, sizes, and intensity heterogeneity. The code and models are available at the following GitHub repository: https://github.com/Reasat/mstt
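For reference, the Dice score reported above measures volumetric overlap between a predicted mask and the ground-truth annotation, 2|A∩B| / (|A|+|B|). Below is a minimal NumPy sketch of the metric; it is an illustrative helper, not the evaluation code from the linked repository, which may differ in details such as per-class or per-volume averaging.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks.

    Illustrative sketch only; the repository's own evaluation code
    may handle multi-class labels or averaging differently.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example with two overlapping 3D masks
if __name__ == "__main__":
    pred = np.zeros((4, 4, 4), dtype=bool)
    target = np.zeros((4, 4, 4), dtype=bool)
    pred[1:3, 1:3, 1:3] = True      # 8 predicted voxels
    target[1:4, 1:3, 1:3] = True    # 12 ground-truth voxels
    print(f"Dice = {dice_score(pred, target):.3f}")  # 2*8 / (8+12) = 0.800
```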
Related papers
- ScaleMAI: Accelerating the Development of Trusted Datasets and AI Models [46.80682547774335]
We propose ScaleMAI, an agent of AI-integrated data curation and annotation.
First, ScaleMAI creates a dataset of 25,362 CT scans, including per-voxel annotations for benign/malignant tumors and 24 anatomical structures.
Second, through progressive human-in-the-loop iterations, ScaleMAI provides Flagship AI Model that can approach the proficiency of expert annotators in detecting pancreatic tumors.
arXiv Detail & Related papers (2025-01-06T22:12:00Z) - Enhanced segmentation of femoral bone metastasis in CT scans of patients using synthetic data generation with 3D diffusion models [0.06700983301090582]
We propose an automated data pipeline using 3D Denoising Diffusion Probabilistic Models (DDPM) to generalize to new images.
We created 5675 new volumes, then trained 3D U-Net segmentation models on real and synthetic data to compare segmentation performance.
arXiv Detail & Related papers (2024-09-17T09:21:19Z) - Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collected and annotated the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR)
arXiv Detail & Related papers (2024-08-19T15:04:42Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001, and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.76736949127792]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had lesion-wise median Dice similarity coefficients (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - Investigating certain choices of CNN configurations for brain lesion
segmentation [5.148195106469231]
Deep learning models, in particular CNNs, have been a methodology of choice in many applications of medical image analysis including brain tumor segmentation.
We investigated the main design aspects of CNN models for the specific task of MRI-based brain tumor segmentation.
arXiv Detail & Related papers (2022-12-02T15:24:44Z) - Integrative Imaging Informatics for Cancer Research: Workflow Automation
for Neuro-oncology (I3CR-WANO) [0.12175619840081271]
We propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data.
Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, and iii) delineates tumor tissue subtypes.
It is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists.
arXiv Detail & Related papers (2022-10-06T18:23:42Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - Weakly-supervised Biomechanically-constrained CT/MRI Registration of the
Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation in CT gives more accurate results than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z) - A Novel Mask R-CNN Model to Segment Heterogeneous Brain Tumors through
Image Subtraction [0.0]
We propose applying a technique performed by radiologists, image subtraction, to machine learning models to produce better segmentations.
Using Mask R-CNN, with its ResNet backbone pre-trained on the RSNA pneumonia detection challenge dataset, we train a model on the BraTS 2020 Brain Tumor dataset.
We assess how well image subtraction works by comparing against models trained without it, using the Dice coefficient (F1 score), recall, and precision on the untouched test set.
arXiv Detail & Related papers (2022-04-04T01:45:11Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for
Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)