MRAnnotator: A Multi-Anatomy Deep Learning Model for MRI Segmentation
- URL: http://arxiv.org/abs/2402.01031v1
- Date: Thu, 1 Feb 2024 21:43:27 GMT
- Title: MRAnnotator: A Multi-Anatomy Deep Learning Model for MRI Segmentation
- Authors: Alexander Zhou, Zelong Liu, Andrew Tieu, Nikhil Patel, Sean Sun,
Anthony Yang, Peter Choi, Valentin Fauveau, George Soultanidis, Mingqian
Huang, Amish Doshi, Zahi A. Fayad, Timothy Deyer, Xueyan Mei
- Abstract summary: Two datasets were curated and annotated for model development and evaluation.
The developed model achieves robust and generalizable segmentation of 49 anatomic structures on MR images.
- Score: 31.000474738216155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose To develop a deep learning model for multi-anatomy and many-class
segmentation of diverse anatomic structures on MR images.
Materials and Methods In this retrospective study, two datasets were curated
and annotated for model development and evaluation. An internal dataset of 1022
MRI sequences from various clinical sites within a health system and an
external dataset of 264 MRI sequences from an independent imaging center were
collected. In both datasets, 49 anatomic structures were annotated as the
ground truth. The internal dataset was divided into training, validation, and
test sets and used to train and evaluate an nnU-Net model. The external dataset
was used to evaluate nnU-Net model generalizability and performance in all
classes on independent imaging data. Dice scores were calculated to evaluate
model segmentation performance.
Results The model achieved an average Dice score of 0.801 on the internal
test set, and an average score of 0.814 on the complete external dataset across
49 classes.
Conclusion The developed model achieves robust and generalizable segmentation
of 49 anatomic structures on MR images. A future direction is the
incorporation of additional anatomic regions and structures into the datasets
and model.
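The evaluation above reports per-class Dice scores averaged over 49 structures. As a minimal sketch of the standard Dice similarity coefficient on binary masks (an illustration of the metric, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two 4x4 masks, 3 voxels each, overlapping on 2 voxels
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[0, 0:3] = True
truth[0, 1:4] = True
print(dice_score(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A multi-class average, as reported here, would compute this per structure label and take the mean across classes.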
Related papers
- A novel open-source ultrasound dataset with deep learning benchmarks for
spinal cord injury localization and anatomical segmentation [1.02101998415327]
We present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords.
We benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury.
We evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images.
arXiv Detail & Related papers (2024-09-24T20:22:59Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- MRSegmentator: Robust Multi-Modality Segmentation of 40 Classes in MRI and CT Sequences [4.000329151950926]
The model was trained on 1,200 manually annotated MRI scans from the UK Biobank, 221 in-house MRI scans, and 1,228 CT scans.
It showcased high accuracy in segmenting well-defined organs, achieving Dice Similarity Coefficient (DSC) scores of 0.97 for the right and left lungs, and 0.95 for the heart.
It also demonstrated robustness in organs like the liver (DSC: 0.96) and kidneys (DSC: 0.95 left, 0.95 right), which present more variability.
arXiv Detail & Related papers (2024-05-10T13:15:42Z)
- One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts [62.55349777609194]
We aim to build a model that can Segment Anything in radiology scans, driven by Text prompts, termed SAT.
We build up the largest and most comprehensive segmentation dataset for training, by collecting over 22K 3D medical image scans.
We have trained SAT-Nano (110M parameters) and SAT-Pro (447M parameters), demonstrating performance comparable to 72 specialist nnU-Nets trained on each dataset/subset.
arXiv Detail & Related papers (2023-12-28T18:16:00Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Zero-shot performance of the Segment Anything Model (SAM) in 2D medical imaging: A comprehensive evaluation and practical guidelines [0.13854111346209866]
Segment Anything Model (SAM) harnesses a massive training dataset to segment nearly any object.
Our findings reveal that SAM's zero-shot performance is not only comparable, but in certain cases, surpasses the current state-of-the-art.
We propose practical guidelines that require minimal interaction while consistently yielding robust outcomes.
arXiv Detail & Related papers (2023-04-28T22:07:24Z)
- TotalSegmentator: robust segmentation of 104 anatomical structures in CT images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z)
- Deep learning-based COVID-19 pneumonia classification using chest CT images: model generalizability [54.86482395312936]
Deep learning (DL) classification models were trained to identify COVID-19-positive patients on 3D computed tomography (CT) datasets from different countries.
We trained nine identical DL-based classification models by using combinations of the datasets with a 72% train, 8% validation, and 20% test data split.
Models trained on multiple datasets and evaluated on a test set drawn from one of their training datasets performed better.
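The 72% train, 8% validation, 20% test split described above can be sketched as a shuffled index partition (a generic illustration of the splitting scheme, not the authors' pipeline):

```python
import numpy as np

def split_indices(n: int, train: float = 0.72, val: float = 0.08, seed: int = 0):
    """Shuffle n sample indices and partition them into train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(round(n * train))
    n_val = int(round(n * val))
    # Remaining indices form the test set (here 20% of n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(100)
print(len(tr), len(va), len(te))  # 72 8 20
```

Fixing the random seed makes the partition reproducible across the nine identical model training runs.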
arXiv Detail & Related papers (2021-02-18T21:14:52Z)
- Chest x-ray automated triage: a semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
arXiv Detail & Related papers (2020-12-23T14:38:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.