A novel open-source ultrasound dataset with deep learning benchmarks for
spinal cord injury localization and anatomical segmentation
- URL: http://arxiv.org/abs/2409.16441v1
- Date: Tue, 24 Sep 2024 20:22:59 GMT
- Title: A novel open-source ultrasound dataset with deep learning benchmarks for
spinal cord injury localization and anatomical segmentation
- Authors: Avisha Kumar, Kunal Kotkar, Kelly Jiang, Meghana Bhimreddy, Daniel
Davidar, Carly Weber-Levine, Siddharth Krishnan, Max J. Kerensky, Ruixing
Liang, Kelley Kempski Leadingham, Denis Routkevitch, Andrew M. Hersh,
Kimberly Ashayeri, Betty Tyler, Ian Suk, Jennifer Son, Nicholas Theodore,
Nitish Thakor, and Amir Manbachi
- Abstract summary: We present an ultrasound dataset of 10,223 Brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords.
We benchmark the performance metrics of several state-of-the-art object detection algorithms to localize the site of injury.
We evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images.
- Score: 1.02101998415327
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While deep learning has catalyzed breakthroughs across numerous domains, its
broader adoption in clinical settings is inhibited by the costly and
time-intensive nature of data acquisition and annotation. To further facilitate
medical machine learning, we present an ultrasound dataset of 10,223
Brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal
cords (N=25) before and after a contusion injury. We additionally benchmark the
performance metrics of several state-of-the-art object detection algorithms to
localize the site of injury and semantic segmentation models to label the
anatomy for comparison and creation of task-specific architectures. Finally, we
evaluate the zero-shot generalization capabilities of the segmentation models
on human ultrasound spinal cord images to determine whether training on our
porcine dataset is sufficient for accurately interpreting human data. Our
results show that the YOLOv8 detection model outperforms all evaluated models
for injury localization, achieving a mean Average Precision (mAP50-95) score of
0.606. Segmentation metrics indicate that the DeepLabv3 segmentation model
achieves the highest accuracy on unseen porcine anatomy, with a Mean Dice score
of 0.587, while SAMed achieves the highest Mean Dice score generalizing to
human anatomy (0.445). To the best of our knowledge, this is the largest
annotated dataset of spinal cord ultrasound images made publicly available to
researchers and medical professionals, as well as the first public report of
object detection and segmentation architectures to assess anatomical markers in
the spinal cord for methodology development and clinical applications.
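The paper's best injury-localization result (mAP50-95 of 0.606) comes from YOLOv8. As a minimal sketch of how such a detector could be validated with the ultralytics package, which distributes YOLOv8 (the checkpoint path and dataset YAML below are hypothetical placeholders, not the authors' released artifacts):

```python
# Minimal sketch: validating a YOLOv8 detector and reading out mAP50-95,
# the metric the paper reports for injury localization.
# The checkpoint path and "spinal_cord_injury.yaml" are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # a trained checkpoint
metrics = model.val(data="spinal_cord_injury.yaml", split="test")
print(f"mAP50:    {metrics.box.map50:.3f}")  # mean AP at IoU threshold 0.50
print(f"mAP50-95: {metrics.box.map:.3f}")    # mean AP averaged over IoU 0.50-0.95
```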
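The segmentation results are reported as Mean Dice, i.e. the Dice coefficient 2|A∩B|/(|A|+|B|) averaged over anatomical classes. A minimal NumPy sketch, assuming background label 0 is excluded and classes are averaged uniformly (the paper's exact averaging protocol may differ):

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Binary Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_dice(pred_labels: np.ndarray, target_labels: np.ndarray,
              num_classes: int) -> float:
    """Mean Dice over foreground classes 1..num_classes-1.

    Excluding background (class 0) and averaging classes uniformly are
    assumptions for illustration, not the paper's stated protocol.
    """
    scores = [dice(pred_labels == c, target_labels == c)
              for c in range(1, num_classes)]
    return float(np.mean(scores))
```

The epsilon term keeps the score defined when a class is absent from both the prediction and the annotation.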
Related papers
- ISLES 2024: The first longitudinal multimodal multi-center real-world dataset in (sub-)acute stroke [2.7919032539697444]
Stroke remains a leading cause of global morbidity and mortality, imposing a heavy socioeconomic burden.
The dataset is intended to support the development of machine learning algorithms that can extract meaningful and reproducible models of brain function from stroke images.
Our dataset is the first to offer comprehensive longitudinal stroke data, including acute CT imaging with angiography and perfusion, follow-up MRI at 2-9 days, and acute and longitudinal clinical data up to a three-month outcome.
arXiv Detail & Related papers (2024-08-20T18:59:52Z)
- Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001; and 0.762 versus 0.542, p < 0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Synthetic Data for Robust Stroke Segmentation [0.0]
Deep learning-based semantic segmentation in neuroimaging currently requires high-resolution scans and extensive annotated datasets.
We present a novel synthetic framework for the task of lesion segmentation, extending the capabilities of the established SynthSeg approach.
arXiv Detail & Related papers (2024-04-02T13:42:29Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes, providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data including skull or any other artifacts without preprocessing the images or a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z)
- TotalSegmentator: robust segmentation of 104 anatomical structures in CT images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z)
- White Matter Tracts are Point Clouds: Neuropsychological Score Prediction and Critical Region Localization via Geometric Deep Learning [68.5548609642999]
We propose a deep-learning-based framework for neuropsychological score prediction using white matter tract data.
We represent the arcuate fasciculus (AF) as a point cloud with microstructure measurements at each point.
We improve prediction performance with the proposed Paired-Siamese Loss that utilizes information about differences between continuous neuropsychological scores.
arXiv Detail & Related papers (2022-07-06T02:03:28Z)
- Progressive Adversarial Semantic Segmentation [11.323677925193438]
Deep convolutional neural networks can perform exceedingly well given full supervision.
However, the success of such fully-supervised models for various image analysis tasks hinges on the availability of massive amounts of labeled data.
We propose a novel end-to-end medical image segmentation model, namely Progressive Adversarial Semantic Segmentation (PASS).
arXiv Detail & Related papers (2020-05-08T22:48:00Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)