Segment anything model for head and neck tumor segmentation with CT, PET
and MRI multi-modality images
- URL: http://arxiv.org/abs/2402.17454v1
- Date: Tue, 27 Feb 2024 12:26:45 GMT
- Title: Segment anything model for head and neck tumor segmentation with CT, PET
and MRI multi-modality images
- Authors: Jintao Ren, Mathis Rasmussen, Jasper Nijkamp, Jesper Grau Eriksen and
Stine Korreman
- Abstract summary: This study investigates the Segment Anything Model (SAM), recognized for requiring minimal human prompting.
We specifically examine MedSAM, a version of SAM fine-tuned with large-scale public medical images.
Our study demonstrates that fine-tuning SAM significantly enhances its segmentation accuracy, building upon the already effective zero-shot results.
- Score: 0.04924932828166548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning presents novel opportunities for the auto-segmentation of gross
tumor volume (GTV) in head and neck cancer (HNC), yet fully automatic methods
usually necessitate significant manual refinement. This study investigates the
Segment Anything Model (SAM), recognized for requiring minimal human prompting
and its zero-shot generalization ability across natural images. We specifically
examine MedSAM, a version of SAM fine-tuned with large-scale public medical
images. Despite its progress, the integration of multi-modality images (CT,
PET, MRI) for effective GTV delineation remains a challenge. Focusing on SAM's
application in HNC GTV segmentation, we assess its performance in both
zero-shot and fine-tuned scenarios using single (CT-only) and fused
multi-modality images. Our study demonstrates that fine-tuning SAM
significantly enhances its segmentation accuracy, building upon the already
effective zero-shot results achieved with bounding box prompts. These findings
open a promising avenue for semi-automatic HNC GTV segmentation.
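The abstract describes prompting SAM/MedSAM with bounding boxes on fused multi-modality (CT, PET, MRI) inputs. The sketch below illustrates only the input-preparation side of that workflow; the channel-wise fusion, the min-max normalization, and the box-from-mask helper are illustrative assumptions, not the authors' actual preprocessing.

```python
import numpy as np

def fuse_modalities(ct, pet, mri):
    """Stack co-registered CT, PET, and MRI slices into one 3-channel
    image, min-max normalized per modality (an illustrative fusion;
    the paper's exact preprocessing may differ)."""
    def norm(x):
        x = x.astype(np.float32)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return np.stack([norm(ct), norm(pet), norm(mri)], axis=-1)

def box_prompt_from_mask(mask, margin=5):
    """Derive a bounding-box prompt (x_min, y_min, x_max, y_max) from a
    coarse GTV mask, padded by a small safety margin and clipped to the
    image bounds."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(int(xs.min()) - margin, 0), max(int(ys.min()) - margin, 0),
            min(int(xs.max()) + margin, w - 1), min(int(ys.max()) + margin, h - 1))

# Example: a synthetic 64x64 slice with a square "tumor" region.
ct, pet, mri = (np.random.rand(64, 64) for _ in range(3))
fused = fuse_modalities(ct, pet, mri)   # shape (64, 64, 3), values in [0, 1]
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:30, 25:35] = 1
box = box_prompt_from_mask(mask)        # (20, 15, 39, 34)
```

A fused array and box of this form would then be passed to a SAM-style predictor as the image and box prompt, respectively.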
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
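The parameter-efficient strategy summarized above (freeze the pre-trained backbone, train only small injected adapters) can be sketched as a residual bottleneck module. The class name, dimensions, and initialization below are illustrative assumptions, not MA-SAM's actual configuration.

```python
import numpy as np

class BottleneckAdapter:
    """Minimal sketch of a parameter-efficient adapter: a low-rank
    down-project / ReLU / up-project bottleneck added residually to a
    frozen transformer block's token features. Only W_down and W_up
    would be trained; the backbone weights stay frozen."""
    def __init__(self, dim, bottleneck, rng):
        self.W_down = rng.standard_normal((dim, bottleneck)) * 0.02
        self.W_up = np.zeros((bottleneck, dim))  # zero-init: adapter starts as identity

    def __call__(self, x):
        # x: (tokens, dim) features from the frozen 2D backbone
        return x + np.maximum(x @ self.W_down, 0.0) @ self.W_up

rng = np.random.default_rng(0)
adapter = BottleneckAdapter(dim=256, bottleneck=16, rng=rng)
x = rng.standard_normal((196, 256))
y = adapter(x)  # same shape as x; identical to x before any training
```

Zero-initializing the up-projection makes the adapter an exact identity at the start of fine-tuning, so the pre-trained model's behavior is preserved until the adapter weights are updated.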
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- Segment Anything Model for Brain Tumor Segmentation [3.675657219384998]
Glioma is a prevalent brain tumor that poses a significant health risk to individuals.
The Segment Anything Model (SAM), released by Meta AI, is a foundation model for image segmentation with excellent zero-shot generalization capabilities.
In this study, we evaluated the performance of SAM on brain tumor segmentation and found that, without any model fine-tuning, there is still a gap between SAM and the current state-of-the-art (SOTA) models.
arXiv Detail & Related papers (2023-09-15T14:33:03Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Zero-shot performance of the Segment Anything Model (SAM) in 2D medical imaging: A comprehensive evaluation and practical guidelines [0.13854111346209866]
Segment Anything Model (SAM) harnesses a massive training dataset to segment nearly any object.
Our findings reveal that SAM's zero-shot performance is not only comparable, but in certain cases, surpasses the current state-of-the-art.
We propose practical guidelines that require minimal interaction while consistently yielding robust outcomes.
arXiv Detail & Related papers (2023-04-28T22:07:24Z)
- Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation [5.547422331445511]
We report quantitative and qualitative zero-shot segmentation results on nine medical image segmentation benchmarks.
Our study indicates the versatility of generalist vision foundation models on medical imaging.
arXiv Detail & Related papers (2023-04-25T08:07:59Z)
- When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation [4.154974672747996]
The Segment Anything Model (SAM) delivers strong zero-shot image segmentation.
We investigate the capability of SAM for medical image analysis, especially for multi-phase liver tumor segmentation.
arXiv Detail & Related papers (2023-04-17T16:02:06Z)
- SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.