Segment Anything Model (SAM) for Radiation Oncology
- URL: http://arxiv.org/abs/2306.11730v2
- Date: Tue, 4 Jul 2023 18:56:15 GMT
- Title: Segment Anything Model (SAM) for Radiation Oncology
- Authors: Lian Zhang, Zhengliang Liu, Lu Zhang, Zihao Wu, Xiaowei Yu, Jason
Holmes, Hongying Feng, Haixing Dai, Xiang Li, Quanzheng Li, Dajiang Zhu,
Tianming Liu, Wei Liu
- Abstract summary: We evaluate the performance of the Segment Anything Model (SAM) in clinical radiotherapy.
Our results indicate that SAM's 'segment anything' mode can achieve clinically acceptable segmentation results in most organs-at-risk (OARs).
Considering the size of the organ and the clarity of its boundary, SAM displays better performance for large organs with clear boundaries but performs worse for smaller organs with unclear boundaries.
- Score: 13.556384944315958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we evaluate the performance of the Segment Anything Model
(SAM) in clinical radiotherapy. Our results indicate that SAM's 'segment
anything' mode can achieve clinically acceptable segmentation results in most
organs-at-risk (OARs) with Dice scores higher than 0.7. SAM's 'box prompt' mode
further improves the Dice scores by 0.1 to 0.5. Considering the size of the
organ and the clarity of its boundary, SAM displays better performance for
large organs with clear boundaries but performs worse for smaller organs with
unclear boundaries. Given that SAM, a model pre-trained purely on natural
images, can handle the delineation of OARs from medical images with clinically
acceptable accuracy, these results highlight SAM's robust generalization
capabilities with consistent accuracy in automatic segmentation for
radiotherapy. In other words, SAM can achieve delineation of different OARs at
different sites using a generic automatic segmentation model. SAM's
generalization capabilities across different disease sites suggest that it is
technically feasible to develop a generic model for automatic segmentation in
radiotherapy.
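The two prompting modes evaluated above map directly onto the public segment_anything package. The snippet below is a minimal illustration of both modes plus a Dice metric, not the authors' evaluation code; the checkpoint path, the placeholder image and contour, the box coordinates, and the dice helper are all assumptions.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor, SamAutomaticMaskGenerator

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

# Placeholders standing in for a CT slice (RGB) and its clinical contour.
image_rgb = np.zeros((512, 512, 3), dtype=np.uint8)
gt_mask = np.zeros((512, 512), dtype=bool)

# The checkpoint must be downloaded separately from the SAM repository.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# 'segment anything' mode: unprompted; SAM proposes masks for everything it finds.
auto_masks = SamAutomaticMaskGenerator(sam).generate(image_rgb)

# 'box prompt' mode: a bounding box around an OAR guides the prediction.
predictor = SamPredictor(sam)
predictor.set_image(image_rgb)
box = np.array([120, 80, 260, 210])  # hypothetical XYXY box around one OAR
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print("Dice vs. clinical contour:", dice(masks[0], gt_mask))
```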
Related papers
- SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model that incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
arXiv Detail & Related papers (2024-08-19T11:01:00Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
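A generic sketch of that test-time recipe (not the paper's method): keep SAM frozen and take one gradient step on a small correction head whenever an expert rectifies a prediction. The refiner architecture, loss, and learning rate below are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical lightweight head that refines frozen-SAM mask logits online.
refiner = nn.Conv2d(1, 1, kernel_size=3, padding=1)
opt = torch.optim.SGD(refiner.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def online_update(sam_logits: torch.Tensor, expert_mask: torch.Tensor) -> float:
    """One online-learning step from an expert-rectified annotation.

    sam_logits: (N, 1, H, W) mask logits from frozen SAM.
    expert_mask: (N, 1, H, W) binary mask corrected by the expert.
    """
    opt.zero_grad()
    loss = loss_fn(refiner(sam_logits), expert_mask.float())
    loss.backward()
    opt.step()
    return loss.item()
```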
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study, we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p<0.001; and 0.762 versus 0.542, p<0.001).
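Comparisons of this kind are typically run on per-case Dice scores with a paired test; the sketch below shows that pattern on synthetic stand-in data (the Wilcoxon signed-rank test is an assumption, not necessarily the authors' choice).

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

# Synthetic stand-ins for two models' predictions on the same test cases.
rng = np.random.default_rng(0)
gts = [rng.random((64, 64)) > 0.5 for _ in range(20)]
preds_a = [g ^ (rng.random(g.shape) > 0.95) for g in gts]  # close to ground truth
preds_b = [g ^ (rng.random(g.shape) > 0.80) for g in gts]  # noisier model

dice_a = np.array([dice(p, g) for p, g in zip(preds_a, gts)])
dice_b = np.array([dice(p, g) for p, g in zip(preds_b, gts)])
stat, p = wilcoxon(dice_a, dice_b)  # paired test over the same cases
print(f"mean Dice {dice_a.mean():.3f} vs {dice_b.mean():.3f}; p={p:.2g}")
```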
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models [0.24578723416255752]
The proposed method harnesses the capabilities of the YOLOv8 model for approximate bounding-box detection across modalities.
To generate bounding boxes, the YOLOv8 model was trained using a limited set of 100 images and masks from each modality.
A comparative analysis was conducted to assess the individual and combined performance of the YOLOv8, YOLOv8+SAM, and YOLOv8+HQ-SAM models.
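A minimal version of this detect-then-segment pipeline can be sketched with the public ultralytics and segment_anything APIs; the weight files and the blank input image below are placeholders, and the HQ-SAM variant is omitted.

```python
import numpy as np
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

detector = YOLO("yolov8n.pt")  # in the paper, fine-tuned on ~100 images per modality
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image_bgr = np.zeros((640, 640, 3), dtype=np.uint8)  # stand-in for a medical slice
results = detector(image_bgr)
predictor.set_image(np.ascontiguousarray(image_bgr[..., ::-1]))  # SAM expects RGB

for box in results[0].boxes.xyxy.cpu().numpy():  # XYXY boxes from YOLOv8
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    # masks[0] is SAM's segmentation for this detected structure
```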
arXiv Detail & Related papers (2023-10-04T20:30:49Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
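A generic bottleneck-adapter sketch of that strategy (not MA-SAM's exact 3D adapter design): freeze the pre-trained encoder and wrap each transformer block with a small trainable residual module.

```python
import torch
import torch.nn as nn
from segment_anything import sam_model_registry

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual keeps pre-trained path intact

class BlockWithAdapter(nn.Module):
    def __init__(self, block: nn.Module, adapter: Adapter):
        super().__init__()
        self.block, self.adapter = block, adapter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
for p in sam.image_encoder.parameters():  # freeze all pre-trained encoder weights
    p.requires_grad = False
sam.image_encoder.blocks = nn.ModuleList(  # inject one trainable adapter per ViT block
    BlockWithAdapter(b, Adapter(768)) for b in sam.image_encoder.blocks  # 768 = vit_b width
)
```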
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars [19.725817146049707]
The Segment Anything Model (SAM) has demonstrated the remarkable capabilities of scaled-up segmentation models.
However, the adoption of foundational models in the medical domain presents a challenge due to the difficulty and expense of labeling sufficient data.
This paper introduces an efficient and practical approach for fine-tuning SAM using a limited number of exemplars.
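A sketch of the few-exemplar recipe under common assumptions: freeze the heavy image encoder and update only the lightweight mask decoder. The exemplars loader, loss, and optimizer below are illustrative, not the paper's exact setup.

```python
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
for p in sam.image_encoder.parameters():  # freeze the heavy image encoder
    p.requires_grad = False
opt = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

# `exemplars` is a hypothetical list of (image, box, gt) tuples:
# image (3, 1024, 1024) preprocessed, box (4,) XYXY, gt (256, 256) low-res binary mask.
for image, box, gt in exemplars:
    with torch.no_grad():
        emb = sam.image_encoder(image[None])
    sparse, dense = sam.prompt_encoder(points=None, boxes=box[None], masks=None)
    logits, _ = sam.mask_decoder(
        image_embeddings=emb,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    loss = loss_fn(logits, gt[None, None].float())
    opt.zero_grad(); loss.backward(); opt.step()
```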
arXiv Detail & Related papers (2023-08-27T15:21:25Z)
- SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation [65.52097667738884]
We introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation.
Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes.
In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning.
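Abstractly, the prototype idea keeps one learnable embedding per instrument class, maps it to the sparse prompt tokens SAM's decoder consumes, and adds a contrastive loss pulling instrument features toward their class prototype. A loose sketch under those assumptions, not SurgicalSAM's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, DIM = 7, 256  # e.g. 7 instrument classes; 256 is SAM's prompt-embedding width

class PrototypePromptEncoder(nn.Module):
    """Maps a class id to prompt tokens via a learned prototype bank (illustrative)."""
    def __init__(self, num_classes: int = NUM_CLASSES, dim: int = DIM, num_tokens: int = 2):
        super().__init__()
        self.prototypes = nn.Embedding(num_classes, dim)
        self.to_prompts = nn.Linear(dim, dim * num_tokens)
        self.num_tokens = num_tokens

    def forward(self, class_id: torch.Tensor) -> torch.Tensor:
        proto = self.prototypes(class_id)  # (B, dim)
        return self.to_prompts(proto).view(-1, self.num_tokens, DIM)  # sparse prompt tokens

def prototype_contrastive_loss(feats, labels, prototypes, tau=0.07):
    """InfoNCE-style loss pulling pooled instrument features toward their class prototype."""
    logits = F.normalize(feats, dim=-1) @ F.normalize(prototypes, dim=-1).T / tau
    return F.cross_entropy(logits, labels)
```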
arXiv Detail & Related papers (2023-08-17T02:51:01Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
- SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z)