Segment Anything Model for Medical Image Analysis: an Experimental Study
- URL: http://arxiv.org/abs/2304.10517v3
- Date: Wed, 17 May 2023 17:20:46 GMT
- Title: Segment Anything Model for Medical Image Analysis: an Experimental Study
- Authors: Maciej A. Mazurowski, Haoyu Dong, Hanxue Gu, Jichen Yang, Nicholas
Konz, Yixin Zhang
- Abstract summary: Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
- Score: 19.95972201734614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training segmentation models for medical images continues to be challenging
due to the limited availability of data annotations. Segment Anything Model
(SAM) is a foundation model that is intended to segment user-defined objects of
interest in an interactive manner. While the performance on natural images is
impressive, medical image domains pose their own set of challenges. Here, we
perform an extensive evaluation of SAM's ability to segment medical images on a
collection of 19 medical imaging datasets from various modalities and
anatomies. We report the following findings: (1) SAM's performance based on
single prompts varies widely depending on the dataset and the task, from
IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation
performance appears to be better for well-circumscribed objects prompted with
less ambiguity, and poorer in other scenarios such as the segmentation of
brain tumors. (3) SAM performs notably better with box prompts
than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick,
and FocalClick in almost all single-point prompt settings. (5) When
multiple-point prompts are provided iteratively, SAM's performance generally
improves only slightly, while other methods' performance improves to a level
that surpasses SAM's point-based performance. We also provide several
illustrations for SAM's performance on all tested datasets, iterative
segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM
shows impressive zero-shot segmentation performance for certain medical imaging
datasets, but moderate to poor performance for others. SAM has the potential to
make a significant impact on automated segmentation in medical imaging, but
appropriate care needs to be taken when applying it.
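To make the prompting setup concrete, below is a minimal sketch of how a box-versus-point comparison like the one described above can be run with the publicly released segment_anything package. The checkpoint path and the image / gt_mask arrays are placeholders, and deriving prompts from the ground-truth centroid and tight bounding box is an illustrative assumption, not necessarily the paper's exact protocol.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 0.0

# Load an official SAM checkpoint (path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# `image` is an HxWx3 uint8 RGB array (e.g., a grayscale slice replicated to
# three channels); `gt_mask` is the HxW boolean ground-truth annotation.
predictor.set_image(image)

# Single positive point prompt at the centroid of the ground-truth mask.
ys, xs = np.nonzero(gt_mask)
point = np.array([[xs.mean(), ys.mean()]])  # (x, y) order
masks_pt, _, _ = predictor.predict(
    point_coords=point,
    point_labels=np.array([1]),  # 1 = foreground click
    multimask_output=False,
)

# Box prompt from the tight bounding box of the ground truth (XYXY order).
box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
masks_box, _, _ = predictor.predict(box=box, multimask_output=False)

print("point-prompt IoU:", iou(masks_pt[0], gt_mask))
print("box-prompt   IoU:", iou(masks_box[0], gt_mask))
```

Iterative multi-point prompting (finding 5) can be approximated in the same API by appending corrective foreground/background clicks to point_coords and feeding the previous low-resolution logits back through the mask_input argument of predict.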
Related papers
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8% compared to a recent medical SAM adapter in the literature.
arXiv Detail & Related papers (2024-10-05T14:36:43Z) - SAM-UNet:Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model that incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
arXiv Detail & Related papers (2024-08-19T11:01:00Z) - Is SAM 2 Better than SAM in Medical Image Segmentation? [0.6144680854063939]
The Segment Anything Model (SAM) has demonstrated impressive performance in zero-shot promptable segmentation on natural images.
The recently released Segment Anything Model 2 (SAM 2) claims to outperform SAM on images and extends the model's capabilities to video segmentation.
We conducted extensive studies using multiple datasets to compare the performance of SAM and SAM 2.
arXiv Detail & Related papers (2024-08-08T04:34:29Z) - Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify the segmentations of specific test samples after SAM generates its predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z) - MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - SAM-Med2D [34.82072231983896]
We introduce SAM-Med2D, the most comprehensive study to date on applying SAM to medical 2D images.
We first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets.
We fine-tune the encoder and decoder of the original SAM to obtain a well-performing SAM-Med2D.
arXiv Detail & Related papers (2023-08-30T17:59:02Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt
Encoder [101.28268762305916]
In this work, we overload the prompt encoder of the Segment Anything Model, replacing it with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical images and video benchmarks.
To inspect the knowledge within it and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
arXiv Detail & Related papers (2023-06-10T07:27:00Z) - Zero-shot performance of the Segment Anything Model (SAM) in 2D medical
imaging: A comprehensive evaluation and practical guidelines [0.13854111346209866]
Segment Anything Model (SAM) harnesses a massive training dataset to segment nearly any object.
Our findings reveal that SAM's zero-shot performance is not only comparable to, but in certain cases surpasses, the current state of the art.
We propose practical guidelines that require minimal interaction while consistently yielding robust outcomes.
arXiv Detail & Related papers (2023-04-28T22:07:24Z) - Medical SAM Adapter: Adapting Segment Anything Model for Medical Image
Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z) - When SAM Meets Medical Images: An Investigation of Segment Anything
Model (SAM) on Multi-phase Liver Tumor Segmentation [4.154974672747996]
Segment Anything Model (SAM) delivers impressive zero-shot image segmentation.
We investigate the capability of SAM for medical image analysis, especially for multi-phase liver tumor segmentation.
arXiv Detail & Related papers (2023-04-17T16:02:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.