Input Augmentation with SAM: Boosting Medical Image Segmentation with
Segmentation Foundation Model
- URL: http://arxiv.org/abs/2304.11332v2
- Date: Wed, 21 Jun 2023 14:04:45 GMT
- Title: Input Augmentation with SAM: Boosting Medical Image Segmentation with
Segmentation Foundation Model
- Authors: Yizhe Zhang, Tao Zhou, Shuo Wang, Peixian Liang, Danny Z. Chen
- Abstract summary: The Segment Anything Model (SAM) is a recently developed large model for general-purpose segmentation in computer vision tasks.
SAM was trained using 11 million images with over 1 billion masks and can produce segmentation results for a wide range of objects in natural scene images.
This paper shows that although SAM does not immediately give high-quality segmentation for medical image data, its generated masks, features, and stability scores are useful for building and training better medical image segmentation models.
- Score: 36.015065439244495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Segment Anything Model (SAM) is a recently developed large model for
general-purpose segmentation in computer vision tasks. SAM was trained using
11 million images with over 1 billion masks and can produce segmentation
results for a wide range of objects in natural scene images. SAM can be viewed
as a general perception model for segmentation (partitioning images into
semantically meaningful regions). Thus, how to utilize such a large foundation
model for medical image segmentation is an emerging research target. This paper
shows that although SAM does not immediately give high-quality segmentation for
medical image data, its generated masks, features, and stability scores are
useful for building and training better medical image segmentation models. In
particular, we demonstrate how to use SAM to augment image input for
commonly-used medical image segmentation models (e.g., U-Net). Experiments on
three segmentation tasks show the effectiveness of our proposed SAMAug method.
The code is available at https://github.com/yizhezhang2000/SAMAug.
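As a concrete illustration of this recipe, the sketch below builds SAM-derived prior channels (a stability-weighted segmentation map and a boundary map) and stacks them onto the raw image before it enters a standard model such as U-Net. It assumes the official segment_anything package and a downloaded ViT-B checkpoint; the helper name and exact channel construction are illustrative, not the released SAMAug code.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def sam_prior_channels(image_rgb: np.ndarray, mask_generator) -> np.ndarray:
    """Build two prior channels from SAM's automatically generated masks."""
    masks = mask_generator.generate(image_rgb)  # list of dicts, one per mask
    h, w = image_rgb.shape[:2]
    seg_prior = np.zeros((h, w), dtype=np.float32)
    boundary_prior = np.zeros((h, w), dtype=np.float32)
    for m in masks:
        seg = m["segmentation"].astype(np.float32)      # (H, W) binary mask
        # weight each mask by SAM's stability score, as the abstract suggests
        seg_prior = np.maximum(seg_prior, seg * m["stability_score"])
        p = np.pad(seg, 1)                              # 4-neighbor erosion
        eroded = p[2:, 1:-1] * p[:-2, 1:-1] * p[1:-1, 2:] * p[1:-1, :-2]
        boundary_prior = np.maximum(boundary_prior, np.clip(seg - eroded, 0, 1))
    return np.stack([seg_prior, boundary_prior], axis=-1)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
generator = SamAutomaticMaskGenerator(sam)
image = np.zeros((256, 256, 3), dtype=np.uint8)         # placeholder image
priors = sam_prior_channels(image, generator)
augmented = np.concatenate([image / 255.0, priors], axis=-1)  # (H, W, 5)
# The downstream U-Net is then trained on `augmented` with in_channels=5;
# nothing else in the segmentation pipeline changes.
```

The appeal of this design is that SAM stays frozen and task-agnostic; all task-specific learning remains in the downstream model.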
Related papers
- SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model that incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
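One plausible reading of "incorporating U-Net into SAM" is a parallel convolutional branch whose features are fused with the frozen SAM image embedding; the sketch below is that assumption made concrete, not the SAM-UNet architecture itself.

```python
import torch.nn as nn

class ConvBranch(nn.Module):
    """Small convolutional encoder supplying the local detail a ViT can miss."""
    def __init__(self, out_ch: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_ch, 3, stride=4, padding=1),
        )

    def forward(self, x):  # (B, 3, 1024, 1024) -> (B, 256, 64, 64)
        return self.net(x)

class FusedEncoder(nn.Module):
    """Adds trainable CNN features to SAM's frozen image embedding."""
    def __init__(self, sam_image_encoder: nn.Module):
        super().__init__()
        self.vit = sam_image_encoder.eval().requires_grad_(False)
        self.cnn = ConvBranch()

    def forward(self, x):
        return self.vit(x) + self.cnn(x)  # both (B, 256, 64, 64) for SAM ViT-B
```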
arXiv Detail & Related papers (2024-08-19T11:01:00Z)
- MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method can extract richer marine information, from global contextual cues down to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
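A hedged sketch of such a 3D adapter follows: tokens from a 2D transformer block are down-projected, mixed across the slice axis with a depth-wise 3D convolution, and projected back as a residual. The bottleneck width, slice count, and token layout are assumptions for illustration, not MA-SAM's exact module.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Bottleneck adapter whose depth-wise 3D conv mixes the slice axis."""
    def __init__(self, dim: int = 768, bottleneck: int = 64, slices: int = 8):
        super().__init__()
        self.slices = slices
        self.down = nn.Linear(dim, bottleneck)
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, kernel_size=(3, 1, 1),
                                padding=(1, 0, 0), groups=bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as identity: residual is zero
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B*slices, H, W, C) tokens from a 2D ViT block
        bd, h, w, _ = x.shape
        z = self.down(x)
        z = z.view(-1, self.slices, h, w, z.shape[-1]).permute(0, 4, 1, 2, 3)
        z = self.conv3d(z)               # exchange information across slices
        z = z.permute(0, 2, 3, 4, 1).reshape(bd, h, w, -1)
        return x + self.up(z)            # pre-trained 2D path stays intact

tokens = torch.randn(16, 64, 64, 768)   # 2 sub-volumes x 8 slices each
out = Adapter3D()(tokens)               # same shape, now depth-aware
```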
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder [101.28268762305916]
In this work, we replace SAM's prompt encoder with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical image and video benchmarks.
To inspect the learned representation and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
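In sketch form, that replacement amounts to predicting prompt embeddings from the image itself so that no user interaction is needed; the surrogate network and shapes below are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class PromptSurrogate(nn.Module):
    """CNN mapping the image to SAM-style sparse prompt embeddings."""
    def __init__(self, n_tokens: int = 2, dim: int = 256):
        super().__init__()
        self.n_tokens, self.dim = n_tokens, dim
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, n_tokens * dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # (B, 3, H, W) -> (B, n_tokens, 256), shaped like SAM's sparse prompts
        return self.backbone(image).view(-1, self.n_tokens, self.dim)

tokens = PromptSurrogate()(torch.randn(1, 3, 1024, 1024))
# Training (conceptual): freeze SAM, feed `tokens` to its mask decoder in
# place of encoded user prompts, and backpropagate a Dice/CE loss through
# the surrogate encoder only.
```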
arXiv Detail & Related papers (2023-06-10T07:27:00Z)
- Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose a training-free personalization approach for the Segment Anything Model (SAM).
Given only a single image with a reference mask, PerSAM first localizes the target concept using a location prior.
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
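The localization step can be sketched as follows, assuming SAM image features have already been extracted for both images; pooling inside the reference mask and taking the cosine-similarity peak as a positive point prompt is a simplified reading of the "location prior", not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def location_prior(ref_feats, ref_mask, test_feats):
    """ref_feats/test_feats: (C, H, W) SAM features; ref_mask: (H, W) in {0,1}."""
    c = ref_feats.shape[0]
    # average-pool features inside the reference mask -> target embedding
    target = (ref_feats * ref_mask).sum(dim=(1, 2)) / ref_mask.sum().clamp(min=1)
    sim = F.cosine_similarity(test_feats.reshape(c, -1).T, target[None], dim=1)
    sim = sim.view(test_feats.shape[1], test_feats.shape[2])  # (H, W) prior
    y, x = divmod(int(sim.argmax()), sim.shape[1])
    return sim, (x, y)       # prior map and its peak as a positive point prompt

feats = torch.randn(256, 64, 64)
mask = torch.zeros(64, 64); mask[20:30, 20:30] = 1
prior, point = location_prior(feats, mask, torch.randn(256, 64, 64))
# `point` (rescaled to input resolution) is then fed to SAM as a point prompt.
```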
arXiv Detail & Related papers (2023-05-04T17:59:36Z)
- Customized Segment Anything Model for Medical Image Segmentation [10.933449793055313]
We build upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation.
SAMed applies a low-rank adaptation (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets.
Our trained SAMed model achieves semantic segmentation performance on medical images on par with state-of-the-art methods.
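A compact sketch of the LoRA part: each frozen linear layer W gains a trainable low-rank residual B·A, so only the rank-r factors are updated. The wrapper below and the targeted layer names ("qkv", "proj", which match the segment_anything ViT blocks) are a generic illustration rather than SAMed's code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = W x + (alpha/r) * B A x, with W frozen and A, B trainable."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base.requires_grad_(False)           # frozen pretrained W
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def add_lora(module: nn.Module, rank: int = 4):
    """Recursively wrap attention projections with LoRA."""
    for name, child in list(module.named_children()):
        if isinstance(child, nn.Linear) and name in ("qkv", "proj"):
            setattr(module, name, LoRALinear(child, rank))
        else:
            add_lora(child, rank)

# Usage sketch: add_lora(sam.image_encoder), then train the LoRA factors
# together with SAM's prompt encoder and mask decoder on labeled data.
```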
arXiv Detail & Related papers (2023-04-26T19:05:34Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
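In its simplest form, the adapter idea means inserting small trainable bottleneck modules into the frozen transformer blocks and training only those (plus task heads) on medical data; this generic residual adapter is an illustration, not Med-SA's exact module.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: the only trainable piece in a frozen block."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim),
        )

    def forward(self, x):
        return x + self.block(x)  # residual preserves pretrained behavior
```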
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
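Such an evaluation is typically run by prompting SAM with cues derived from the ground truth and scoring with Dice; the protocol below (a single simulated click at the object's center of mass) is an assumed setup, not necessarily the paper's.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / max(pred.sum() + gt.sum(), 1)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def evaluate_case(image_rgb: np.ndarray, gt_mask: np.ndarray) -> float:
    predictor.set_image(image_rgb)
    ys, xs = np.nonzero(gt_mask)                 # simulate one user click at
    point = np.array([[xs.mean(), ys.mean()]])   # the object's center of mass
    masks, scores, _ = predictor.predict(
        point_coords=point, point_labels=np.array([1]), multimask_output=True
    )
    return dice(masks[scores.argmax()], gt_mask.astype(bool))
```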
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
- SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM [6.172995387355581]
We introduce Segment Any Medical Model (SAMM), an extension of SAM on 3D Slicer.
SAMM achieves a 0.6-second latency per complete cycle and can infer image masks in near real time.
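That latency is plausible because SAM's heavy image encoder runs once per image or slice, while only the lightweight mask decoder runs per prompt; the caching pattern below illustrates the split with the stock segment_anything API, not SAMM's 3D Slicer code.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

slice_rgb = np.zeros((512, 512, 3), dtype=np.uint8)  # one slice of a volume
predictor.set_image(slice_rgb)          # expensive: ViT encoder, run once

for click in [(100, 120), (250, 300)]:  # each user click is then cheap
    masks, _, _ = predictor.predict(
        point_coords=np.array([click], dtype=np.float32),
        point_labels=np.array([1]),
        multimask_output=False,
    )
```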
arXiv Detail & Related papers (2023-04-12T05:39:38Z)
- Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging [12.533476185972527]
The segment anything model (SAM) was released as a foundation model for image segmentation.
We evaluate the zero-shot segmentation performance of the SAM model on representative segmentation tasks on whole slide imaging (WSI).
The results suggest that the zero-shot SAM model achieves remarkable segmentation performance for large connected objects.
arXiv Detail & Related papers (2023-04-09T04:06:59Z)
- Segment Anything [108.16489338211093]
We build the largest segmentation dataset to date, with over 1 billion masks on 11M licensed and privacy-respecting images.
The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks.
We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive.
arXiv Detail & Related papers (2023-04-05T17:59:46Z)