SAM2-SGP: Enhancing SAM2 for Medical Image Segmentation via Support-Set Guided Prompting
- URL: http://arxiv.org/abs/2506.19658v1
- Date: Tue, 24 Jun 2025 14:22:25 GMT
- Title: SAM2-SGP: Enhancing SAM2 for Medical Image Segmentation via Support-Set Guided Prompting
- Authors: Yang Xing, Jiong Wu, Yuheng Bu, Kuang Gong
- Abstract summary: Reliance on human-provided prompts poses significant challenges in adapting SAM2 to medical image segmentation tasks. We proposed SAM2 with support-set guided prompting (SAM2-SGP), a framework that eliminated the need for manual prompts. The proposed model leveraged the memory mechanism of SAM2 to generate pseudo-masks using image-mask pairs from a support set via a Pseudo-mask Generation (PMG) module.
- Score: 11.439997127887324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although new vision foundation models such as Segment Anything Model 2 (SAM2) have significantly enhanced zero-shot image segmentation capabilities, reliance on human-provided prompts poses significant challenges in adapting SAM2 to medical image segmentation tasks. Moreover, SAM2's performance in medical image segmentation was limited by the domain shift issue, since it was originally trained on natural images and videos. To address these challenges, we proposed SAM2 with support-set guided prompting (SAM2-SGP), a framework that eliminated the need for manual prompts. The proposed model leveraged the memory mechanism of SAM2 to generate pseudo-masks using image-mask pairs from a support set via a Pseudo-mask Generation (PMG) module. We further introduced a novel Pseudo-mask Attention (PMA) module, which used these pseudo-masks to automatically generate bounding boxes and enhance localized feature extraction by guiding attention to relevant areas. Furthermore, a low-rank adaptation (LoRA) strategy was adopted to mitigate the domain shift issue. The proposed framework was evaluated on both 2D and 3D datasets across multiple medical imaging modalities, including fundus photography, X-ray, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound. The results demonstrated a significant performance improvement over state-of-the-art models, such as nnUNet and SwinUNet, as well as foundation models, such as SAM2 and MedSAM2, underscoring the effectiveness of the proposed approach. Our code is publicly available at https://github.com/astlian9/SAM_Support.
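A minimal sketch may help make the pipeline above concrete. In the hypothetical Python/PyTorch code below, `encode_memory`, `encode_image`, `memory_attention`, and `decode_mask` are assumed placeholder names for a SAM2-style interface (not the actual SAM2 or SAM2-SGP API); only the box-derivation and LoRA pieces are fully self-contained and runnable.

```python
# Hypothetical sketch of the SAM2-SGP flow described in the abstract.
# Model-facing helper names are assumptions, not the authors' actual API.
import torch

def pseudo_mask_from_support(model, support_img, support_mask, query_img):
    """PMG step: treat a support image-mask pair as a 'previous frame' so
    SAM2's memory mechanism propagates a pseudo-mask onto the query image."""
    mem = model.encode_memory(support_img, support_mask)  # assumed helper
    feats = model.encode_image(query_img)                 # assumed helper
    feats = model.memory_attention(feats, mem)            # assumed helper
    return model.decode_mask(feats)                       # assumed helper

def bbox_from_mask(mask: torch.Tensor, threshold: float = 0.5):
    """PMA step (box part): derive a bounding-box prompt from the extent
    of the thresholded pseudo-mask foreground."""
    ys, xs = torch.nonzero(mask > threshold, as_tuple=True)
    if ys.numel() == 0:
        return None  # empty pseudo-mask -> no box prompt
    return (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())

class LoRALinear(torch.nn.Module):
    """Standard LoRA idea: freeze the base weight, learn a low-rank update."""
    def __init__(self, base: torch.nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # frozen pre-trained weights
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T  # zero update at init

# Runnable pieces: a random soft mask stands in for a PMG output.
print(bbox_from_mask(torch.rand(64, 64)))                             # e.g. (0, 0, 63, 63)
print(LoRALinear(torch.nn.Linear(16, 16))(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```

Because `B` is zero-initialized, the wrapped layer reproduces the frozen pre-trained weights exactly at the start of fine-tuning, the usual LoRA design choice for mitigating domain shift without full fine-tuning.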
Related papers
- Accelerating Volumetric Medical Image Annotation via Short-Long Memory SAM 2 [10.279314732888079]
Short-Long Memory SAM 2 (SLM-SAM 2) is a novel architecture that integrates distinct short-term and long-term memory banks to improve segmentation accuracy.
We evaluate SLM-SAM 2 on three public datasets covering organs, bones, and muscles across MRI and CT modalities.
arXiv Detail & Related papers (2025-05-03T16:16:24Z)
- SAM-I2I: Unleash the Power of Segment Anything Model for Medical Image Translation [0.9626666671366836]
We propose SAM-I2I, a novel image-to-image translation framework based on the Segment Anything Model 2 (SAM2).
Our experiments on multi-contrast MRI datasets demonstrate that SAM-I2I outperforms state-of-the-art methods, offering more efficient and accurate medical image translation.
arXiv Detail & Related papers (2024-11-13T03:30:10Z)
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8%, compared to a recent medical SAM adapter in the literature.
arXiv Detail & Related papers (2024-10-05T14:36:43Z)
- SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation [51.90445260276897]
We prove that the Segment Anything Model 2 (SAM2) can be a strong encoder for U-shaped segmentation models.
We propose a simple but effective framework, termed SAM2-UNet, for versatile image segmentation.
arXiv Detail & Related papers (2024-08-16T17:55:38Z)
- Path-SAM2: Transfer SAM2 for digital pathology semantic segmentation [6.721564277355789]
We propose Path-SAM2, which, for the first time, adapts the SAM2 model to the task of pathological semantic segmentation.
We integrate the largest pretrained vision encoder for histopathology (UNI) with the original SAM2 encoder, adding more pathology-based prior knowledge.
On three adenoma pathology datasets, Path-SAM2 achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-08-07T09:30:51Z)
- Unleashing the Potential of SAM for Medical Adaptation via Hierarchical Decoding [15.401507589312702]
This paper introduces H-SAM, a prompt-free adaptation of the Segment Anything Model (SAM) for efficient fine-tuning of medical images.
In the initial stage, H-SAM employs SAM's original decoder to generate a prior probabilistic mask, guiding a more intricate decoding process.
Our H-SAM demonstrates a 4.78% improvement in average Dice compared to existing prompt-free SAM variants.
arXiv Detail & Related papers (2024-03-27T05:55:16Z)
- Multi-Prompt Fine-Tuning of Foundation Models for Enhanced Medical Image Segmentation [10.946806607643689]
The Segment Anything Model (SAM) is a powerful foundation model that introduced revolutionary advancements in natural image segmentation.
In this study, we introduce a novel fine-tuning framework that leverages SAM's ability to bundle and process multiple prompts per image.
arXiv Detail & Related papers (2023-10-03T19:05:00Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy, updating only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from the input data (a minimal sketch of this adapter idea appears after this list).
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms state-of-the-art domain-specific medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based Diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z)
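Returning to the MA-SAM entry above: the 3D-adapter idea (letting a frozen, slice-wise 2D backbone exchange information along the volume depth) can be sketched as follows. The module name `DepthAdapter3D`, the bottleneck width, and the placement are assumptions made for illustration, not MA-SAM's actual design.

```python
# Hypothetical sketch of a 3D adapter for a frozen 2D encoder block:
# a small depth-wise mixing convolution lets slice-wise 2D features
# exchange information along the volume's depth axis.
import torch

class DepthAdapter3D(torch.nn.Module):
    def __init__(self, channels: int, bottleneck: int = 16):
        super().__init__()
        self.down = torch.nn.Conv3d(channels, bottleneck, kernel_size=1)
        # A 3x1x1 kernel mixes features across neighboring slices only.
        self.depth_mix = torch.nn.Conv3d(bottleneck, bottleneck,
                                         kernel_size=(3, 1, 1),
                                         padding=(1, 0, 0))
        self.up = torch.nn.Conv3d(bottleneck, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, depth, height, width); the residual keeps
        # the frozen 2D features and adds only the learned depth term.
        return x + self.up(torch.relu(self.depth_mix(self.down(x))))

feats = torch.randn(1, 64, 8, 32, 32)   # toy per-slice features stacked in depth
print(DepthAdapter3D(64)(feats).shape)  # torch.Size([1, 64, 8, 32, 32])
```

The bottleneck plus residual design keeps the adapter cheap: only the small 3D convolutions need gradient updates, while the pre-trained 2D weights stay frozen.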