$\mathrm{SAM^{Med}}$: A medical image annotation framework based on
large vision model
- URL: http://arxiv.org/abs/2307.05617v2
- Date: Mon, 18 Sep 2023 02:19:52 GMT
- Title: $\mathrm{SAM^{Med}}$: A medical image annotation framework based on
large vision model
- Authors: Chenglong Wang, Dexuan Li, Sucheng Wang, Chengxiu Zhang, Yida Wang,
Yun Liu, Guang Yang
- Abstract summary: The large vision model Segment Anything Model (SAM) has revolutionized the computer vision field.
In this study, we present $\mathrm{SAM^{Med}}$, an enhanced framework for medical image annotation.
Results show a significant improvement in segmentation accuracy with only approximately 5 input points.
- Score: 23.095778923771732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the large vision model Segment Anything Model (SAM) has
revolutionized the computer vision field, especially image segmentation.
SAM introduced a new promptable segmentation paradigm that exhibits
remarkable zero-shot generalization ability. Extensive research has
explored the potential and limits of SAM in various downstream tasks. In this
study, we present $\mathrm{SAM^{Med}}$, an enhanced framework for medical
image annotation that leverages the capabilities of SAM. The $\mathrm{SAM^{Med}}$
framework consists of two submodules, namely $\mathrm{SAM^{assist}}$ and
$\mathrm{SAM^{auto}}$. $\mathrm{SAM^{assist}}$ demonstrates the
generalization ability of SAM to the downstream medical segmentation task using
the prompt-learning approach. Results show a significant improvement in
segmentation accuracy with only approximately 5 input points. The
$\mathrm{SAM^{auto}}$ model aims to accelerate the annotation process by
automatically generating input prompts. The proposed SAP-Net model achieves
superior segmentation performance with only five annotated slices, reaching
average Dice coefficients of 0.80 and 0.82 for kidney and liver segmentation,
respectively. Overall, $\mathrm{SAM^{Med}}$ demonstrates promising results in
medical image annotation. These findings highlight the potential of leveraging
large-scale vision models in medical image annotation tasks.
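The Dice coefficients reported in the abstract measure overlap between predicted and ground-truth masks; a minimal NumPy sketch (the function name `dice_coefficient` and the toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A and B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks: a predicted organ region vs. the ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 1, 1, 1],
               [0, 0, 0, 0]])
score = dice_coefficient(pred, gt)
```

A score of 0.80 for kidney segmentation thus means the predicted and reference masks share 80% of their combined area in this 2|A∩B|/(|A|+|B|) sense.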
Related papers
- UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity [54.41309926099154]
We introduce UnSAMv2, which enables segment anything at any granularity without human annotations. UnSAMv2 extends the divide-and-conquer strategy of UnSAM by discovering abundant mask-granularity pairs. We show that small amounts of unlabeled data with a granularity-aware self-supervised learning method can unlock the potential of vision foundation models.
arXiv Detail & Related papers (2025-11-17T18:58:34Z) - VesSAM: Efficient Multi-Prompting for Segmenting Complex Vessel [68.24765319399286]
We present VesSAM, a powerful and efficient framework tailored for 2D vessel segmentation. VesSAM integrates (1) a convolutional adapter to enhance local texture features, (2) a multi-prompt encoder that fuses anatomical prompts, and (3) a lightweight mask decoder to reduce jagged artifacts. VesSAM consistently outperforms state-of-the-art PEFT-based SAM variants by over 10% Dice and 13% IoU.
arXiv Detail & Related papers (2025-11-02T15:47:05Z) - WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with Sub-Class Exploration and Prompt Affinity Mining [31.81408955413914]
We investigate a weakly-supervised SAM-based segmentation model, namely WeakMedSAM, to reduce the labeling cost.
Specifically, our proposed WeakMedSAM contains two modules: 1) a sub-class exploration module to mitigate severe co-occurrence in medical images, and 2) a prompt affinity mining module to improve the quality of the class activation maps.
Our method can be applied to any SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM.
arXiv Detail & Related papers (2025-03-06T05:28:44Z) - Learnable Prompting SAM-induced Knowledge Distillation for Semi-supervised Medical Image Segmentation [47.789013598970925]
We propose a learnable prompting SAM-induced Knowledge distillation framework (KnowSAM) for semi-supervised medical image segmentation.
Our model outperforms the state-of-the-art semi-supervised segmentation approaches.
arXiv Detail & Related papers (2024-12-18T11:19:23Z) - SAM-UNet:Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model which incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
arXiv Detail & Related papers (2024-08-19T11:01:00Z) - Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection [58.241593208031816]
Segment Anything Model (SAM) has been proposed as a visual foundation model, which gives strong segmentation and generalization capabilities.
We propose a Multi-scale and Detail-enhanced SAM (MDSAM) for Salient Object Detection (SOD).
Experimental results demonstrate the superior performance of our model on multiple SOD datasets.
arXiv Detail & Related papers (2024-08-08T09:09:37Z) - MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method enables to extract richer marine information from global contextual cues to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z) - TinySAM: Pushing the Envelope for Efficient Segment Anything Model [76.21007576954035]
We propose a framework to obtain a tiny segment anything model (TinySAM) while maintaining the strong zero-shot performance.
We first propose a full-stage knowledge distillation method with hard prompt sampling and hard mask weighting strategy to distill a lightweight student model.
We also adapt the post-training quantization to the promptable segmentation task and further reduce the computational cost.
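Knowledge distillation of the kind TinySAM describes trains a small student to match a teacher's softened outputs; a minimal NumPy sketch of a temperature-scaled distillation loss (function names, logits, and the temperature value are illustrative, not TinySAM's actual loss):

```python
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Numerically stable softmax with temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      T: float = 2.0) -> float:
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional for distillation losses."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.8, 0.6, -0.9]])
loss = distillation_loss(student, teacher, T=2.0)
# The loss is zero only when student and teacher logits induce the same distribution.
```

A higher temperature flattens both distributions, so the student is pushed to reproduce the teacher's full ranking over classes rather than only its top prediction.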
arXiv Detail & Related papers (2023-12-21T12:26:11Z) - MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method roots in the parameter-efficient fine-tuning strategy to update only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt
Encoder [101.28268762305916]
In this work, we replace the Segment Anything Model's prompt encoder with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical images and video benchmarks.
To inspect the knowledge within it and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
arXiv Detail & Related papers (2023-06-10T07:27:00Z) - Customized Segment Anything Model for Medical Image Segmentation [10.933449793055313]
We build upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation.
SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets.
Our trained SAMed model achieves semantic segmentation on medical images, which is on par with the state-of-the-art methods.
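SAMed's LoRA strategy keeps the pretrained weight matrix frozen and learns only a low-rank update. A minimal NumPy sketch of the idea (dimensions, initialization scale, and names are illustrative, not SAMed's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # feature dimension and LoRA rank, with r << d

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + B @ A; only A and B would receive gradients,
    # so the number of trainable parameters is 2*d*r instead of d*d.
    return x @ (W + B @ A).T

x = rng.standard_normal((1, d))
# With B zero-initialized, the LoRA branch contributes nothing at the start
# of fine-tuning, so the model begins exactly at the pretrained weights.
out = lora_forward(x)
```

Zero-initializing `B` is the standard LoRA trick: training starts from the frozen model's behavior and the low-rank update grows from there.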
arXiv Detail & Related papers (2023-04-26T19:05:34Z) - Medical SAM Adapter: Adapting Segment Anything Model for Medical Image
Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z) - Input Augmentation with SAM: Boosting Medical Image Segmentation with
Segmentation Foundation Model [36.015065439244495]
The Segment Anything Model (SAM) is a recently developed large model for general-purpose segmentation for computer vision tasks.
SAM was trained using 11 million images with over 1 billion masks and can produce segmentation results for a wide range of objects in natural scene images.
This paper shows that although SAM does not immediately give high-quality segmentation for medical image data, its generated masks, features, and stability scores are useful for building and training better medical image segmentation models.
arXiv Detail & Related papers (2023-04-22T07:11:53Z) - Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z) - SAM.MD: Zero-shot medical image segmentation capabilities of the Segment
Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.