Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation
- URL: http://arxiv.org/abs/2304.12620v7
- Date: Fri, 29 Dec 2023 03:40:59 GMT
- Title: Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation
- Authors: Junde Wu, Wei Ji, Yuanpei Liu, Huazhu Fu, Min Xu, Yanwu Xu, Yueming Jin
- Abstract summary: The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
- Score: 51.770805270588625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) has recently gained popularity in the field
of image segmentation due to its impressive capabilities in various
segmentation tasks and its prompt-based interface. However, recent studies and
individual experiments have shown that SAM underperforms in medical image
segmentation due to its lack of medical-specific knowledge. This raises the
question of how to enhance SAM's segmentation capability for medical images. In
this paper, instead of fine-tuning the SAM model, we propose the Medical SAM
Adapter (Med-SA), which incorporates domain-specific medical knowledge into the
segmentation model using a lightweight yet effective adaptation technique. In Med-SA,
we propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical
images and Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned
adaptation. We conduct comprehensive evaluation experiments on 17 medical image
segmentation tasks across various image modalities. Med-SA outperforms several
state-of-the-art (SOTA) medical image segmentation methods, while updating only
2% of the parameters. Our code is released at
https://github.com/KidsWithTokens/Medical-SAM-Adapter.
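The Space-Depth Transpose idea can be illustrated with a short sketch: tokens from a frozen 2D encoder, arranged as (B*D, N, C) for D slices of a volume, are transposed so the same kind of attention runs across the depth axis. This is a minimal interpretation under assumed shapes; the class name and layout are ours, not the released code.

```python
import torch
import torch.nn as nn

class SpaceDepthTranspose(nn.Module):
    """Illustrative SD-Trans block: reuse a 2D attention module along depth.

    Tokens arrive as (B * D, N, C): D slices of a volume, N spatial tokens
    each. Transposing space and depth yields (B * N, D, C), so the same
    style of attention can model inter-slice (3D) correlations.
    """
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        bd, n, c = x.shape
        b = bd // depth
        # (B*D, N, C) -> (B, D, N, C) -> (B, N, D, C) -> (B*N, D, C)
        x = x.view(b, depth, n, c).transpose(1, 2).reshape(b * n, depth, c)
        x, _ = self.attn(x, x, x)          # attention over the depth axis
        # undo the transpose: back to (B*D, N, C)
        x = x.view(b, n, depth, c).transpose(1, 2).reshape(bd, n, c)
        return x

# toy usage: batch of 2 volumes, 16 slices, 196 tokens, dim 768
tokens = torch.randn(2 * 16, 196, 768)
out = SpaceDepthTranspose(768)(tokens, depth=16)
print(out.shape)  # torch.Size([32, 196, 768])
```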
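HyP-Adpt's prompt-conditioned adaptation can likewise be sketched. The paper generates adapter weights from the prompt embedding; the FiLM-style scale-and-shift below is a simplified stand-in for that hypernetwork, with all names and dimensions ours.

```python
import torch
import torch.nn as nn

class HyperPromptAdapter(nn.Module):
    """Illustrative prompt-conditioned adapter in the spirit of HyP-Adpt.

    A small hypernetwork maps the prompt embedding to per-channel scale and
    shift, which modulate a bottleneck adapter. This FiLM-style conditioning
    is a simplification of the paper's weight-generation scheme.
    """
    def __init__(self, dim: int, prompt_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.hyper = nn.Linear(prompt_dim, 2 * bottleneck)  # -> (scale, shift)

    def forward(self, x: torch.Tensor, prompt: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.down(x))                    # (B, N, bottleneck)
        scale, shift = self.hyper(prompt).chunk(2, -1)  # (B, bottleneck) each
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return x + self.up(h)                           # residual adapter

x = torch.randn(2, 196, 768)       # frozen-backbone token features
prompt = torch.randn(2, 256)       # SAM-style prompt embedding
print(HyperPromptAdapter(768, 256)(x, prompt).shape)  # (2, 196, 768)
```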
Related papers
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8% over a recent medical SAM adapter in the literature.
arXiv: 2024-10-05
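The summary above only names a dual-branch design. As a rough, hypothetical sketch of what fusing a frozen transformer branch with a trainable convolutional branch can look like (not DB-SAM's actual architecture):

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Generic dual-branch fusion: frozen transformer features plus a
    trainable convolutional branch, merged by a 1x1 conv. Illustrates the
    dual-branch idea only, not the DB-SAM architecture itself."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, channels, 7, stride=16, padding=3),  # match ViT stride
            nn.GELU(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, image: torch.Tensor, vit_feat: torch.Tensor) -> torch.Tensor:
        cnn_feat = self.cnn(image)                     # (B, C, H/16, W/16)
        return self.fuse(torch.cat([vit_feat, cnn_feat], dim=1))

image = torch.randn(1, 3, 224, 224)
vit_feat = torch.randn(1, 256, 14, 14)   # frozen SAM encoder features
print(DualBranchFusion()(image, vit_feat).shape)  # (1, 256, 14, 14)
```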
- FS-MedSAM2: Exploring the Potential of SAM2 for Few-Shot Medical Image Segmentation without Fine-tuning [6.208470070755133]
We introduce FS-MedSAM2, a framework that enables SAM2 to achieve superior medical image segmentation in a few-shot setting.
Our framework outperforms the current state-of-the-art methods on two publicly available medical image datasets.
arXiv: 2024-09-06
- Medical SAM 2: Segment medical images as video via Segment Anything Model 2 [4.911843298581903]
We introduce Medical SAM 2 (MedSAM-2), an advanced segmentation model that addresses both 2D and 3D medical image segmentation tasks.
By adopting the philosophy of taking medical images as videos, MedSAM-2 not only applies to 3D medical images but also unlocks a new One-prompt capability.
Our findings show that MedSAM-2 not only surpasses existing models in performance but also exhibits superior generalization across a range of medical image segmentation tasks.
arXiv: 2024-08-01
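The "medical images as videos" philosophy suggests a simple propagation scheme: prompt the first slice once and let each predicted mask prompt the next slice. The sketch below is our hedged reading of that idea; predict_mask is a hypothetical stand-in for a promptable segmenter, not the MedSAM-2 API.

```python
import numpy as np

def predict_mask(slice_2d: np.ndarray, prompt_mask: np.ndarray) -> np.ndarray:
    """Hypothetical promptable segmenter: stands in for a SAM-style model
    that accepts a mask prompt. Here it trivially thresholds near the prompt."""
    return ((slice_2d > slice_2d.mean()) & (prompt_mask > 0)).astype(np.uint8)

def segment_volume(volume: np.ndarray, seed_mask: np.ndarray) -> np.ndarray:
    """Treat a (D, H, W) volume as a video: a single prompt on slice 0 is
    propagated slice to slice, each prediction prompting the next."""
    masks = np.zeros_like(volume, dtype=np.uint8)
    masks[0] = predict_mask(volume[0], seed_mask)
    for d in range(1, volume.shape[0]):
        masks[d] = predict_mask(volume[d], masks[d - 1])  # temporal prompt
    return masks

volume = np.random.rand(16, 64, 64)
seed = np.ones((64, 64), dtype=np.uint8)   # one prompt for the whole volume
print(segment_volume(volume, seed).shape)  # (16, 64, 64)
```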
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv: 2024-06-03
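A hedged sketch of the test-time online-learning idea above: when an expert rectifies a prediction, take a gradient step on that correction so subsequent test samples benefit. The loss and single-step update here are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def online_update(model: nn.Module, image: torch.Tensor,
                  expert_mask: torch.Tensor, lr: float = 1e-4) -> None:
    """Test-time online learning sketch: one gradient step on an
    expert-rectified annotation, so the model adapts as it is used."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(image), expert_mask)
    opt.zero_grad()
    loss.backward()
    opt.step()

# toy stand-in segmenter: 1x1 conv producing a single-channel logit map
model = nn.Conv2d(1, 1, 1)
image = torch.randn(1, 1, 64, 64)
expert_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
online_update(model, image, expert_mask)
```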
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv: 2023-09-16
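The 3D-adapter idea can be sketched as a bottleneck whose middle step is a 3D convolution over reassembled volume tokens, so a frozen 2D backbone picks up inter-slice information. Shapes and names below are illustrative, not MA-SAM's implementation.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Illustrative 3D adapter: a bottleneck whose middle step is a 3D
    convolution over (depth, height, width), letting a frozen 2D backbone
    pick up inter-slice information. A sketch of the idea, not MA-SAM's code."""
    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, 3, padding=1)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor, depth: int, hw: int) -> torch.Tensor:
        bd, n, c = x.shape          # (B*D, H*W, C), n == hw * hw
        b = bd // depth
        h = self.down(x)            # (B*D, N, bottleneck)
        # tokens -> volume: (B, bottleneck, D, H, W)
        v = h.view(b, depth, hw, hw, -1).permute(0, 4, 1, 2, 3)
        v = torch.relu(self.conv3d(v))
        h = v.permute(0, 2, 3, 4, 1).reshape(bd, n, -1)
        return x + self.up(h)       # residual, backbone stays frozen

tokens = torch.randn(2 * 8, 14 * 14, 768)   # 8 slices per volume
print(Adapter3D(768)(tokens, depth=8, hw=14).shape)  # (16, 196, 768)
```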
- SAM-Med2D [34.82072231983896]
We introduce SAM-Med2D, the most comprehensive study to date of applying SAM to medical 2D images.
We first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets.
We fine-tune the encoder and decoder of the original SAM to obtain a well-performing SAM-Med2D.
arXiv: 2023-08-30
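Fine-tuning both the encoder and decoder is straightforward to express with PyTorch parameter groups; the learning rates below are illustrative, not the paper's settings, and ToySAM merely mirrors the submodule names exposed by the official SAM class.

```python
import torch
import torch.nn as nn

class ToySAM(nn.Module):
    """Stand-in exposing the submodule names of the official SAM class."""
    def __init__(self):
        super().__init__()
        self.image_encoder = nn.Linear(256, 256)
        self.mask_decoder = nn.Linear(256, 1)

# Fine-tune encoder and decoder together (as SAM-Med2D does), giving the
# large pre-trained encoder a smaller learning rate than the decoder.
sam = ToySAM()
optimizer = torch.optim.AdamW([
    {"params": sam.image_encoder.parameters(), "lr": 1e-5},
    {"params": sam.mask_decoder.parameters(), "lr": 1e-4},
], weight_decay=0.01)
print(len(optimizer.param_groups))  # 2
```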
- AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation [49.59991322513561]
We propose an adaptive modification of Segment-Anything (SAM) that can adjust to new datasets quickly and efficiently.
AdaptiveSAM uses free-form text as its prompt and can segment the object of interest given just the label name.
Our experiments show that AdaptiveSAM outperforms current state-of-the-art methods on various medical imaging datasets.
arXiv: 2023-08-07
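Text prompting can be sketched as embedding the label name and projecting it to the shape of SAM's sparse prompt tokens. The hash-based "tokenizer" below is a deliberate placeholder (a real system would use a CLIP-style text encoder), and none of this is AdaptiveSAM's actual design.

```python
import torch
import torch.nn as nn

class TextPromptEncoder(nn.Module):
    """Hypothetical sketch of prompting with a label name: embed the text
    and project it to the shape of SAM's sparse prompt tokens."""
    def __init__(self, prompt_dim: int = 256, vocab: int = 10000):
        super().__init__()
        self.vocab = vocab
        self.embed = nn.EmbeddingBag(vocab, 512)   # mean-pools word embeddings
        self.project = nn.Linear(512, prompt_dim)

    def forward(self, label: str) -> torch.Tensor:
        # placeholder tokenizer: hash each word into a fixed vocabulary
        ids = torch.tensor([[hash(w) % self.vocab
                             for w in label.lower().split()]])
        return self.project(self.embed(ids)).unsqueeze(1)  # (1, 1, prompt_dim)

prompt = TextPromptEncoder()("liver tumor")
print(prompt.shape)  # torch.Size([1, 1, 256]) -- one sparse prompt token
```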
- AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder [101.28268762305916]
In this work, we replace SAM's prompt encoder with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical images and video benchmarks.
To inspect the knowledge within it and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
arXiv: 2023-06-10
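AutoSAM's overloading idea, read loosely: a small network derives prompt embeddings from the image itself, so no clicks or boxes are needed. Layer sizes and the two-token output below are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AutoPromptNet(nn.Module):
    """Sketch of an image-conditioned prompt generator: a small CNN maps the
    input image to surrogate prompt embeddings, which would be fed to SAM's
    mask decoder in place of the hand-prompt encoder's output."""
    def __init__(self, prompt_dim: int = 256, tokens: int = 2):
        super().__init__()
        self.tokens = tokens
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, prompt_dim * tokens, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),   # global pool -> one vector per image
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        b = image.shape[0]
        return self.net(image).view(b, self.tokens, -1)  # (B, tokens, 256)

prompts = AutoPromptNet()(torch.randn(1, 3, 224, 224))
print(prompts.shape)  # torch.Size([1, 2, 256])
```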
- Customized Segment Anything Model for Medical Image Segmentation [10.933449793055313]
We build upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation.
SAMed applies a low-rank adaptation (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets.
Our trained SAMed model achieves semantic segmentation of medical images on par with state-of-the-art methods.
arXiv: 2023-04-26
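LoRA itself is well defined, so a minimal sketch is safe to give: the frozen pre-trained weight is augmented with a trainable low-rank update, and only the low-rank factors train. How SAMed wires this into SAM's encoder blocks is not shown here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: the frozen pre-trained weight W is augmented
    with a trainable low-rank update B @ A, so only r * (in + out)
    parameters are trained per wrapped layer."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():     # freeze the pre-trained layer
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 196, 768)).shape)  # torch.Size([2, 196, 768])
```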
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv: 2023-04-20
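For studies like this one that benchmark segmentation quality across many datasets, the Dice coefficient is the usual headline metric (an assumption here; the abstract does not name its metrics). A minimal implementation:

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor,
               eps: float = 1e-6) -> float:
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().item()
    return (2 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)

pred = torch.tensor([[1, 1, 0], [0, 1, 0]])
target = torch.tensor([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, target), 3))  # 2*2/(3+3) = 0.667
```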