MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation
- URL: http://arxiv.org/abs/2309.08842v1
- Date: Sat, 16 Sep 2023 02:41:53 GMT
- Title: MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image
Segmentation
- Authors: Cheng Chen, Juzheng Miao, Dufan Wu, Zhiling Yan, Sekeun Kim, Jiang Hu,
Aoxiao Zhong, Zhengliang Liu, Lichao Sun, Xiang Li, Tianming Liu, Pheng-Ann
Heng, Quanzheng Li
- Abstract summary: We introduce a modality-agnostic SAM adaptation framework, named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
- Score: 58.53672866662472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Segment Anything Model (SAM), a foundation model for general image
segmentation, has demonstrated impressive zero-shot performance across numerous
natural image segmentation tasks. However, SAM's performance significantly
declines when applied to medical images, primarily due to the substantial
disparity between natural and medical image domains. To effectively adapt SAM
to medical images, it is important to incorporate critical third-dimensional
information, i.e., volumetric or temporal knowledge, during fine-tuning.
Simultaneously, we aim to harness SAM's pre-trained weights within its original
2D backbone to the fullest extent. In this paper, we introduce a
modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable
to various volumetric and video medical data. Our method is rooted in a
parameter-efficient fine-tuning strategy that updates only a small portion of
weight increments while preserving the majority of SAM's pre-trained weights.
By injecting a series of 3D adapters into the transformer blocks of the image
encoder, our method enables the pre-trained 2D backbone to extract
third-dimensional information from input data. The effectiveness of our method
has been comprehensively evaluated on four medical image segmentation tasks, by
using 10 public datasets across CT, MRI, and surgical video data. Remarkably,
without using any prompt, our method consistently outperforms various
state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in
Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical
scene segmentation, respectively. Our model also demonstrates strong
generalization, and excels in challenging tumor segmentation when prompts are
used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
Related papers
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8% over a recent medical SAM adapter in the literature.
arXiv Detail & Related papers (2024-10-05T14:36:43Z)
- SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model that incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
arXiv Detail & Related papers (2024-08-19T11:01:00Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- SAM-Med3D: Towards General-purpose Segmentation Models for Volumetric Medical Images [35.83393121891959]
We introduce SAM-Med3D for general-purpose segmentation on volumetric medical images.
SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities.
Our approach demonstrates that substantial medical resources can be utilized to develop a general-purpose medical AI.
arXiv Detail & Related papers (2023-10-23T17:57:36Z)
- SAM3D: Segment Anything Model in Volumetric Medical Images [11.764867415789901]
We introduce SAM3D, an innovative adaptation tailored for 3D volumetric medical image analysis.
Unlike current SAM-based methods that segment volumetric data by converting the volume into separate 2D slices for individual analysis, our SAM3D model processes the entire 3D volume in a unified approach.
arXiv Detail & Related papers (2023-09-07T06:05:28Z)
- AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation [49.59991322513561]
We propose an adaptive modification of Segment-Anything (SAM) that can adjust to new datasets quickly and efficiently.
AdaptiveSAM uses free-form text as a prompt and can segment the object of interest given just its label name.
Our experiments show that AdaptiveSAM outperforms current state-of-the-art methods on various medical imaging datasets.
arXiv Detail & Related papers (2023-08-07T17:12:54Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)