SAM-Med2D
- URL: http://arxiv.org/abs/2308.16184v1
- Date: Wed, 30 Aug 2023 17:59:02 GMT
- Title: SAM-Med2D
- Authors: Junlong Cheng, Jin Ye, Zhongying Deng, Jianpin Chen, Tianbin Li, Haoyu
Wang, Yanzhou Su, Ziyan Huang, Jilong Chen, Lei Jiang, Hui Sun, Junjun He,
Shaoting Zhang, Min Zhu, Yu Qiao
- Abstract summary: We introduce SAM-Med2D, the most comprehensive study of applying SAM to medical 2D images.
We first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets.
We fine-tune the encoder and decoder of the original SAM to obtain a well-performing SAM-Med2D.
- Score: 34.82072231983896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) represents a state-of-the-art research
advancement in natural image segmentation, achieving impressive results with
input prompts such as points and bounding boxes. However, our evaluation and
recent research indicate that directly applying the pretrained SAM to medical
image segmentation does not yield satisfactory performance. This limitation
primarily arises from the significant domain gap between natural images and
medical images. To bridge this gap, we introduce SAM-Med2D, the most
comprehensive study of applying SAM to medical 2D images. Specifically, we first collect
and curate approximately 4.6M images and 19.7M masks from public and private
datasets, constructing a large-scale medical image segmentation dataset
encompassing various modalities and objects. Then, we comprehensively fine-tune
SAM on this dataset and turn it into SAM-Med2D. Unlike previous methods that
adopt only bounding-box or point prompts as the interactive segmentation
approach, we adapt SAM to medical image segmentation through more comprehensive
prompts involving bounding boxes, points, and masks. We additionally fine-tune
the encoder and decoder of the original SAM to obtain a well-performing
SAM-Med2D, yielding the most comprehensive fine-tuning strategy to date. Finally, we
conducted a comprehensive evaluation and analysis to investigate the
performance of SAM-Med2D in medical image segmentation across various
modalities, anatomical structures, and organs. Concurrently, we validated the
generalization capability of SAM-Med2D on 9 datasets from the MICCAI 2023
challenge. Overall, our approach demonstrated significantly superior
performance and generalization capability compared to SAM.
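As a rough illustration of the recipe described above (full fine-tuning of both encoder and decoder while mixing point, box, and mask prompts), here is a minimal PyTorch-style sketch. The module names follow the layout of the public segment-anything codebase; the prompt-sampling scheme and loss choice are illustrative assumptions, not the authors' exact training code.

```python
# Minimal sketch of prompt-conditioned fine-tuning, loosely following the
# module layout of the public segment-anything codebase. Prompt sampling,
# loss choice, and the data pipeline are illustrative assumptions.
import random
import torch
import torch.nn.functional as F

def sample_prompts(gt_mask):
    """Randomly pick one prompt type per step: point, box, or coarse mask."""
    ys, xs = torch.nonzero(gt_mask, as_tuple=True)
    kind = random.choice(["point", "box", "mask"])
    if kind == "point":
        i = random.randrange(len(ys))
        points = torch.tensor([[[xs[i], ys[i]]]], dtype=torch.float)
        labels = torch.ones(1, 1, dtype=torch.int)   # 1 = foreground click
        return dict(points=(points, labels), boxes=None, masks=None)
    if kind == "box":
        box = torch.tensor([[xs.min(), ys.min(), xs.max(), ys.max()]],
                           dtype=torch.float)
        return dict(points=None, boxes=box, masks=None)
    # low-res mask prompt (SAM's prompt encoder expects 256x256 for 1024 input)
    m = F.interpolate(gt_mask[None, None].float(), size=(256, 256))
    return dict(points=None, boxes=None, masks=m)

def train_step(sam, optimizer, image, gt_mask):
    """Update BOTH the image encoder and mask decoder (full fine-tuning).

    image: preprocessed (1, 3, 1024, 1024) tensor; gt_mask: (H, W) binary.
    """
    embeddings = sam.image_encoder(image)             # (B, C, H', W')
    sparse, dense = sam.prompt_encoder(**sample_prompts(gt_mask))
    low_res_logits, _ = sam.mask_decoder(
        image_embeddings=embeddings,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    logits = F.interpolate(low_res_logits, size=gt_mask.shape[-2:])
    loss = F.binary_cross_entropy_with_logits(
        logits, gt_mask[None, None].float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```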
Related papers
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8%, compared to a recent medical SAM adapter in the literature.
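The abstract does not detail the dual-branch design, but a common pattern for adapting a frozen SAM encoder is to add a small trainable convolutional branch and fuse its output back into the ViT features. The sketch below is one plausible reading under that assumption, not DB-SAM's actual architecture.

```python
# Hypothetical dual-branch fusion: a frozen ViT feature branch is complemented
# by a trainable convolutional branch; a learned gate controls the mix.
import torch
import torch.nn as nn

class DualBranchNeck(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        self.gate = nn.Parameter(torch.zeros(1))  # starts at 0: pure ViT path

    def forward(self, vit_feats):
        # vit_feats: (B, C, H, W) from the frozen SAM image encoder
        return vit_feats + torch.tanh(self.gate) * self.conv_branch(vit_feats)
```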
arXiv Detail & Related papers (2024-10-05T14:36:43Z)
- Unleashing the Potential of SAM2 for Biomedical Images and Videos: A Survey [8.216028136706948]
Segment Anything Model (SAM) signifies a noteworthy expansion of the prompt-driven paradigm into the domain of image segmentation.
The recent introduction of SAM2 effectively extends the original SAM to a streaming fashion and demonstrates strong performance in video segmentation.
This paper presents an overview of recent efforts in applying and adapting SAM2 to biomedical images and videos.
arXiv Detail & Related papers (2024-08-23T07:51:10Z)
- SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images [40.4422523499489]
Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks.
We propose SAM-UNet, a new foundation model that incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions.
We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images.
arXiv Detail & Related papers (2024-08-19T11:01:00Z)
- Is SAM 2 Better than SAM in Medical Image Segmentation? [0.6144680854063939]
The Segment Anything Model (SAM) has demonstrated impressive performance in zero-shot promptable segmentation on natural images.
The recently released Segment Anything Model 2 (SAM 2) claims to outperform SAM on images and extends the model's capabilities to video segmentation.
We conducted extensive studies using multiple datasets to compare the performance of SAM and SAM 2.
arXiv Detail & Related papers (2024-08-08T04:34:29Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
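A common way to realize such 3D adapters is a residual bottleneck with a convolution across the slice dimension, trained while the 2D backbone stays frozen. The sketch below follows that pattern; the bottleneck width and the depth-wise 3D convolution are assumptions, not MA-SAM's exact design.

```python
# Hypothetical bottleneck 3D adapter: the 2D ViT blocks stay frozen and only
# these small modules are trained (parameter-efficient fine-tuning).
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    def __init__(self, dim, depth, bottleneck=64):
        super().__init__()
        self.depth = depth                     # slices per 3D volume
        self.down = nn.Linear(dim, bottleneck)
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, kernel_size=3,
                                padding=1, groups=bottleneck)  # depth-wise
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)         # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, tokens, hw):
        # tokens: (B*D, N, C) with D consecutive slices flattened into batch
        h, w = hw
        x = self.down(tokens)
        bd, n, c = x.shape
        b = bd // self.depth
        # reshape the token grid to (B, C, D, H, W) so the convolution
        # mixes information across neighboring slices
        x = x.view(b, self.depth, h, w, c).permute(0, 4, 1, 2, 3)
        x = self.conv3d(x)
        x = x.permute(0, 2, 3, 4, 1).reshape(bd, n, c)
        return tokens + self.up(x)             # residual: safe to insert
```

In use, such a module would sit after the attention or MLP sub-layer of each frozen transformer block, with only the adapter parameters passed to the optimizer.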
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder [101.28268762305916]
In this work, we replace SAM's prompt-based conditioning with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical image and video benchmarks.
To inspect the knowledge within it and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network.
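The overloading idea, deriving prompt embeddings from the image itself so no user clicks or boxes are needed, might look like the following sketch. The tiny CNN and the output shapes (256-dim sparse tokens plus a 64x64 dense map, matching SAM's prompt-embedding interface for a 1024x1024 input) are assumptions based on the public SAM code, not AutoSAM's actual network.

```python
# Hypothetical image-conditioned prompt network: its outputs stand in for
# SAM's sparse/dense prompt embeddings, so no clicks or boxes are required.
import torch
import torch.nn as nn

class AutoPromptEncoder(nn.Module):
    def __init__(self, embed_dim=256, num_tokens=2):
        super().__init__()
        self.backbone = nn.Sequential(          # tiny CNN over the raw image
            nn.Conv2d(3, 64, 4, stride=4), nn.GELU(),
            nn.Conv2d(64, 128, 4, stride=4), nn.GELU(),
            nn.Conv2d(128, embed_dim, 1),
        )
        self.to_sparse = nn.Linear(embed_dim, num_tokens * embed_dim)
        self.num_tokens, self.embed_dim = num_tokens, embed_dim

    def forward(self, image):
        feats = self.backbone(image)            # (B, C, 64, 64) for 1024 input
        dense = feats                           # dense prompt embedding map
        pooled = feats.mean(dim=(2, 3))         # (B, C) global summary
        sparse = self.to_sparse(pooled).view(
            -1, self.num_tokens, self.embed_dim)
        return sparse, dense
```

The (sparse, dense) pair would then be fed to SAM's mask decoder in place of the frozen prompt encoder's outputs, and only this network (and optionally the decoder) is trained.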
arXiv Detail & Related papers (2023-06-10T07:27:00Z)
- Customized Segment Anything Model for Medical Image Segmentation [10.933449793055313]
We build upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation.
SAMed applies the low-rank adaptation (LoRA) fine-tuning strategy to the SAM image encoder and fine-tunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets.
Our trained SAMed model achieves semantic segmentation on medical images on par with state-of-the-art methods.
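LoRA freezes a pretrained weight matrix and learns only a low-rank additive update, which is what keeps the trainable parameter count small in SAMed-style adaptation. A minimal sketch follows; wrapping, say, the qkv projections of SAM's image-encoder attention blocks is a typical wiring, but the exact placement here is illustrative.

```python
# Minimal LoRA wrapper: the frozen linear layer W is augmented with a
# trainable low-rank update B @ A, adding only r*(d_in + d_out) parameters.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=4, alpha=4.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        # update starts at zero, so training begins from the pretrained model
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# e.g. wrap the qkv projection of each attention block in SAM's image encoder:
# blk.attn.qkv = LoRALinear(blk.attn.qkv, r=4)
```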
arXiv Detail & Related papers (2023-04-26T19:05:34Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)