SAM-Med3D
- URL: http://arxiv.org/abs/2310.15161v2
- Date: Sun, 29 Oct 2023 15:45:38 GMT
- Title: SAM-Med3D
- Authors: Haoyu Wang, Sizheng Guo, Jin Ye, Zhongying Deng, Junlong Cheng,
Tianbin Li, Jianpin Chen, Yanzhou Su, Ziyan Huang, Yiqing Shen, Bin Fu,
Shaoting Zhang, Junjun He, Yu Qiao
- Abstract summary: We introduce SAM-Med3D, the most comprehensive study to modify SAM for 3D medical images.
We train SAM-Med3D with over 131K 3D masks and 247 categories.
Compared with SAM, our approach shows markedly improved efficiency and broad segmentation capabilities for 3D volumetric medical images.
- Score: 36.6362248184995
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Although the Segment Anything Model (SAM) has demonstrated impressive
performance in 2D natural image segmentation, its application to 3D volumetric
medical images reveals significant shortcomings, namely suboptimal performance
and unstable prediction, necessitating an excessive number of prompt points to
attain the desired outcomes. These issues can hardly be addressed by
fine-tuning SAM on medical data because the original 2D structure of SAM
neglects 3D spatial information. In this paper, we introduce SAM-Med3D, the
most comprehensive study to modify SAM for 3D medical images. Our approach is
comprehensive in two primary aspects: first, we reformulate SAM into a
thorough 3D architecture trained on a carefully processed large-scale
volumetric medical dataset; and second, we provide a comprehensive
evaluation of its performance. Specifically, we
train SAM-Med3D with over 131K 3D masks and 247 categories. Our SAM-Med3D
excels at capturing 3D spatial information, exhibiting competitive performance
with significantly fewer prompt points than the top-performing fine-tuned SAM
in the medical domain. We then evaluate its capabilities across 15 datasets and
analyze it from multiple perspectives, including anatomical structures,
modalities, targets, and generalization abilities. Compared with SAM, our
approach shows markedly improved efficiency and broad segmentation
capabilities for 3D volumetric medical images. Our code is released at
https://github.com/uni-medical/SAM-Med3D.
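The repository above contains the authors' implementation. As a hedged illustration of the core 2D-to-3D reformulation the abstract describes, the sketch below swaps SAM's 2D patch embedding for a volumetric one, so the encoder tokenizes sub-volumes instead of image patches. The 16^3 patch size, single input channel, and 384-dim embedding are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (not the authors' code) of reformulating a 2D patch
# embedding into a volumetric one so the encoder sees 3D context directly.
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Volumetric analogue of SAM's 2D patch embedding."""
    def __init__(self, patch_size=16, in_chans=1, embed_dim=384):
        super().__init__()
        # Conv3d replaces SAM's Conv2d: each 16x16x16 sub-volume
        # becomes one token, preserving 3D spatial structure.
        self.proj = nn.Conv3d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):           # x: (B, C, D, H, W)
        x = self.proj(x)            # (B, E, D/16, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, N, E) token sequence

# Example: a 128^3 CT crop with one channel yields 8^3 = 512 tokens.
tokens = PatchEmbed3D()(torch.randn(1, 1, 128, 128, 128))
print(tokens.shape)  # torch.Size([1, 512, 384])
```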
Related papers
- TAGS: 3D Tumor-Adaptive Guidance for SAM [4.073510647434655]
We propose an adaptation framework called TAGS: Tumor-Adaptive Guidance for SAM.
It unlocks 2D foundation models for 3D medical tasks through multi-prompt fusion.
Our model surpasses state-of-the-art medical image segmentation models.
arXiv Detail & Related papers (2025-05-21T04:02:17Z)
- Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical Image Segmentation [8.78725593323412]
Few-shot Adaptation of Training-frEe SAM (FATE-SAM) is a novel method designed to adapt the advanced Segment Anything Model 2 (SAM2) for 3D medical image segmentation.
FATE-SAM reassembles pre-trained modules of SAM2 to enable few-shot adaptation, leveraging a small number of support examples.
We evaluate FATE-SAM on multiple medical imaging datasets and compare it with supervised learning methods, zero-shot SAM approaches, and fine-tuned medical SAM methods.
arXiv Detail & Related papers (2025-01-15T20:44:21Z)
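A minimal sketch of the general few-shot idea behind FATE-SAM: embed the query volume, retrieve the nearest support example, and reuse its mask as guidance with no gradient updates. The histogram embedding and function names are stand-ins I introduce for illustration; the actual method reassembles SAM2's pre-trained modules.

```python
# Hedged sketch of training-free few-shot retrieval: find the support
# volume whose embedding is closest to the query and reuse its mask.
import numpy as np

def embed(volume: np.ndarray) -> np.ndarray:
    # Placeholder embedding: normalized intensity histogram. A real
    # system would use SAM2's pre-trained image encoder instead.
    hist, _ = np.histogram(volume, bins=64, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def retrieve_support_mask(query, support_volumes, support_masks):
    """Return the mask of the support volume most similar to the query."""
    q = embed(query)
    dists = [np.linalg.norm(q - embed(v)) for v in support_volumes]
    return support_masks[int(np.argmin(dists))]

rng = np.random.default_rng(0)
supports = [rng.random((32, 32, 32)) for _ in range(3)]
masks = [rng.random((32, 32, 32)) > 0.5 for _ in range(3)]
prompt_mask = retrieve_support_mask(rng.random((32, 32, 32)), supports, masks)
print(prompt_mask.shape)  # (32, 32, 32)
```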
- Interactive 3D Medical Image Segmentation with SAM 2 [17.523874868612577]
We explore the zero-shot capabilities of SAM 2, the next-generation Meta SAM model trained on videos, for 3D medical image segmentation.
By treating the sequential 2D slices of a 3D image as video frames, SAM 2 can automatically propagate annotations from a single frame to the entire 3D volume (see the sketch below).
arXiv Detail & Related papers (2024-08-05T16:58:56Z)
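A hedged sketch of the slices-as-video-frames workflow: a single annotated slice is propagated forward and then backward through the volume. The `propagate` stub below is a placeholder I substitute for SAM 2's actual video predictor; it merely thresholds each slice inside the previous mask's bounding box.

```python
# Illustrative slice-to-slice propagation, with a stub standing in for
# one SAM 2 video-propagation step.
import numpy as np

def propagate(slice_2d, prev_mask):
    """Stand-in for one SAM 2 step: segment the current slice using the
    previous slice's mask as spatial guidance."""
    ys, xs = np.nonzero(prev_mask)
    if len(ys) == 0:
        return np.zeros_like(prev_mask)
    box = np.zeros_like(prev_mask)
    box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return (slice_2d > slice_2d.mean()) & box

def segment_volume(volume, seed_slice_idx, seed_mask):
    """Propagate one annotated slice through the whole volume, forward
    then backward, mirroring video-style propagation."""
    masks = [None] * volume.shape[0]
    masks[seed_slice_idx] = seed_mask
    for i in range(seed_slice_idx + 1, volume.shape[0]):   # forward pass
        masks[i] = propagate(volume[i], masks[i - 1])
    for i in range(seed_slice_idx - 1, -1, -1):            # backward pass
        masks[i] = propagate(volume[i], masks[i + 1])
    return np.stack(masks)

vol = np.random.rand(16, 64, 64)
seed = np.zeros((64, 64), dtype=bool)
seed[24:40, 24:40] = True
print(segment_volume(vol, 8, seed).shape)  # (16, 64, 64)
```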
- SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation [36.95030121663565]
Supervised fine-tuning (SFT) is an effective way to adapt foundation models to specific downstream tasks.
We propose SAM-Med3D-MoE, a novel framework that seamlessly integrates task-specific finetuned models with the foundational model.
Our experiments demonstrate the efficacy of SAM-Med3D-MoE, with an average Dice performance increase from 53 to 56.4 on 15 specific classes.
arXiv Detail & Related papers (2024-07-06T03:03:45Z)
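A minimal sketch of the mixture-of-experts idea that SAM-Med3D-MoE names: a lightweight gate weights the frozen foundation model against task-specific fine-tuned experts per input. The expert count, feature dimensions, and pooled routing features are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: gate over a foundation model plus fine-tuned experts,
# blending their mask predictions so neither is "forgotten".
import torch
import torch.nn as nn

class SegMoE(nn.Module):
    def __init__(self, experts, feat_dim=256):
        super().__init__()
        self.experts = nn.ModuleList(experts)  # [foundation, task experts...]
        self.gate = nn.Linear(feat_dim, len(experts))

    def forward(self, feats, x):
        # feats: (B, feat_dim) pooled image features used only for routing.
        weights = torch.softmax(self.gate(feats), dim=-1)         # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, ...)
        weights = weights.view(*weights.shape, *([1] * (outs.dim() - 2)))
        return (weights * outs).sum(dim=1)  # gated blend of expert masks

# Toy usage: two "experts" that each map features to a 1-channel mask.
experts = [nn.Conv3d(8, 1, 1) for _ in range(2)]
moe = SegMoE(experts, feat_dim=256)
x = torch.randn(2, 8, 16, 16, 16)   # shared decoder features
feats = torch.randn(2, 256)         # pooled routing features
print(moe(feats, x).shape)          # torch.Size([2, 1, 16, 16, 16])
```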
- M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models [49.5030774873328]
Previous research has primarily focused on 2D medical images, leaving 3D images under-explored, despite their richer spatial information.
We present a large-scale 3D multi-modal medical dataset, M3D-Data, comprising 120K image-text pairs and 662K instruction-response pairs.
We also introduce a new 3D multi-modal medical benchmark, M3D-Bench, which facilitates automatic evaluation across eight tasks.
arXiv Detail & Related papers (2024-03-31T06:55:12Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
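A hedged sketch of the mask-enhanced idea behind M-SAM's adapter: positional cues from a coarse segmentation mask are lifted into feature space and fused residually with the image features so the decoder can refine the lesion boundary. The exact fusion scheme below is my assumption, not the published MEA design.

```python
# Illustrative mask-aware fusion: project the coarse mask into feature
# space and add it back through a small residual branch.
import torch
import torch.nn as nn

class MaskEnhancedAdapter(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mask_proj = nn.Conv3d(1, feat_dim, kernel_size=1)
        self.fuse = nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1)

    def forward(self, feats, coarse_mask):
        # feats: (B, C, D, H, W); coarse_mask: (B, 1, D, H, W) in [0, 1]
        pos = self.mask_proj(coarse_mask)       # lift mask into feature space
        return feats + self.fuse(feats + pos)   # residual mask-aware fusion

feats = torch.randn(1, 256, 8, 8, 8)
coarse = torch.rand(1, 1, 8, 8, 8)
print(MaskEnhancedAdapter()(feats, coarse).shape)  # torch.Size([1, 256, 8, 8, 8])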
- One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts [62.55349777609194]
We aim to build a model that can Segment Anything in radiology scans driven by Text prompts, termed SAT.
We build the largest and most comprehensive segmentation dataset for training by collecting over 22K 3D medical image scans.
We have trained SAT-Nano (110M parameters) and SAT-Pro (447M parameters), demonstrating performance comparable to 72 specialist nnU-Nets trained per dataset/subset.
arXiv Detail & Related papers (2023-12-28T18:16:00Z)
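A hedged sketch of text-prompted segmentation as SAT describes it at a high level: an anatomical term is embedded and turned into a per-voxel scoring kernel for the mask head. The hashed bag-of-words "text encoder" below is a self-contained stand-in of my own, not SAT's actual text encoder.

```python
# Toy text-conditioned segmentation head: text embedding -> kernel ->
# per-voxel logits via a dot product with the image features.
import torch
import torch.nn as nn

EMBED_DIM = 128

def encode_text(prompt: str) -> torch.Tensor:
    """Toy deterministic text embedding via a hashed bag of words."""
    vec = torch.zeros(EMBED_DIM)
    for tok in prompt.lower().split():
        vec[hash(tok) % EMBED_DIM] += 1.0
    return vec / max(vec.norm(), 1e-6)

class TextConditionedHead(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.to_kernel = nn.Linear(EMBED_DIM, feat_dim)

    def forward(self, feats, text_emb):
        # feats: (B, C, D, H, W); score each voxel feature against a
        # text-derived kernel to get class-membership logits.
        k = self.to_kernel(text_emb)                    # (C,)
        return torch.einsum("bcdhw,c->bdhw", feats, k)  # (B, D, H, W)

feats = torch.randn(1, 128, 8, 8, 8)
logits = TextConditionedHead()(feats, encode_text("left kidney"))
print(logits.shape)  # torch.Size([1, 8, 8, 8])
```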
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
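A minimal sketch of a 3D adapter of the kind MA-SAM injects: a small bottleneck module with a depthwise Conv3d sits inside an otherwise frozen transformer block, so only the adapter's few parameters are trained while the 2D backbone gains cross-slice context. Dimensions and the depthwise-conv choice are illustrative assumptions.

```python
# Illustrative bottleneck 3D adapter for a frozen transformer block:
# down-project tokens, mix across depth with a depthwise Conv3d, up-project.
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    def __init__(self, dim=384, bottleneck=48):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        # Depthwise 3D conv mixes information across the depth axis.
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, kernel_size=3,
                                padding=1, groups=bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, tokens, grid):     # tokens: (B, N, dim), N = d*h*w
        d, h, w = grid
        x = self.down(tokens)                          # (B, N, bottleneck)
        x = x.transpose(1, 2).reshape(x.size(0), -1, d, h, w)
        x = self.conv3d(x)                             # 3D mixing
        x = x.flatten(2).transpose(1, 2)               # back to tokens
        return tokens + self.up(x)                     # residual update

tokens = torch.randn(2, 4 * 8 * 8, 384)   # tokens from a 4x8x8 patch grid
print(Adapter3D()(tokens, (4, 8, 8)).shape)  # torch.Size([2, 256, 384])
```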
- SAM3D: Segment Anything Model in Volumetric Medical Images [11.764867415789901]
We introduce SAM3D, an innovative adaptation tailored for 3D volumetric medical image analysis.
Unlike current SAM-based methods that segment volumetric data by converting the volume into separate 2D slices for individual analysis, our SAM3D model processes the entire 3D volume image in a unified approach.
arXiv Detail & Related papers (2023-09-07T06:05:28Z)
- MedLSAM: Localize and Segment Anything Model for 3D CT Images [13.320012515543116]
We introduce MedLAM, a 3D medical foundation localization model that accurately identifies any anatomical part within the body using only a few template scans.
We developed MedLSAM by integrating MedLAM with the Segment Anything Model (SAM).
Our findings are twofold: 1) MedLAM can directly localize anatomical structures using just a few template scans, achieving performance comparable to fully supervised models; 2) MedLSAM closely matches the performance of SAM and its specialized medical adaptations with manual prompts, while minimizing the need for extensive point annotations across the entire dataset.
arXiv Detail & Related papers (2023-06-26T15:09:02Z)
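A hedged sketch of MedLSAM's localize-then-segment pipeline: a localization stage proposes a 3D bounding box for the target structure, which then serves as the box prompt for a SAM-style segmenter. Both stages below are stubs standing in for MedLAM and SAM respectively.

```python
# Two-stage pipeline sketch: localize a box, then segment inside it.
import numpy as np

def localize(volume: np.ndarray) -> tuple:
    """Stub localizer (MedLAM stand-in): box around the brightest region."""
    zs, ys, xs = np.nonzero(volume > np.percentile(volume, 99))
    return ((zs.min(), ys.min(), xs.min()),
            (zs.max() + 1, ys.max() + 1, xs.max() + 1))

def segment_in_box(volume: np.ndarray, box: tuple) -> np.ndarray:
    """Stub segmenter (SAM stand-in): threshold inside the prompt box."""
    (z0, y0, x0), (z1, y1, x1) = box
    mask = np.zeros(volume.shape, dtype=bool)
    crop = volume[z0:z1, y0:y1, x0:x1]
    mask[z0:z1, y0:y1, x0:x1] = crop > crop.mean()
    return mask

vol = np.random.rand(32, 64, 64)
mask = segment_in_box(vol, localize(vol))
print(mask.sum(), mask.shape)  # nonzero voxel count, (32, 64, 64)
```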
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the Segment Anything Model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.