AutoProSAM: Automated Prompting SAM for 3D Multi-Organ Segmentation
- URL: http://arxiv.org/abs/2308.14936v4
- Date: Sat, 23 Nov 2024 14:48:51 GMT
- Title: AutoProSAM: Automated Prompting SAM for 3D Multi-Organ Segmentation
- Authors: Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, Indrin Chetty, Dongxiao Zhu
- Abstract summary: Segment Anything Model (SAM) is one of the pioneering prompt-based foundation models for image segmentation.
Recent studies have indicated that SAM, originally designed for 2D natural images, performs suboptimally on 3D medical image segmentation tasks.
We present a novel technique termed AutoProSAM to overcome these challenges.
- Score: 11.149807995830255
- Abstract: Segment Anything Model (SAM) is one of the pioneering prompt-based foundation models for image segmentation and has been rapidly adopted for various medical imaging applications. However, in clinical settings, creating effective prompts is notably challenging and time-consuming, requiring the expertise of domain specialists such as physicians. This requirement significantly diminishes SAM's primary advantage, its interactive capability with end users, in medical applications. Moreover, recent studies have indicated that SAM, originally designed for 2D natural images, performs suboptimally on 3D medical image segmentation tasks. This subpar performance is attributed to the domain gaps between natural and medical images and the disparities in spatial arrangements between 2D and 3D images, particularly in multi-organ segmentation applications. To overcome these challenges, we present a novel technique termed AutoProSAM. This method automates 3D multi-organ CT-based segmentation by leveraging SAM's foundational model capabilities without relying on domain experts for prompts. The approach utilizes parameter-efficient adaptation techniques to adapt SAM for 3D medical imagery and incorporates an effective automatic prompt learning paradigm specific to this domain. By eliminating the need for manual prompts, it enhances SAM's capabilities for 3D medical image segmentation and achieves state-of-the-art (SOTA) performance in CT-based multi-organ segmentation tasks. The code is available at https://github.com/ChengyinLee/AutoProSAM_2024.
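The abstract names the two ingredients, parameter-efficient 3D adaptation and automatic prompt learning, without implementation detail. As a rough illustration of the automatic-prompt idea only, the sketch below learns per-organ prompt tokens directly from encoder features; the module name, channel sizes, and architecture are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class AutoPromptGenerator(nn.Module):
    """Hypothetical sketch: learn prompt embeddings directly from 3D image
    features so no human-drawn points or boxes are needed. Names and sizes
    are illustrative assumptions, not the paper's actual architecture."""

    def __init__(self, feat_dim: int = 256, num_organs: int = 13, prompt_dim: int = 256):
        super().__init__()
        # Lightweight 3D conv head over adapted SAM encoder features.
        self.head = nn.Sequential(
            nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv3d(feat_dim, num_organs, kernel_size=1),  # coarse per-organ logits
        )
        # One learnable query per organ, refined by pooled image evidence.
        self.organ_queries = nn.Parameter(torch.randn(num_organs, prompt_dim))
        self.proj = nn.Linear(feat_dim, prompt_dim)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, D, H, W) features from the adapted 3D image encoder.
        coarse = self.head(feats)                         # (B, K, D, H, W)
        attn = coarse.flatten(2).softmax(dim=-1)          # soft spatial attention per organ
        pooled = torch.einsum("bkn,bcn->bkc", attn, feats.flatten(2))
        prompts = self.organ_queries + self.proj(pooled)  # (B, K, prompt_dim)
        return prompts, coarse                            # prompt tokens + coarse masks


feats = torch.randn(1, 256, 8, 16, 16)
prompts, coarse = AutoPromptGenerator()(feats)
print(prompts.shape, coarse.shape)  # torch.Size([1, 13, 256]) torch.Size([1, 13, 8, 16, 16])
```

The prompt tokens would then replace the manual point/box embeddings normally fed to SAM's mask decoder, which is what removes the clinician from the prompting loop.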
Related papers
- Self-Prompt SAM: Medical Image Segmentation via Automatic Prompt SAM Adaptation [14.821036063099458]
Segment Anything Model (SAM) has demonstrated impressive zero-shot performance.
We propose a novel self-prompt SAM adaptation framework for medical image segmentation, named Self-Prompt-SAM.
Our method achieves state-of-the-art performance and outperforms nnUNet by 2.3% on AMOS2022, 1.6% on ACDC, and 0.5% on Synapse.
arXiv Detail & Related papers (2025-02-02T02:42:24Z)
- Few-Shot Adaptation of Training-Free Foundation Model for 3D Medical Image Segmentation [8.78725593323412]
Few-shot Adaptation of Training-frEe SAM (FATE-SAM) is a novel method designed to adapt the advanced Segment Anything Model 2 (SAM2) for 3D medical image segmentation.
FATE-SAM reassembles pre-trained modules of SAM2 to enable few-shot adaptation, leveraging a small number of support examples.
We evaluate FATE-SAM on multiple medical imaging datasets and compare it with supervised learning methods, zero-shot SAM approaches, and fine-tuned medical SAM methods.
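As a hedged illustration of the training-free, support-example idea (not SAM2's actual modules or API), a prototype-matching stand-in might look like this:

```python
import torch
import torch.nn.functional as F

def fewshot_prototype_mask(support_feats, support_mask, query_feats):
    """Sketch of the few-shot idea: build a class prototype from labelled
    support features and match it against query features. FATE-SAM does this
    with reassembled SAM2 modules; plain cosine similarity here is a
    stand-in assumption, not SAM2's API.

    support_feats, query_feats: (C, D, H, W) encoder features
    support_mask: (D, H, W) binary label on the support volume
    """
    C = support_feats.shape[0]
    flat = support_feats.reshape(C, -1)            # (C, N)
    weights = support_mask.reshape(-1).float()     # (N,)
    prototype = (flat * weights).sum(dim=1) / weights.sum().clamp(min=1.0)
    sim = F.cosine_similarity(query_feats.reshape(C, -1),
                              prototype[:, None], dim=0)  # (N,)
    return sim.reshape(query_feats.shape[1:])      # similarity map as a pseudo-prompt


sup = torch.randn(64, 4, 8, 8)
mask = torch.rand(4, 8, 8) > 0.7
qry = torch.randn(64, 4, 8, 8)
print(fewshot_prototype_mask(sup, mask, qry).shape)  # torch.Size([4, 8, 8])
```

No weights are updated here, which is what "training-free" means: the support examples steer inference rather than fine-tuning.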
arXiv Detail & Related papers (2025-01-15T20:44:21Z)
- RefSAM3D: Adapting SAM with Cross-modal Reference for 3D Medical Image Segmentation [17.69664156349825]
The Segment Anything Model (SAM) excels at capturing global patterns in 2D natural images but struggles with 3D medical imaging modalities like CT and MRI.
We introduce RefSAM3D, which adapts SAM for 3D medical imaging by incorporating a 3D image adapter and cross-modal reference prompt generation.
Our contributions advance the application of SAM in accurately segmenting complex anatomical structures in medical imaging.
arXiv Detail & Related papers (2024-12-07T10:22:46Z)
- DB-SAM: Delving into High Quality Universal Medical Image Segmentation [100.63434169944853]
We propose a dual-branch adapted SAM framework, named DB-SAM, to bridge the gap between natural and 2D/3D medical data.
Our proposed DB-SAM achieves an absolute gain of 8.8%, compared to a recent medical SAM adapter in the literature.
arXiv Detail & Related papers (2024-10-05T14:36:43Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
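The abstract describes a simple loop: predict, collect the expert's rectified mask, and take an online gradient step so later test samples benefit. A minimal sketch of that loop, with a hypothetical `get_expert_fix` callback standing in for the human-in-the-loop correction (the real system's interfaces differ):

```python
import torch
import torch.nn as nn

def online_refine(model: nn.Module, volume_stream, get_expert_fix, lr: float = 1e-4):
    """Sketch of test-time online learning: predict, let an expert rectify,
    then update on the rectified mask so subsequent samples improve."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for image in volume_stream:                        # test samples arrive one by one
        logits = model(image)
        rectified = get_expert_fix(logits.sigmoid())   # expert-corrected mask, or None
        if rectified is None:
            continue                                   # no correction: nothing to learn from
        opt.zero_grad()
        loss_fn(model(image), rectified).backward()    # one online update step
        opt.step()
```

The paper additionally describes adaptive fusion of the original and online-updated predictions, which this sketch omits.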
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- Multi-Prompt Fine-Tuning of Foundation Models for Enhanced Medical Image Segmentation [10.946806607643689]
The Segment Anything Model (SAM) is a powerful foundation model that introduced revolutionary advancements in natural image segmentation.
In this study, we introduce a novel fine-tuning framework that leverages SAM's ability to bundle and process multiple prompts per image.
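The abstract only says that multiple prompts per image are bundled and processed together; how they are fused is not specified. A minimal stand-in that decodes each prompt and averages the resulting logits (an assumption, not the paper's method):

```python
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Hypothetical stand-in for a mask decoder: (image feats, one prompt) -> logits."""
    def __init__(self, c: int = 32, e: int = 16):
        super().__init__()
        self.mix = nn.Linear(c + e, 1)

    def forward(self, feats, prompt):
        # feats: (B, C, H, W); prompt: (B, E)
        B, C, H, W = feats.shape
        p = prompt[:, :, None, None].expand(-1, -1, H, W)
        x = torch.cat([feats, p], dim=1).permute(0, 2, 3, 1)  # (B, H, W, C+E)
        return self.mix(x).squeeze(-1)                        # (B, H, W) logits

def decode_bundled(decoder, feats, prompts):
    # prompts: (B, P, E) -- P prompts bundled per image; fuse by averaging logits.
    masks = torch.stack([decoder(feats, prompts[:, p])
                         for p in range(prompts.shape[1])], dim=1)
    return masks.mean(dim=1)

feats, prompts = torch.randn(2, 32, 8, 8), torch.randn(2, 3, 16)
print(decode_bundled(ToyDecoder(), feats, prompts).shape)  # torch.Size([2, 8, 8])
```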
arXiv Detail & Related papers (2023-10-03T19:05:00Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method roots in the parameter-efficient fine-tuning strategy to update only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
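A hedged sketch of this adapter idea, assuming a bottleneck design in which a small 3D convolution mixes slice tokens along the depth axis inside each frozen 2D block (dimensions and placement are illustrative, and MA-SAM's exact design may differ):

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    """Illustrative 3D adapter: a bottleneck with a 3D convolution so 2D slice
    tokens exchange information along depth. Only these few parameters would
    be trained; the surrounding transformer block stays frozen."""

    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, kernel_size=3, padding=1)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        # x: (B*depth, H*W, dim) -- 2D slice tokens; assumes a square token grid.
        bd, hw, _ = x.shape
        h = w = int(hw ** 0.5)
        y = self.act(self.down(x))                                # (B*D, HW, b)
        y = y.view(-1, depth, h, w, y.shape[-1]).permute(0, 4, 1, 2, 3)
        y = self.act(self.conv3d(y))                              # mix along depth
        y = y.permute(0, 2, 3, 4, 1).reshape(bd, hw, -1)
        return x + self.up(y)                                     # residual update


tokens = torch.randn(2 * 8, 16 * 16, 768)   # 2 volumes of 8 slices each
print(Adapter3D()(tokens, depth=8).shape)   # torch.Size([16, 256, 768])
```

The residual form means the adapter can start near identity, which is the usual reason parameter-efficient fine-tuning of this kind trains stably on small medical datasets.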
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
- SAM3D: Segment Anything Model in Volumetric Medical Images [11.764867415789901]
We introduce SAM3D, an innovative adaptation tailored for 3D volumetric medical image analysis.
Unlike current SAM-based methods that segment volumetric data by converting the volume into separate 2D slices for individual analysis, our SAM3D model processes the entire 3D volume in a unified manner.
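The slice-wise vs. volumetric contrast can be made concrete with stand-in encoders (illustrative `Conv2d`/`Conv3d` modules, not SAM3D's real architecture):

```python
import torch
import torch.nn as nn

# Illustrative contrast only: slice-wise 2D processing vs. one unified 3D pass.
enc2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # stand-in for a 2D SAM encoder
enc3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)   # stand-in for a volumetric encoder

volume = torch.randn(1, 1, 16, 64, 64)              # (B, C, D, H, W) CT volume

# Slice-by-slice: each 2D slice is encoded independently, so features carry
# no inter-slice context and must be re-stacked afterwards.
slicewise = torch.stack([enc2d(volume[:, :, d]) for d in range(volume.shape[2])], dim=2)

# Unified 3D: one pass over the whole volume; the 3D kernel sees neighbouring
# slices, preserving the spatial arrangement that 2D methods lose.
volumetric = enc3d(volume)

print(slicewise.shape, volumetric.shape)  # both (1, 8, 16, 64, 64)
```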
arXiv Detail & Related papers (2023-09-07T06:05:28Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25% for kidney tumor, 29.87% for pancreas tumor, and 10.11% for colon cancer segmentation, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- TomoSAM: a 3D Slicer extension using SAM for tomography segmentation [62.997667081978825]
TomoSAM has been developed to integrate the cutting-edge Segment Anything Model (SAM) into 3D Slicer.
SAM is a promptable deep learning model that is able to identify objects and create image masks in a zero-shot manner.
The synergy between these tools aids in the segmentation of complex 3D datasets from tomography or other imaging techniques.
arXiv Detail & Related papers (2023-06-14T16:13:27Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)