SAM-DA: Decoder Adapter for Efficient Medical Domain Adaptation
- URL: http://arxiv.org/abs/2501.06836v1
- Date: Sun, 12 Jan 2025 15:08:29 GMT
- Title: SAM-DA: Decoder Adapter for Efficient Medical Domain Adaptation
- Authors: Javier Gamazo Tejero, Moritz Schmid, Pablo Márquez Neila, Martin S. Zinkernagel, Sebastian Wolf, Raphael Sznitman
- Abstract summary: This paper addresses the domain adaptation challenge for semantic segmentation in medical imaging.
Recent approaches that perform end-to-end fine-tuning of models are simply not computationally tractable.
We propose a novel SAM adapter approach that minimizes the number of trainable parameters while achieving comparable performances to full fine-tuning.
- Abstract: This paper addresses the domain adaptation challenge for semantic segmentation in medical imaging. Despite the impressive performance of recent foundational segmentation models like SAM on natural images, they struggle with medical domain images. Beyond this, recent approaches that perform end-to-end fine-tuning of models are simply not computationally tractable. To address this, we propose a novel SAM adapter approach that minimizes the number of trainable parameters while achieving comparable performances to full fine-tuning. The proposed SAM adapter is strategically placed in the mask decoder, offering excellent and broad generalization capabilities and improved segmentation across both fully supervised and test-time domain adaptation tasks. Extensive validation on four datasets showcases the adapter's efficacy, outperforming existing methods while training less than 1% of SAM's total parameters.
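The abstract describes an adapter placed in SAM's mask decoder that trains under 1% of the model's parameters. A minimal numpy sketch of a residual bottleneck adapter of this kind is shown below; the bottleneck size, the decoder token dimension of 256, and the ~636M total parameter count for SAM ViT-H are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

class BottleneckAdapter:
    """Residual adapter: down-project, nonlinearity, up-project, add back."""
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.W_up = np.zeros((bottleneck, dim))  # zero-init: starts as identity
        self.b_down = np.zeros(bottleneck)
        self.b_up = np.zeros(dim)

    def __call__(self, x):
        h = np.maximum(0.0, x @ self.W_down + self.b_down)  # ReLU stand-in for GELU
        return x + h @ self.W_up + self.b_up                # residual connection

    def num_params(self):
        return sum(p.size for p in (self.W_down, self.W_up, self.b_down, self.b_up))

# SAM's mask-decoder tokens have dimension 256; SAM ViT-H has roughly 636M parameters.
adapter = BottleneckAdapter(dim=256, bottleneck=32)
x = np.ones((4, 256))                # a batch of 4 decoder tokens
y = adapter(x)
assert y.shape == x.shape            # the adapter preserves token shape

sam_total_params = 636_000_000       # approximate SAM ViT-H parameter count
frac = adapter.num_params() / sam_total_params
print(f"trainable fraction: {frac:.2e}")
```

Because the up-projection is zero-initialized, the adapter starts as an identity function, so inserting it does not perturb the pretrained decoder before training; only the adapter's ~16K parameters (a tiny fraction of SAM's total) would be updated.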
Related papers
- Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning [63.55145330447408]
We propose a novel Self-Perception Tuning (SPT) method for anomaly segmentation.
The SPT method incorporates a self-drafting tuning strategy, which generates an initial coarse draft of the anomaly mask, followed by a refinement process.
arXiv Detail & Related papers (2024-11-26T08:33:25Z) - S-SAM: SVD-based Fine-Tuning of Segment Anything Model for Medical Image Segmentation [25.12190845061075]
We propose an adaptation technique, called S-SAM, that trains only 0.4% of SAM's parameters while using just the label names as prompts to produce precise masks.
We call this modified version S-SAM and evaluate it on five different modalities including endoscopic images, x-ray, ultrasound, CT, and histology images.
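In the spirit of S-SAM's SVD-based fine-tuning, one can freeze the singular vectors of a pretrained weight matrix and train only a small rescaling of its singular values. The sketch below illustrates the idea with numpy; the 256-dimensional weight and the `scale` vector are illustrative assumptions, not S-SAM's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))          # a frozen pretrained weight matrix
U, S, Vt = np.linalg.svd(W, full_matrices=False)

scale = np.ones_like(S)                  # the only trainable vector

def adapted_weight():
    """Recompose the weight with (frozen U, Vt) and rescaled singular values."""
    return U @ np.diag(S * scale) @ Vt

# With scale == 1 the adapted weight reproduces the original exactly,
# so training can start from the pretrained behavior.
assert np.allclose(adapted_weight(), W)

print(scale.size, "trainable vs", W.size, "frozen parameters")
```

Training one scalar per singular value gives 256 trainable parameters against 65,536 frozen ones here, which is how this family of methods reaches sub-1% trainable-parameter budgets.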
arXiv Detail & Related papers (2024-08-12T18:53:03Z) - Multi-scale Contrastive Adaptor Learning for Segmenting Anything in Underperformed Scenes [12.36950265154199]
We introduce a novel Multi-scale Contrastive Adaptor learning method named MCA-SAM.
MCA-SAM enhances adaptor performance through a meticulously designed contrastive learning framework at both token and sample levels.
Empirical results demonstrate that MCA-SAM sets new benchmarks, outperforming existing methods in three challenging domains.
arXiv Detail & Related papers (2024-08-12T06:23:10Z) - Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z) - Unleashing the Potential of SAM for Medical Adaptation via Hierarchical Decoding [15.401507589312702]
This paper introduces H-SAM, a prompt-free adaptation of the Segment Anything Model (SAM) for efficient fine-tuning of medical images.
In the initial stage, H-SAM employs SAM's original decoder to generate a prior probabilistic mask, guiding a more intricate decoding process.
Our H-SAM demonstrates a 4.78% improvement in average Dice compared to existing prompt-free SAM variants.
arXiv Detail & Related papers (2024-03-27T05:55:16Z) - SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation [18.867207134086193]
Intracranial hemorrhage segmentation is a crucial and challenging step in stroke diagnosis and surgical planning.
We propose a SAM-based parameter-efficient fine-tuning method, called SAMIHS, for intracranial hemorrhage segmentation.
Our experimental results on two public datasets demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-11-14T14:23:09Z) - MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z) - AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation [49.59991322513561]
We propose an adaptive modification of Segment-Anything (SAM) that can adjust to new datasets quickly and efficiently.
AdaptiveSAM uses free-form text as its prompt and can segment the object of interest given just the label name.
Our experiments show that AdaptiveSAM outperforms current state-of-the-art methods on various medical imaging datasets.
arXiv Detail & Related papers (2023-08-07T17:12:54Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.