SAMDA: Leveraging SAM on Few-Shot Domain Adaptation for Electronic
Microscopy Segmentation
- URL: http://arxiv.org/abs/2403.07951v1
- Date: Tue, 12 Mar 2024 02:28:29 GMT
- Authors: Yiran Wang, Li Xiao
- Abstract summary: We present a new few-shot domain adaptation framework SAMDA.
It combines the Segment Anything Model (SAM) with nnUNet in the embedding space to achieve high transferability and accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It has been shown that traditional deep learning methods for electronic
microscopy segmentation usually suffer from low transferability when samples
and annotations are limited, while large-scale vision foundation models are
more robust when transferring between different domains but facing sub-optimal
improvement under fine-tuning. In this work, we present SAMDA, a new few-shot
domain adaptation framework that combines the Segment Anything Model (SAM) with
nnUNet in the embedding space to achieve high transferability and accuracy.
Specifically, we choose a UNet-based network as the "expert" component to
learn segmentation features efficiently and design a SAM-based adaptation
module as the "generic" component for domain transfer. By amalgamating the
"generic" and "expert" components, we mitigate the modality imbalance in the
complex pre-training knowledge inherent to large-scale Vision Foundation models
and the challenge of transferability inherent to traditional neural networks.
The effectiveness of our model is evaluated on two electron microscopy image
datasets with different modalities for mitochondria segmentation, where it
improves the Dice coefficient on the target domain by 6.7%. Moreover, the
SAM-based adaptor with only a single annotated image performs significantly
better than 10-shot domain adaptation on nnUNet. We further verify our model on four
MRI datasets from different sources to prove its generalization ability.
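The abstract's "generic"/"expert" combination in the embedding space can be sketched minimally. The snippet below is a hypothetical NumPy illustration, not the authors' implementation: `fuse_embeddings` stands in for the SAM-based adaptation module being merged with the UNet-based expert features (the real framework learns this combination; the fixed weight is a simplification), while `dice_coefficient` implements the evaluation metric quoted in the abstract.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|P & T| / (|P| + |T|), the metric reported in the abstract."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fuse_embeddings(generic_emb, expert_emb, w_generic=0.5):
    """Toy embedding-space fusion: a weighted average of a 'generic'
    (SAM-like) and an 'expert' (UNet-like) per-pixel embedding.
    Both inputs are assumed to share one (channels, H, W) space."""
    assert generic_emb.shape == expert_emb.shape
    return w_generic * generic_emb + (1.0 - w_generic) * expert_emb

# Hypothetical usage: fuse per-pixel embeddings from the two branches,
# threshold a projection of the fused features to get a binary mask,
# then score it with the Dice coefficient.
rng = np.random.default_rng(0)
generic = rng.normal(size=(8, 64, 64))  # (channels, H, W) from the generic branch
expert = rng.normal(size=(8, 64, 64))   # same shape from the expert branch
fused = fuse_embeddings(generic, expert, w_generic=0.3)
mask = fused.mean(axis=0) > 0           # stand-in for a learned decoder head
score = dice_coefficient(mask, mask)    # perfect overlap with itself -> 1.0
```

In the actual framework the fusion and decoding are learned end to end, and few-shot adaptation updates the SAM-based adaptor; this sketch only fixes the shapes and the metric involved.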
Related papers
- Unleashing the Power of Generic Segmentation Models: A Simple Baseline for Infrared Small Target Detection
We investigate the adaptation of generic segmentation models, such as the Segment Anything Model (SAM), to infrared small object detection tasks.
Our model demonstrates significantly improved performance in both accuracy and throughput compared to existing approaches.
arXiv Detail & Related papers (2024-09-07T05:31:24Z)
- Multi-scale Contrastive Adaptor Learning for Segmenting Anything in Underperformed Scenes
We introduce a novel Multi-scale Contrastive Adaptor learning method named MCA-SAM.
MCA-SAM enhances adaptor performance through a meticulously designed contrastive learning framework at both token and sample levels.
Empirical results demonstrate that MCA-SAM sets new benchmarks, outperforming existing methods in three challenging domains.
arXiv Detail & Related papers (2024-08-12T06:23:10Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance
The Segment Anything Model (SAM) has emerged as a versatile tool for image segmentation without specific domain training.
Traditional models like nnUNet perform automatic segmentation during inference but need extensive domain-specific training.
We propose nnSAM, integrating SAM's robust feature extraction with nnUNet's automatic configuration to enhance segmentation accuracy on small datasets.
arXiv Detail & Related papers (2023-09-29T04:26:25Z)
- Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars
The Segment Anything Model (SAM) has demonstrated remarkable capabilities of scaled-up segmentation models.
However, the adoption of foundational models in the medical domain presents a challenge due to the difficulty and expense of labeling sufficient data.
This paper introduces an efficient and practical approach for fine-tuning SAM using a limited number of exemplars.
arXiv Detail & Related papers (2023-08-27T15:21:25Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model outperforms domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, improving by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves comparable performance on liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Ladder Fine-tuning approach for SAM integrating complementary network
In medical imaging, the lack of training samples due to privacy concerns and other factors presents a major challenge for applying generalized models such as SAM to medical image segmentation tasks.
In this study, we propose combining a complementary Convolutional Neural Network (CNN) with the standard SAM network for medical image segmentation.
This strategy significantly reduces training time and achieves competitive results on publicly available datasets.
arXiv Detail & Related papers (2023-06-22T08:36:17Z)
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
- Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to Unseen Domains
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.