UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images
- URL: http://arxiv.org/abs/2402.16663v1
- Date: Mon, 26 Feb 2024 15:35:18 GMT
- Title: UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images
- Authors: Zhen Chen, Qing Xu, Xinyu Liu, Yixuan Yuan
- Abstract summary: In digital pathology, precise nuclei segmentation is pivotal yet challenged by the diversity of tissue types, staining protocols, and imaging conditions.
We propose the Universal prompt-free SAM framework for Nuclei segmentation (UN-SAM).
UN-SAM surpasses state-of-the-art methods in nuclei instance and semantic segmentation, with particularly strong generalization in zero-shot scenarios.
- Score: 47.59627416801523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In digital pathology, precise nuclei segmentation is pivotal yet challenged
by the diversity of tissue types, staining protocols, and imaging conditions.
Recently, the Segment Anything Model (SAM) has shown remarkable performance in
natural scenes and impressive adaptability to medical imaging. Despite these
advantages, the reliance on labor-intensive manual annotations as segmentation
prompts severely hinders clinical applicability, especially for nuclei
image analysis containing massive cells where dense manual prompts are
impractical. To overcome the limitations of current SAM methods while retaining
the advantages, we propose the Universal prompt-free SAM framework for Nuclei
segmentation (UN-SAM), by providing a fully automated solution with remarkable
generalization capabilities. Specifically, to eliminate the labor-intensive
requirement of per-nucleus annotations as prompts, we devise a multi-scale
Self-Prompt Generation (SPGen) module to streamline the clinical workflow by
automatically generating high-quality mask hints to guide the segmentation
tasks. Moreover, to unleash the generalization capability of SAM across a
variety of nuclei images, we devise a Domain-adaptive Tuning Encoder
(DT-Encoder) to seamlessly harmonize visual features with domain-common and
domain-specific knowledge, and further devise a Domain Query-enhanced Decoder
(DQ-Decoder) by leveraging learnable domain queries for segmentation decoding
in different nuclei domains. Extensive experiments show that UN-SAM surpasses
state-of-the-art methods in nuclei instance and semantic segmentation, with
particularly strong generalization in zero-shot scenarios. The source code is
available at
https://github.com/CUHK-AIM-Group/UN-SAM.
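The abstract describes SPGen as a module that replaces manual prompts with automatically generated mask hints. The sketch below is a minimal NumPy illustration of that self-prompting idea under assumed shapes and names (the class `SelfPromptGen`, the per-scale linear heads, and the 0.5 threshold are all assumptions for illustration); it is not the authors' implementation, which is available at the repository above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SelfPromptGen:
    """Hypothetical sketch of a multi-scale self-prompt generator:
    a per-pixel linear head per feature scale predicts coarse
    foreground logits; the scales are averaged and thresholded into
    a binary mask hint that stands in for manual point/box prompts."""

    def __init__(self, channels, num_scales=2, seed=0):
        rng = np.random.default_rng(seed)
        # one linear head (C -> 1) per feature scale
        self.heads = [rng.normal(0, 0.1, size=(channels,))
                      for _ in range(num_scales)]

    def __call__(self, feats):
        # feats: list of (H, W, C) feature maps, one per scale
        logits = [f @ w for f, w in zip(feats, self.heads)]  # each (H, W)
        fused = np.mean(logits, axis=0)       # merge scales
        prob = sigmoid(fused)                 # coarse foreground probability
        mask_hint = (prob > 0.5).astype(np.uint8)  # dense prompt for decoding
        return prob, mask_hint

# usage: two scales of 8x8 features with 16 channels
rng = np.random.default_rng(1)
feats = [rng.normal(size=(8, 8, 16)) for _ in range(2)]
prob, hint = SelfPromptGen(channels=16)(feats)
print(prob.shape, hint.shape)  # (8, 8) (8, 8)
```

In the full framework, such a hint would be fed to the mask decoder in place of user-supplied prompts, which is what makes the pipeline fully automatic.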
Related papers
- Generalizing Segmentation Foundation Model Under Sim-to-real Domain-shift for Guidewire Segmentation in X-ray Fluoroscopy [1.4353812560047192]
Sim-to-real domain adaptation approaches utilize synthetic data from simulations, offering a cost-effective solution.
We propose a strategy to adapt SAM to X-ray fluoroscopy guidewire segmentation without any annotation on the target domain.
Our method surpasses both pre-trained SAM and many state-of-the-art domain adaptation techniques by a large margin.
arXiv Detail & Related papers (2024-10-09T21:59:48Z)
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed as DAPSAM) in segmenting medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z)
- NuSegDG: Integration of Heterogeneous Space and Gaussian Kernel for Domain-Generalized Nuclei Segmentation [9.332333405703732]
We propose a domain-generalizable framework for nuclei image segmentation, abbreviated to NuSegDG.
HS-Adapter learns multi-dimensional feature representations of different nuclei domains by injecting a small number of trainable parameters into the image encoder of SAM.
GKP-Encoder generates density maps driven by a single point, which guides segmentation predictions by mixing position prompts and semantic prompts.
arXiv Detail & Related papers (2024-08-21T17:19:23Z)
- ESP-MedSAM: Efficient Self-Prompting SAM for Universal Domain-Generalized Medical Image Segmentation [18.388979166848962]
The Segment Anything Model (SAM) has demonstrated its potential in both natural and medical imaging settings.
We propose an efficient self-prompting SAM for universal domain-generalized medical image segmentation, named ESP-MedSAM.
ESP-MedSAM outperforms state-of-the-art methods on diverse medical image segmentation tasks.
arXiv Detail & Related papers (2024-07-19T09:32:30Z)
- ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z)
- Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for Nuclei Segmentation [39.81051783009144]
Segment Any Cell (SAC) is an innovative framework that enhances SAM for nuclei segmentation.
SAC integrates a Low-Rank Adaptation (LoRA) within the attention layer of the Transformer to improve the fine-tuning process.
Our contributions include a novel prompt generation strategy, automated adaptability for diverse segmentation tasks, and a versatile framework for semantic segmentation challenges.
arXiv Detail & Related papers (2024-01-24T04:23:17Z)
- Unleashing the Power of Prompt-driven Nucleus Instance Segmentation [12.827503504028629]
The Segment Anything Model (SAM) has attracted significant attention in medical image segmentation.
We present a novel prompt-driven framework that consists of a nucleus prompter and SAM for automatic nucleus instance segmentation.
Our proposed method sets a new state-of-the-art performance on three challenging benchmarks.
arXiv Detail & Related papers (2023-11-27T15:46:47Z)
- SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation [65.52097667738884]
We introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation.
Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes.
In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning.
arXiv Detail & Related papers (2023-08-17T02:51:01Z)
- TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the U-shaped architecture, also known as U-Net, has become the de facto standard.
We propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
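The Segment Any Cell entry above fine-tunes SAM by injecting Low-Rank Adaptation (LoRA) into attention layers. The generic LoRA update, y = xWᵀ + (α/r)·xAᵀBᵀ with only the low-rank factors A and B trainable, can be sketched as follows; the shapes and scaling follow the standard LoRA formulation, while the wiring into SAM's attention projections is left out and the class name is an assumption.

```python
import numpy as np

class LoRALinear:
    """Generic low-rank adaptation of a frozen linear layer:
    y = x W^T + (alpha / r) * (x A^T) B^T, with only A and B trainable.
    B starts at zero so the adapted layer initially matches the frozen one."""

    def __init__(self, weight, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = weight                                  # frozen, shape (out, in)
        out_dim, in_dim = weight.shape
        self.A = rng.normal(0, 0.01, size=(r, in_dim))   # trainable down-projection
        self.B = np.zeros((out_dim, r))                  # trainable up-projection
        self.scale = alpha / r

    def __call__(self, x):
        # x: (..., in_dim) -> (..., out_dim)
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

W = np.random.default_rng(2).normal(size=(6, 4))
layer = LoRALinear(W)
x = np.ones((3, 4))
# with B initialized to zero, the adapted layer matches the frozen one
print(np.allclose(layer(x), x @ W.T))  # True
```

Because only A and B (a few thousand parameters per layer) receive gradients, this style of tuning keeps SAM's pre-trained weights intact while adapting it to nuclei images, which is the efficiency argument made by several of the entries above.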
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.