UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images
- URL: http://arxiv.org/abs/2402.16663v1
- Date: Mon, 26 Feb 2024 15:35:18 GMT
- Title: UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images
- Authors: Zhen Chen, Qing Xu, Xinyu Liu, Yixuan Yuan
- Abstract summary: In digital pathology, precise nuclei segmentation is pivotal yet challenged by the diversity of tissue types, staining protocols, and imaging conditions.
We propose the Universal prompt-free SAM framework for Nuclei segmentation (UN-SAM)
UN-SAM surpasses state-of-the-art methods in nuclei instance and semantic segmentation, and shows especially strong generalization in zero-shot scenarios.
- Score: 47.59627416801523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In digital pathology, precise nuclei segmentation is pivotal yet challenged
by the diversity of tissue types, staining protocols, and imaging conditions.
Recently, the Segment Anything Model (SAM) has demonstrated remarkable performance on
natural scenes and impressive adaptability to medical imaging. Despite these
advantages, the reliance on labor-intensive manual annotations as segmentation
prompts severely hinders its clinical applicability, especially for nuclei
image analysis, where images contain massive numbers of cells and dense manual
prompting is impractical. To overcome the limitations of current SAM methods while retaining
the advantages, we propose the Universal prompt-free SAM framework for Nuclei
segmentation (UN-SAM), by providing a fully automated solution with remarkable
generalization capabilities. Specifically, to eliminate the labor-intensive
requirement of per-nucleus annotations as prompts, we devise a multi-scale
Self-Prompt Generation (SPGen) module to revolutionize the clinical workflow by
automatically generating high-quality mask hints to guide the segmentation
tasks. Moreover, to unleash the generalization capability of SAM across a
variety of nuclei images, we devise a Domain-adaptive Tuning Encoder
(DT-Encoder) to seamlessly harmonize visual features with domain-common and
domain-specific knowledge, and further devise a Domain Query-enhanced Decoder
(DQ-Decoder) by leveraging learnable domain queries for segmentation decoding
in different nuclei domains. Extensive experiments demonstrate that UN-SAM
surpasses state-of-the-art methods in nuclei instance and semantic segmentation,
with especially strong generalization in zero-shot scenarios. The source code is
available at
https://github.com/CUHK-AIM-Group/UN-SAM.
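The self-prompting idea at the core of the abstract (the SPGen module) can be illustrated with a minimal sketch: a coarse foreground-probability map is thresholded into binary mask hints at several scales, which then stand in for manual point or box prompts. All function names, variable names, and pooling choices below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def self_prompt_hint(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a coarse foreground-probability map into a binary mask hint.

    Illustrative stand-in for a self-prompt generation step: the hint
    replaces manual point/box prompts when guiding a SAM-style decoder.
    """
    return (prob_map >= threshold).astype(np.uint8)

def multiscale_hints(prob_map, scales=(1, 2, 4), threshold=0.5):
    """Produce hints at several resolutions by block-averaging the map."""
    hints = []
    for s in scales:
        h, w = prob_map.shape
        # crop so the map divides evenly, then block-average (downsample)
        cropped = prob_map[: h - h % s, : w - w % s]
        pooled = cropped.reshape(cropped.shape[0] // s, s,
                                 cropped.shape[1] // s, s).mean(axis=(1, 3))
        hints.append(self_prompt_hint(pooled, threshold))
    return hints

# toy 4x4 probability map with a bright 2x2 "nucleus" in the top-left corner
pm = np.zeros((4, 4))
pm[:2, :2] = 0.9
hints = multiscale_hints(pm, scales=(1, 2))
print(hints[0].sum())  # full-resolution hint marks the 4 bright pixels
```

In the actual framework the hints would come from a learned module and condition the mask decoder; the sketch only shows how dense prompts can be derived from the image itself rather than from manual annotation.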
Related papers
- CycleSAM: One-Shot Surgical Scene Segmentation using Cycle-Consistent Feature Matching to Prompt SAM [2.9500242602590565]
CycleSAM is an approach for one-shot surgical scene segmentation that uses a single training image-mask pair at test time.
We employ a ResNet50 encoder pretrained on surgical images in a self-supervised fashion, thereby maintaining high label-efficiency.
arXiv Detail & Related papers (2024-07-09T12:08:07Z) - ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z) - Pathological Primitive Segmentation Based on Visual Foundation Model with Zero-Shot Mask Generation [3.5177988631063486]
We present a novel approach that adapts pre-trained natural image encoders of SAM for detection-based region proposals.
The base SAM framework requires no additional training or fine-tuning, yet can produce end-to-end results for two fundamental segmentation tasks in pathology.
arXiv Detail & Related papers (2024-04-12T16:29:49Z) - Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for Nuclei Segmentation [39.81051783009144]
Segment Any Cell (SAC) is an innovative framework that enhances SAM for nuclei segmentation.
SAC integrates a Low-Rank Adaptation (LoRA) within the attention layer of the Transformer to improve the fine-tuning process.
Our contributions include a novel prompt generation strategy, automated adaptability for diverse segmentation tasks, and a versatile framework for semantic segmentation challenges.
arXiv Detail & Related papers (2024-01-24T04:23:17Z) - Unleashing the Power of Prompt-driven Nucleus Instance Segmentation [12.827503504028629]
The Segment Anything Model (SAM) has attracted considerable attention in medical image segmentation.
We present a novel prompt-driven framework that consists of a nucleus prompter and SAM for automatic nucleus instance segmentation.
Our proposed method sets a new state-of-the-art performance on three challenging benchmarks.
arXiv Detail & Related papers (2023-11-27T15:46:47Z) - Beyond Adapting SAM: Towards End-to-End Ultrasound Image Segmentation via Auto Prompting [10.308637269138146]
We propose SAMUS as a universal model tailored for ultrasound image segmentation.
We further enable it to work in an end-to-end manner denoted as AutoSAMUS.
AutoSAMUS is realized by introducing an auto prompt generator (APG) to replace the manual prompt encoder of SAMUS.
arXiv Detail & Related papers (2023-09-13T09:15:20Z) - SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation [65.52097667738884]
We introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation.
Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes.
In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning.
arXiv Detail & Related papers (2023-08-17T02:51:01Z) - Domain Adaptive Nuclei Instance Segmentation and Classification via Category-aware Feature Alignment and Pseudo-labelling [65.40672505658213]
We propose a novel deep neural network, namely Category-Aware feature alignment and Pseudo-Labelling Network (CAPL-Net) for UDA nuclei instance segmentation and classification.
Our approach outperforms state-of-the-art UDA methods by a remarkable margin.
arXiv Detail & Related papers (2022-07-04T07:05:06Z) - TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [78.01570371790669]
Medical image segmentation is an essential prerequisite for developing healthcare systems.
On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de facto standard.
We propose TransUNet, which combines the merits of Transformers and U-Net, as a strong alternative for medical image segmentation.
arXiv Detail & Related papers (2021-02-08T16:10:50Z) - Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features belonging to the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.