Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for
Nuclei Segmentation
- URL: http://arxiv.org/abs/2401.13220v1
- Date: Wed, 24 Jan 2024 04:23:17 GMT
- Title: Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for
Nuclei Segmentation
- Authors: Saiyang Na, Yuzhi Guo, Feng Jiang, Hehuan Ma and Junzhou Huang
- Abstract summary: Segment Any Cell (SAC) is an innovative framework that enhances SAM for nuclei segmentation.
SAC integrates a Low-Rank Adaptation (LoRA) within the attention layer of the Transformer to improve the fine-tuning process.
Our contributions include a novel prompt generation strategy, automated adaptability for diverse segmentation tasks, and a versatile framework for semantic segmentation challenges.
- Score: 39.81051783009144
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the rapidly evolving field of AI research, foundational models like BERT
and GPT have significantly advanced language and vision tasks. The advent of
pretrain-prompting models such as ChatGPT and the Segment Anything Model (SAM)
has further revolutionized image segmentation. However, their applications in
specialized areas, particularly in nuclei segmentation within medical imaging,
reveal a key challenge: the generation of high-quality, informative prompts is
as crucial as applying state-of-the-art (SOTA) fine-tuning techniques on
foundation models. To address this, we introduce Segment Any Cell (SAC), an
innovative framework that enhances SAM specifically for nuclei segmentation.
SAC integrates a Low-Rank Adaptation (LoRA) within the attention layer of the
Transformer to improve the fine-tuning process, outperforming existing SOTA
methods. It also introduces an innovative auto-prompt generator that produces
effective prompts to guide segmentation, a critical factor in handling the
complexities of nuclei segmentation in biomedical imaging. Our extensive
experiments demonstrate the superiority of SAC in nuclei segmentation tasks,
proving its effectiveness as a tool for pathologists and researchers. Our
contributions include a novel prompt generation strategy, automated
adaptability for diverse segmentation tasks, the innovative application of
Low-Rank Attention Adaptation in SAM, and a versatile framework for semantic
segmentation challenges.
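The abstract describes integrating Low-Rank Adaptation (LoRA) into the attention layers of SAM's Transformer so that only small low-rank matrices are trained during fine-tuning. The sketch below illustrates the general LoRA mechanism on a single projection layer; the class name, rank, and scaling choices are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

class LoRALinear:
    """Frozen dense weight W plus a trainable low-rank update B @ A.

    During fine-tuning only A and B are updated, so the number of
    trainable parameters is rank * (in_dim + out_dim) instead of
    in_dim * out_dim.
    """

    def __init__(self, in_dim, out_dim, rank=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for an attention projection).
        self.W = rng.standard_normal((out_dim, in_dim))
        # LoRA factors: A is small random, B starts at zero so the
        # adapted layer initially matches the pretrained one exactly.
        self.A = rng.standard_normal((rank, in_dim)) * 0.01
        self.B = np.zeros((out_dim, rank))
        self.scale = alpha / rank

    def __call__(self, x):
        # y = x W^T + scale * x A^T B^T  (only A, B receive gradients)
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(in_dim=8, out_dim=8, rank=2)
x = np.ones((1, 8))
y = layer(x)
# Zero-initialized B means the low-rank branch contributes nothing at
# the start of fine-tuning:
assert np.allclose(y, x @ layer.W.T)
```

Because B is zero-initialized, fine-tuning begins from exactly the pretrained behavior, which is the property that makes LoRA a stable way to adapt a frozen foundation model such as SAM.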
Related papers
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans.
Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss.
We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
- Image Segmentation in Foundation Model Era: A Survey [99.19456390358211]
Current research in image segmentation lacks a detailed analysis of distinct characteristics, challenges, and solutions associated with these advancements.
This survey seeks to fill this gap by providing a thorough review of cutting-edge research centered around FM-driven image segmentation.
An exhaustive overview of over 300 segmentation approaches is provided to encapsulate the breadth of current research efforts.
arXiv Detail & Related papers (2024-08-23T10:07:59Z)
- A Survey on Cell Nuclei Instance Segmentation and Classification: Leveraging Context and Attention [2.574831636177296]
We conduct a survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging.
We extend both a general instance segmentation and classification method (Mask-RCNN) and a tailored cell nuclei instance segmentation and classification model (HoVer-Net) with context- and attention-based mechanisms.
Our findings suggest that translating domain knowledge into algorithm design is no trivial task, and that fully exploiting these mechanisms remains an open problem.
arXiv Detail & Related papers (2024-07-26T11:30:22Z)
- ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z)
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
By extensively testing three diverse segmentation tasks and medical image modalities, our proposed framework has demonstrated excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images [47.59627416801523]
In digital pathology, precise nuclei segmentation is pivotal yet challenged by the diversity of tissue types, staining protocols, and imaging conditions.
We propose the Universal prompt-free SAM framework for Nuclei segmentation (UN-SAM).
UN-SAM with exceptional performance surpasses state-of-the-arts in nuclei instance and semantic segmentation, especially the generalization capability in zero-shot scenarios.
arXiv Detail & Related papers (2024-02-26T15:35:18Z)
- Unleashing the Power of Prompt-driven Nucleus Instance Segmentation [12.827503504028629]
The Segment Anything Model (SAM) has attracted considerable attention in medical image segmentation.
We present a novel prompt-driven framework that consists of a nucleus prompter and SAM for automatic nucleus instance segmentation.
Our proposed method sets a new state-of-the-art performance on three challenging benchmarks.
arXiv Detail & Related papers (2023-11-27T15:46:47Z)
- SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation [65.52097667738884]
We introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation.
Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes.
In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning.
arXiv Detail & Related papers (2023-08-17T02:51:01Z)
- Domain Adaptive Nuclei Instance Segmentation and Classification via Category-aware Feature Alignment and Pseudo-labelling [65.40672505658213]
We propose a novel deep neural network, namely Category-Aware feature alignment and Pseudo-Labelling Network (CAPL-Net) for UDA nuclei instance segmentation and classification.
Our approach outperforms state-of-the-art UDA methods with a remarkable margin.
arXiv Detail & Related papers (2022-07-04T07:05:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.