SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital
Pathology
- URL: http://arxiv.org/abs/2307.09570v1
- Date: Wed, 12 Jul 2023 20:15:25 GMT
- Title: SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital
Pathology
- Authors: Jingwei Zhang, Ke Ma, Saarthak Kapse, Joel Saltz, Maria Vakalopoulou,
Prateek Prasanna, Dimitris Samaras
- Abstract summary: Foundation models, such as the Segment Anything Model (SAM), have recently been proposed for universal use in segmentation tasks.
In this work, we adapt SAM for semantic segmentation by introducing trainable class prompts, followed by further enhancements through the incorporation of a pathology foundation model.
Our framework, SAM-Path, enhances SAM's ability to conduct semantic segmentation in digital pathology without human input prompts.
- Score: 28.62539784951823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation of pathological entities has crucial clinical
value in computational pathology workflows. Foundation models, such as the
Segment Anything Model (SAM), have recently been proposed for universal use in
segmentation tasks. SAM shows remarkable promise in instance segmentation on
natural images. However, the applicability of SAM to computational pathology
tasks is limited due to the following factors: (1) lack of comprehensive
pathology datasets used in SAM training and (2) the design of SAM is not
inherently optimized for semantic segmentation tasks. In this work, we adapt
SAM for semantic segmentation by introducing trainable class prompts, followed
by further enhancements through the incorporation of a pathology encoder,
specifically a pathology foundation model. Our framework, SAM-Path, enhances
SAM's ability to conduct semantic segmentation in digital pathology without
human input prompts. Through experiments on two public pathology datasets, the
BCSS and CRAG datasets, we demonstrate that fine-tuning with trainable class
prompts outperforms vanilla SAM with manual prompts and post-processing by
27.52% in Dice score and 71.63% in IoU. On these two datasets, the proposed
additional pathology foundation model further achieves a relative improvement
of 5.07% to 5.12% in Dice score and 4.50% to 8.48% in IoU.
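To make the adaptation concrete, below is a minimal PyTorch sketch of the trainable-class-prompt idea, assuming random tensors stand in for the SAM and pathology encoders and a simplified dot-product mask head stands in for SAM's actual decoder; all module names and dimensions are illustrative, not SAM-Path's exact implementation.

```python
# Illustrative sketch; dimensions and the dot-product mask head are assumptions.
import torch
import torch.nn as nn

class ClassPromptSegmenter(nn.Module):
    def __init__(self, num_classes: int, sam_dim: int = 256, path_dim: int = 384):
        super().__init__()
        # One learnable prompt embedding per semantic class, replacing
        # human-provided point/box prompts.
        self.class_prompts = nn.Parameter(torch.randn(num_classes, sam_dim))
        # Project concatenated SAM + pathology-encoder features back to sam_dim.
        self.fuse = nn.Conv2d(sam_dim + path_dim, sam_dim, kernel_size=1)

    def forward(self, sam_feats: torch.Tensor, path_feats: torch.Tensor) -> torch.Tensor:
        # sam_feats:  (B, sam_dim,  H, W) from the frozen SAM image encoder
        # path_feats: (B, path_dim, H, W) from a pathology foundation model
        fused = self.fuse(torch.cat([sam_feats, path_feats], dim=1))
        b, c, h, w = fused.shape
        tokens = fused.flatten(2)  # (B, C, H*W)
        # Dot product between each class prompt and every spatial token gives
        # one mask logit map per class (a stand-in for SAM's mask decoder).
        logits = torch.einsum("kc,bcn->bkn", self.class_prompts, tokens)
        return logits.view(b, -1, h, w)  # (B, num_classes, H, W)

# Smoke test with random features standing in for the two encoders.
model = ClassPromptSegmenter(num_classes=5)
masks = model(torch.randn(2, 256, 64, 64), torch.randn(2, 384, 64, 64))
print(masks.shape)  # torch.Size([2, 5, 64, 64])
```

The key design choice is that the only prompt inputs are the learnable per-class embeddings, so inference requires no human interaction.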
Related papers
- Adapting Segment Anything Model for Unseen Object Instance Segmentation [70.60171342436092]
Unseen Object Instance Segmentation (UOIS) is crucial for autonomous robots operating in unstructured environments.
We propose UOIS-SAM, a data-efficient solution for the UOIS task.
UOIS-SAM integrates two key components: (i) a Heatmap-based Prompt Generator (HPG) to generate class-agnostic point prompts with precise foreground prediction, and (ii) a Hierarchical Discrimination Network (HDNet) that adapts SAM's mask decoder.
arXiv Detail & Related papers (2024-09-23T19:05:50Z) - Path-SAM2: Transfer SAM2 for digital pathology semantic segmentation [6.721564277355789]
- Path-SAM2: Transfer SAM2 for digital pathology semantic segmentation [6.721564277355789]
We propose Path-SAM2, which for the first time adapts the SAM2 model to the task of pathological semantic segmentation.
We integrate the largest pretrained vision encoder for histopathology (UNI) with the original SAM2 encoder, adding more pathology-based prior knowledge.
In three adenoma pathological datasets, Path-SAM2 has achieved state-of-the-art performance.
arXiv Detail & Related papers (2024-08-07T09:30:51Z) - SAM-CP: Marrying SAM with Composable Prompts for Versatile Segmentation [88.80792308991867]
- SAM-CP: Marrying SAM with Composable Prompts for Versatile Segmentation [88.80792308991867]
The Segment Anything Model (SAM) has shown the ability to group image pixels into patches, but applying it to semantic-aware segmentation still faces major challenges.
This paper presents SAM-CP, a simple approach that establishes two types of composable prompts beyond SAM and composes them for versatile segmentation.
Experiments show that SAM-CP achieves semantic, instance, and panoptic segmentation in both open and closed domains.
arXiv Detail & Related papers (2024-07-23T17:47:25Z) - ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information, which can come at the expense of fine-grained local detail.
A Cross-branch Feature Augmentation (CFA) module integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z) - Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
arXiv Detail & Related papers (2024-06-03T03:16:25Z) - Guided Prompting in SAM for Weakly Supervised Cell Segmentation in
- Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images [27.14641973632063]
This paper focuses on using weak supervision -- annotation from related tasks -- to induce a segmenter.
Recent foundation models, such as Segment Anything (SAM), can use prompts to leverage additional supervision during inference.
All SAM-based solutions substantially outperform existing weakly supervised image segmentation models, obtaining gains of 9 to 15 points in Dice score.
arXiv Detail & Related papers (2023-11-29T11:18:48Z) - Evaluation and improvement of Segment Anything Model for interactive
histopathology image segmentation [3.677055050765245]
The Segment Anything Model (SAM) is a foundational model for image segmentation.
We evaluate SAM's performance in zero-shot and fine-tuned scenarios on histopathology data.
We propose a modification of SAM's decoder to make it useful for interactive histology image segmentation.
arXiv Detail & Related papers (2023-10-16T15:17:06Z) - nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance [12.169801149021566]
The Segment Anything Model (SAM) has emerged as a versatile tool for image segmentation without specific domain training.
Traditional models like nnUNet perform automatic segmentation during inference but need extensive domain-specific training.
We propose nnSAM, integrating SAM's robust feature extraction with nnUNet's automatic configuration to enhance segmentation accuracy on small datasets.
arXiv Detail & Related papers (2023-09-29T04:26:25Z) - Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few
Exemplars [19.725817146049707]
The Segment Anything Model (SAM) has demonstrated the remarkable capabilities of scaled-up segmentation models.
However, the adoption of foundational models in the medical domain presents a challenge due to the difficulty and expense of labeling sufficient data.
This paper introduces an efficient and practical approach for fine-tuning SAM using a limited number of exemplars.
arXiv Detail & Related papers (2023-08-27T15:21:25Z) - SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation [65.52097667738884]
- SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation [65.52097667738884]
We introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation.
Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes.
In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning.
arXiv Detail & Related papers (2023-08-17T02:51:01Z) - Medical SAM Adapter: Adapting Segment Anything Model for Medical Image
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.