Segment Anything for Histopathology
- URL: http://arxiv.org/abs/2502.00408v1
- Date: Sat, 01 Feb 2025 11:59:04 GMT
- Title: Segment Anything for Histopathology
- Authors: Titus Griebel, Anwai Archit, Constantin Pape
- Abstract summary: Vision foundation models (VFMs) offer a more robust alternative for automatic and interactive segmentation.
We introduce PathoSAM, a VFM for nucleus segmentation based on training SAM on a diverse dataset.
Our models are open-source and compatible with popular tools for data annotation.
- Score: 2.6579756198224347
- Abstract: Nucleus segmentation is an important analysis task in digital pathology. However, methods for automatic segmentation often struggle with new data from a different distribution, requiring users to manually annotate nuclei and retrain data-specific models. Vision foundation models (VFMs), such as the Segment Anything Model (SAM), offer a more robust alternative for automatic and interactive segmentation. Despite their success in natural images, a foundation model for nucleus segmentation in histopathology is still missing. Initial efforts to adapt SAM have shown some success, but did not yet introduce a comprehensive model for diverse segmentation tasks. To close this gap, we introduce PathoSAM, a VFM for nucleus segmentation, based on training SAM on a diverse dataset. Our extensive experiments show that it is the new state-of-the-art model for automatic and interactive nucleus instance segmentation in histopathology. We also demonstrate how it can be adapted for other segmentation tasks, including semantic nucleus segmentation. For this task, we show that it yields results better than popular methods, while not yet beating the state-of-the-art, CellViT. Our models are open-source and compatible with popular tools for data annotation. We also provide scripts for whole-slide image segmentation. Our code and models are publicly available at https://github.com/computational-cell-analytics/patho-sam.
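The abstract mentions scripts for whole-slide image segmentation. A standard ingredient of such pipelines is tiling the slide into overlapping patches, segmenting each patch, and stitching the predictions back together. The following is a minimal illustrative sketch of that tiling/stitching step (hypothetical helper names, not PathoSAM's actual implementation):

```python
import numpy as np

def tile_image(image, tile=256, overlap=32):
    """Split a 2D image into overlapping tiles; return tiles and their origins."""
    step = tile - overlap
    h, w = image.shape[:2]
    tiles, origins = [], []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            # Clamp so the last tile ends exactly at the image border.
            y0 = max(min(y, h - tile), 0)
            x0 = max(min(x, w - tile), 0)
            tiles.append(image[y0:y0 + tile, x0:x0 + tile])
            origins.append((y0, x0))
    return tiles, origins

def stitch_masks(masks, origins, shape):
    """Recombine per-tile foreground masks by logical OR on the full canvas."""
    canvas = np.zeros(shape, dtype=bool)
    for m, (y0, x0) in zip(masks, origins):
        canvas[y0:y0 + m.shape[0], x0:x0 + m.shape[1]] |= m
    return canvas
```

Real whole-slide tooling additionally handles pyramid levels and resolves objects split across tile borders; this sketch only shows the geometric bookkeeping.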
Related papers
- Parameter Efficient Fine-Tuning of Segment Anything Model [2.6579756198224347]
Vision foundation models, such as Segment Anything Model (SAM), address this issue through broad segmentation capabilities.
We provide an implementation of QLoRA for vision transformers and a new approach for resource-efficient finetuning of SAM.
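The snippet above refers to QLoRA-style parameter-efficient finetuning. The core LoRA idea is to freeze a pretrained weight matrix W and learn only a low-rank update BA, so that far fewer parameters are trained. A hedged numpy sketch of this idea (illustrative only, not the paper's implementation, which works on quantized vision transformers):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-initialized

def lora_forward(x, alpha=8.0):
    """y = W x + (alpha / rank) * B A x; only A and B receive gradients."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
```

Because B starts at zero, the adapted layer initially reproduces the frozen layer exactly, and the trainable parameter count (A and B together) is a small fraction of W's.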
arXiv Detail & Related papers (2025-02-01T12:39:17Z)
- MedicoSAM: Towards foundation models for medical image segmentation [2.6579756198224347]
We show how to improve Segment Anything for medical images by comparing different finetuning strategies on a large and diverse dataset.
We find that the performance can be clearly improved for interactive segmentation.
Our best model, MedicoSAM, is publicly available at https://github.com/computational-cell-analytics/medico-sam.
arXiv Detail & Related papers (2025-01-20T20:40:28Z)
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed as DAPSAM) in segmenting medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z)
- Few-Shot Learning for Annotation-Efficient Nucleus Instance Segmentation [50.407071700154674]
We propose to formulate annotation-efficient nucleus instance segmentation from the perspective of few-shot learning (FSL).
Our work is motivated by the observation that, with the growth of computational pathology, an increasing number of fully annotated datasets have become publicly accessible.
Extensive experiments on several publicly accessible datasets demonstrate that SGFSIS can outperform other annotation-efficient learning baselines.
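Few-shot learning setups like the one described above are typically organized into episodes: each episode samples a small labelled support set and a disjoint query set per class. A minimal sketch of episode construction (hypothetical names, not SGFSIS's actual code):

```python
import random

def sample_episode(samples, labels, n_way, k_shot, q_query, seed=None):
    """Build one N-way K-shot episode: (support, query) lists of (x, y) pairs."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(set(labels)), n_way)
    support, query = [], []
    for c in classes:
        pool = [x for x, y in zip(samples, labels) if y == c]
        picks = rng.sample(pool, k_shot + q_query)
        support += [(x, c) for x in picks[:k_shot]]   # K labelled examples
        query += [(x, c) for x in picks[k_shot:]]     # Q held-out examples
    return support, query
```

The model is then trained to segment the query items given only the support annotations, mimicking the low-annotation regime at test time.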
arXiv Detail & Related papers (2024-02-26T03:49:18Z)
- UniCell: Universal Cell Nucleus Classification via Prompt Learning [76.11864242047074]
We propose a universal cell nucleus classification framework (UniCell).
It employs a novel prompt learning mechanism to uniformly predict the corresponding categories of pathological images from different dataset domains.
In particular, our framework adopts an end-to-end architecture for nuclei detection and classification, and utilizes flexible prediction heads for adapting various datasets.
arXiv Detail & Related papers (2024-02-20T11:50:27Z)
- OMG-Seg: Is One Model Good Enough For All Segmentation? [83.17068644513144]
OMG-Seg is a transformer-based encoder-decoder architecture with task-specific queries and outputs.
We show that OMG-Seg can support over ten distinct segmentation tasks and yet significantly reduce computational and parameter overhead.
arXiv Detail & Related papers (2024-01-18T18:59:34Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
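One simple way to realize the roundish-versus-irregular routing described above is a geometric gate on each object mask, e.g. the ratio of the principal axes of its pixel scatter. This is an illustrative sketch of such a gate (hypothetical, not the paper's actual classifier, which is learned):

```python
import numpy as np

def elongation(mask):
    """Ratio of principal axes of a binary mask's pixel scatter (>= 1)."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([ys, xs]))          # 2x2 covariance of pixel coords
    evals = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
    return float(np.sqrt(evals[1] / max(evals[0], 1e-9)))

def route(mask, threshold=2.0):
    """Send near-round objects to one model, elongated ones to another."""
    return "roundish" if elongation(mask) < threshold else "irregular"
```

A round disk has nearly equal eigenvalues (ratio near 1), while a thin bar has one dominant axis and is routed to the irregular-shape model.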
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that augmenting a 10% subset of the labeled real dataset with synthetic samples achieves segmentation results comparable to those obtained with the fully labeled dataset.
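The 10% setting described above amounts to training on a small real labelled subset plus synthetic image-label pairs. A trivial sketch of assembling such a mixed training set (illustrative only; generating the synthetic pairs with a diffusion model is the paper's actual contribution and is not shown here):

```python
import random

def build_training_set(real_pairs, synthetic_pairs, real_fraction=0.1, seed=0):
    """Keep a fraction of the real labelled pairs and add all synthetic ones."""
    rng = random.Random(seed)
    n_real = max(1, int(len(real_pairs) * real_fraction))
    kept = rng.sample(real_pairs, n_real)   # simulate a low-annotation budget
    mixed = kept + list(synthetic_pairs)
    rng.shuffle(mixed)
    return mixed
```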
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging [12.533476185972527]
The Segment Anything Model (SAM) was released as a foundation model for image segmentation.
We evaluate the zero-shot segmentation performance of the SAM model on representative segmentation tasks on whole-slide imaging (WSI).
The results suggest that the zero-shot SAM model achieves remarkable segmentation performance for large connected objects.
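Zero-shot segmentation quality in studies like the one above is usually reported with overlap metrics such as intersection-over-union (IoU) or the Dice coefficient. A self-contained numpy sketch of both metrics (standard definitions, not the paper's evaluation code):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0
```

Both metrics reward the large connected objects SAM handles well; instance-level evaluation of densely packed nuclei additionally requires matching predicted to ground-truth instances.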
arXiv Detail & Related papers (2023-04-09T04:06:59Z)
- Evolution of Image Segmentation using Deep Convolutional Neural Network: A Survey [0.0]
We survey the evolution of both semantic and instance segmentation based on CNNs.
We also give a glimpse of some state-of-the-art panoptic segmentation models.
arXiv Detail & Related papers (2020-01-13T06:07:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.