Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot
Segmentation on Whole Slide Imaging
- URL: http://arxiv.org/abs/2304.04155v1
- Date: Sun, 9 Apr 2023 04:06:59 GMT
- Title: Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot
Segmentation on Whole Slide Imaging
- Authors: Ruining Deng, Can Cui, Quan Liu, Tianyuan Yao, Lucas W. Remedios,
Shunxing Bao, Bennett A. Landman, Lee E. Wheless, Lori A. Coburn, Keith T.
Wilson, Yaohong Wang, Shilin Zhao, Agnes B. Fogo, Haichun Yang, Yucheng Tang,
Yuankai Huo
- Abstract summary: The segment anything model (SAM) was released as a foundation model for image segmentation.
We evaluate the zero-shot segmentation performance of the SAM model on representative segmentation tasks on whole slide imaging (WSI).
The results suggest that the zero-shot SAM model achieves remarkable segmentation performance for large connected objects.
- Score: 12.533476185972527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The segment anything model (SAM) was released as a foundation model
for image segmentation. The promptable segmentation model was trained with over
1 billion masks on 11M licensed and privacy-respecting images. The model
supports zero-shot image segmentation with various segmentation prompts (e.g.,
points, boxes, masks). This makes SAM attractive for medical image analysis,
especially for digital pathology, where training data are scarce. In this
study, we evaluate the zero-shot segmentation performance of the SAM model on
representative segmentation tasks on whole slide imaging (WSI), including (1)
tumor segmentation, (2) non-tumor tissue segmentation, and (3) cell nuclei
segmentation. The results suggest that the zero-shot SAM model achieves
remarkable segmentation performance for large connected objects. However, it
does not consistently achieve satisfactory performance for dense instance
object segmentation, even with 20 prompts (clicks/boxes) on each image. We also
summarize the limitations identified for digital pathology: (1) image
resolution, (2) multiple scales, (3) prompt selection, and (4) model
fine-tuning. In the future, few-shot fine-tuning with images from downstream
pathological segmentation tasks might help the model achieve better performance
in dense object segmentation.
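The prompting workflow evaluated here can be reproduced with the released `segment_anything` package. Below is a minimal sketch of point-prompted segmentation on a single WSI patch; the checkpoint filename, placeholder patch, and prompt coordinates are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: zero-shot SAM segmentation of a WSI patch with point prompts.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# A WSI patch would normally be read at a chosen magnification (e.g., with
# OpenSlide); SAM expects an HxWx3 uint8 RGB array.
patch = np.zeros((1024, 1024, 3), dtype=np.uint8)  # placeholder patch
predictor.set_image(patch)

# Positive "clicks" on the target tissue (label 1 = foreground, 0 = background).
# The study above used up to 20 such prompts per image.
point_coords = np.array([[512, 512], [600, 540]])
point_labels = np.array([1, 1])

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,  # SAM returns three candidate masks
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate
```

Box prompts follow the same pattern via the `box=np.array([x0, y0, x1, y1])` argument of `predict`.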
Related papers
- UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation [64.01742988773745]
An increasing privacy concern exists regarding training large-scale image segmentation models on unauthorized private data.
We exploit the concept of unlearnable examples to make images unusable for model training by generating and adding unlearnable noise to the original images.
We empirically verify the effectiveness of UnSeg across 6 mainstream image segmentation tasks, 10 widely used datasets, and 7 different network architectures.
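For context, the underlying "unlearnable example" mechanism can be sketched as PGD-style min-min optimization: the perturbation is trained to minimize (not maximize) the segmentation loss, so the image stops providing a useful learning signal. UnSeg itself trains a universal noise generator, which this sketch does not capture; `model`, `image`, and `mask` are hypothetical placeholders.

```python
# Hedged sketch of error-minimizing ("unlearnable") noise for segmentation.
import torch
import torch.nn.functional as F

def unlearnable_noise(model, image, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """Optimize an additive perturbation that MINIMIZES the training loss."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(image + delta)         # (B, C, H, W) class logits
        loss = F.cross_entropy(logits, mask)  # mask: (B, H, W) class indices
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend, not ascend
            delta.clamp_(-eps, eps)             # stay within the eps-ball
        delta.grad.zero_()
        model.zero_grad(set_to_none=True)
    return delta.detach()
```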
arXiv Detail & Related papers (2024-10-13T16:34:46Z)
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z)
- OMG-Seg: Is One Model Good Enough For All Segmentation? [83.17068644513144]
OMG-Seg is a transformer-based encoder-decoder architecture with task-specific queries and outputs.
We show that OMG-Seg can support over ten distinct segmentation tasks and yet significantly reduce computational and parameter overhead.
arXiv Detail & Related papers (2024-01-18T18:59:34Z)
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework, named as MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small fraction of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
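A generic sketch of the adapter idea just described: a small trainable bottleneck module, inserted residually into each frozen transformer block, that mixes information along the slice (depth) axis. This is illustrative only; MA-SAM's actual adapter design and placement differ in detail, and all shapes here are assumptions.

```python
# Hedged sketch of a 3D bottleneck adapter for a frozen 2D transformer block.
import torch.nn as nn

class Adapter3D(nn.Module):
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.depth_mix = nn.Conv3d(bottleneck, bottleneck,
                                   kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)  # identity mapping at initialization
        nn.init.zeros_(self.up.bias)

    def forward(self, x):                # x: (B, D, H, W, C) patch tokens
        h = self.act(self.down(x))
        h = h.permute(0, 4, 1, 2, 3)     # (B, C', D, H, W) for Conv3d
        h = self.act(self.depth_mix(h))  # mixes adjacent slices along D
        h = h.permute(0, 2, 3, 4, 1)     # back to token layout
        return x + self.up(h)            # residual connection
```

During fine-tuning the pre-trained encoder weights stay frozen (e.g., `p.requires_grad_(False)`), so only the adapter parameters receive gradients.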
- SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation [27.044797468878837]
The Segment Anything Model (SAM) exhibits a capability to segment a wide array of objects in natural images.
We propose a novel method that combines the SAM with domain-specific knowledge for reliable utilization of unlabeled images.
Our work initiates a new direction of semi-supervised learning for medical image segmentation.
arXiv Detail & Related papers (2023-08-26T04:46:10Z)
- Semantic-SAM: Segment and Recognize Anything at Any Granularity [83.64686655044765]
We introduce Semantic-SAM, a universal image segmentation model that can segment and recognize anything at any desired granularity.
We consolidate multiple datasets across three granularities and introduce decoupled classification for objects and parts.
For the multi-granularity capability, we propose a multi-choice learning scheme during training, enabling each click to generate masks at multiple levels.
arXiv Detail & Related papers (2023-07-10T17:59:40Z)
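One plausible reading of that multi-choice scheme, sketched under assumptions: each click produces K candidate masks, and each ground-truth granularity level is trained against its best-matching candidate. The paper's actual matching is more elaborate; this only illustrates the min-over-candidates idea.

```python
# Hedged sketch of a multi-choice (min-over-candidates) mask loss.
import torch
import torch.nn.functional as F

def multi_choice_loss(pred_logits, gt_masks):
    """pred_logits: (K, H, W) candidate masks for one click.
    gt_masks: (G, H, W) float binary targets, one per granularity."""
    per_level = []
    for g in range(gt_masks.shape[0]):
        candidate_losses = torch.stack([
            F.binary_cross_entropy_with_logits(pred_logits[k], gt_masks[g])
            for k in range(pred_logits.shape[0])
        ])
        per_level.append(candidate_losses.min())  # best candidate wins
    return torch.stack(per_level).mean()
```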
- Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model [36.015065439244495]
The Segment Anything Model (SAM) is a recently developed large model for general-purpose segmentation for computer vision tasks.
SAM was trained using 11 million images with over 1 billion masks and can produce segmentation results for a wide range of objects in natural scene images.
This paper shows that although SAM does not immediately give high-quality segmentation for medical image data, its generated masks, features, and stability scores are useful for building and training better medical image segmentation models.
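A minimal sketch of that input-augmentation idea, under assumptions: a SAM-derived map is stacked onto the image as an extra channel before the downstream segmentation model is trained on it. The paper combines several SAM outputs; this shows only the basic mechanism.

```python
# Hedged sketch: attach a SAM-derived map as a fourth input channel.
import numpy as np

def augment_input(image, sam_map):
    """image: (H, W, 3) float array; sam_map: (H, W) SAM mask/score map in [0, 1].
    Returns an (H, W, 4) array; the downstream model takes 4-channel input."""
    return np.concatenate([image, sam_map[..., None]], axis=-1)
```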
arXiv Detail & Related papers (2023-04-22T07:11:53Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
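Studies like this one typically score predictions with the Dice coefficient; a minimal reference implementation (datasets and evaluation protocol are not reproduced here):

```python
# Dice coefficient between two binary masks: 2|P ∩ G| / (|P| + |G|).
import numpy as np

def dice(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```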
- SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z)
- Segment Anything [108.16489338211093]
We build the largest segmentation dataset to date, with over 1 billion masks on 11M licensed and privacy-respecting images.
The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks.
We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive.
arXiv Detail & Related papers (2023-04-05T17:59:46Z)
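Besides interactive prompting, the released SAM code also supports fully automatic "segment everything" operation; a minimal usage sketch with a placeholder checkpoint and image:

```python
# Automatic mask generation with the released segment_anything package.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
generator = SamAutomaticMaskGenerator(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder RGB image
masks = generator.generate(image)
# Each entry is a dict with keys such as "segmentation" (bool HxW),
# "area", "bbox", "predicted_iou", and "stability_score".
```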
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.