Evaluation and improvement of Segment Anything Model for interactive
histopathology image segmentation
- URL: http://arxiv.org/abs/2310.10493v1
- Date: Mon, 16 Oct 2023 15:17:06 GMT
- Authors: SeungKyu Kim, Hyun-Jic Oh, Seonghui Min and Won-Ki Jeong
- Abstract summary: The Segment Anything Model (SAM) is a foundational model for image segmentation.
We evaluate SAM's performance in zero-shot and fine-tuned scenarios on histopathology data.
We propose a modification of SAM's decoder to make it useful for interactive histology image segmentation.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the emergence of the Segment Anything Model (SAM) as a foundational
model for image segmentation, its application has been extensively studied
across various domains, including the medical field. However, its potential in
the context of histopathology data, specifically in region segmentation, has
received relatively limited attention. In this paper, we evaluate SAM's
performance in zero-shot and fine-tuned scenarios on histopathology data, with
a focus on interactive segmentation. Additionally, we compare SAM with other
state-of-the-art interactive models to assess its practical potential, and we
evaluate its generalization capability and domain adaptability. In our
experiments, SAM underperforms the other models in segmentation accuracy but
shows relative strengths in inference time and generalization capability. To
improve SAM's limited local
refinement ability and to enhance prompt stability while preserving its core
strengths, we propose a modification of SAM's decoder. The experimental
results suggest that the proposed modification is effective in making SAM
useful for interactive histology image segmentation. The code is available at
https://github.com/hvcl/SAM_Interactive_Histopathology
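The abstract evaluates SAM in an interactive setting, where a user's clicks iteratively refine a prediction. A common way to benchmark such models is to simulate clicks automatically: place the next click deep inside the largest error region, positive on missed foreground and negative on false positives. The sketch below illustrates that generic protocol with plain NumPy; it is an assumption for illustration, not the paper's exact evaluation procedure, and `next_click` is a hypothetical helper name.

```python
import numpy as np

def next_click(pred, gt):
    """Pick the next simulated user click given predicted and ground-truth
    boolean masks. Returns ((y, x), label), where label 1 = positive
    (foreground) click and 0 = negative (background) click, or None if the
    masks already agree."""
    fn = gt & ~pred            # missed foreground -> needs a positive click
    fp = pred & ~gt            # spurious foreground -> needs a negative click
    if not fn.any() and not fp.any():
        return None
    # Click in whichever error type covers more pixels.
    err, label = (fn, 1) if fn.sum() >= fp.sum() else (fp, 0)
    ys, xs = np.nonzero(err)
    ok_ys, ok_xs = np.nonzero(~err)
    if ok_ys.size == 0:        # the whole image is wrong: click anywhere
        return (int(ys[0]), int(xs[0])), label
    # Choose the error pixel farthest from any correct pixel (brute force;
    # real toolkits typically use a distance transform for this step).
    d2 = (ys[:, None] - ok_ys[None, :]) ** 2 + (xs[:, None] - ok_xs[None, :]) ** 2
    i = int(np.argmax(d2.min(axis=1)))
    return (int(ys[i]), int(xs[i])), label
```

In an evaluation loop, the returned point and label would be appended to the prompt set and fed back to the model until the mask converges or a click budget is exhausted.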
Related papers
- ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z)
- Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation [52.172885882728174]
In medical imaging contexts, it is not uncommon for human experts to rectify segmentations of specific test samples after SAM generates its segmentation predictions.
We introduce a novel approach that leverages the advantages of online machine learning to enhance Segment Anything (SA) during test time.
We employ rectified annotations to perform online learning, with the aim of improving the segmentation quality of SA on medical images.
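The summary above describes a test-time loop: the model predicts, an expert rectifies the mask, and the rectified annotation drives an online update. As a minimal, self-contained illustration of that loop (not the paper's actual method, which adapts SAM itself), the sketch below updates a per-pixel logistic classifier with one SGD step per rectified sample; the class and all parameter names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlinePixelClassifier:
    """Stand-in for an adaptable segmenter: a linear per-pixel classifier
    over precomputed feature maps, updated online from rectified masks."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, feats):
        # feats: (H, W, F) feature map -> (H, W) boolean mask
        return sigmoid(feats @ self.w) > 0.5

    def update(self, feats, rectified):
        # One SGD step on binary cross-entropy against the expert-rectified
        # mask (rectified: (H, W) in {0, 1}).
        p = sigmoid(feats @ self.w)
        grad = ((p - rectified)[..., None] * feats).mean(axis=(0, 1))
        self.w -= self.lr * grad
```

At test time the loop would be: `predict`, collect the expert's rectified mask, call `update`, and move to the next sample, so segmentation quality improves as more rectifications arrive.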
arXiv Detail & Related papers (2024-06-03T03:16:25Z)
- Segment Any Medical Model Extended [39.80956010574076]
We introduce SAMM Extended (SAMME), a platform that integrates new SAM variant models, adopts faster communication protocols, accommodates new interactive modes, and allows for fine-tuning of subcomponents of the models.
These features can expand the potential of foundation models like SAM, and the results can be translated to applications such as image-guided therapy, mixed reality interaction, robotic navigation, and data augmentation.
arXiv Detail & Related papers (2024-03-26T21:37:25Z)
- Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars [19.725817146049707]
The Segment Anything Model (SAM) has demonstrated remarkable capabilities of scaled-up segmentation models.
However, the adoption of foundational models in the medical domain presents a challenge due to the difficulty and expense of labeling sufficient data.
This paper introduces an efficient and practical approach for fine-tuning SAM using a limited number of exemplars.
arXiv Detail & Related papers (2023-08-27T15:21:25Z)
- SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT [3.2495192768429924]
The Segment Anything Model (SAM) has gained significant attention in the field of image segmentation.
We conduct a comprehensive evaluation of SAM and its adaptations on a large-scale public dataset of OCTs from RETOUCH challenge.
We show that the adapted SAM is an effective segmentation model for retinal OCT scans, although it still lags behind established methods in some circumstances.
arXiv Detail & Related papers (2023-08-18T06:26:22Z)
- RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation [53.4319652364256]
This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight Cross-Modal MLP.
We employ a parameter-efficient tuning strategy to align and fuse the language and vision features effectively.
arXiv Detail & Related papers (2023-07-03T13:21:58Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Segment Anything Model for Medical Image Analysis: an Experimental Study [19.95972201734614]
Segment Anything Model (SAM) is a foundation model that is intended to segment user-defined objects of interest in an interactive manner.
We evaluate SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies.
arXiv Detail & Related papers (2023-04-20T17:50:18Z)
- SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model [1.1221592576472588]
We evaluate the zero-shot capabilities of the Segment Anything Model for medical image segmentation.
We show that SAM generalizes well to CT data, making it a potential catalyst for the advancement of semi-automatic segmentation tools.
arXiv Detail & Related papers (2023-04-10T18:20:29Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.