Polyp-SAM: Transfer SAM for Polyp Segmentation
- URL: http://arxiv.org/abs/2305.00293v1
- Date: Sat, 29 Apr 2023 16:11:06 GMT
- Title: Polyp-SAM: Transfer SAM for Polyp Segmentation
- Authors: Yuheng Li, Mingzhe Hu, and Xiaofeng Yang
- Abstract summary: Segment Anything Model (SAM) has recently gained much attention in both natural and medical image segmentation.
We propose Polyp-SAM, a finetuned SAM model for polyp segmentation, and compare its performance to several state-of-the-art polyp segmentation models.
Our Polyp-SAM achieves state-of-the-art performance on two datasets and impressive performance on three datasets, with dice scores all above 88%.
- Score: 2.4492242722754107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Colon polyps are considered important precursors for colorectal cancer.
Automatic segmentation of colon polyps can significantly reduce the
misdiagnosis of colon cancer and improve physician annotation efficiency. While
many methods have been proposed for polyp segmentation, training large-scale
segmentation networks with limited colonoscopy data remains a challenge.
Recently, the Segment Anything Model (SAM) has gained much attention
in both natural and medical image segmentation. SAM demonstrates superior
performance in several image benchmarks and therefore shows great potential for
medical image segmentation. In this study, we propose Polyp-SAM, a finetuned SAM
model for polyp segmentation, and compare its performance to several
state-of-the-art polyp segmentation models. We also compare two transfer
learning strategies of SAM with and without finetuning its encoders. Evaluated
on five public datasets, our Polyp-SAM achieves state-of-the-art performance on
two datasets and impressive performance on three datasets, with dice scores all
above 88%. This study demonstrates the great potential of adapting SAM to
medical image segmentation tasks. We plan to release the code and model weights
for this paper at: https://github.com/ricklisz/Polyp-SAM.
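
The two transfer strategies compared in the abstract differ only in whether SAM's image encoder receives gradient updates. Below is a minimal sketch of that setup using Meta's `segment_anything` package, plus the Dice metric the results are reported in; the checkpoint path and the frozen prompt encoder are assumptions for illustration, not details taken from the paper.

```python
import torch
from segment_anything import sam_model_registry


def build_polyp_sam(checkpoint: str, finetune_encoder: bool = False):
    """Load SAM and choose which components receive gradient updates."""
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    # Strategy 1 (finetune_encoder=False): freeze the image encoder and
    # train only the mask decoder. Strategy 2: update the encoder as well.
    for p in sam.image_encoder.parameters():
        p.requires_grad = finetune_encoder
    for p in sam.prompt_encoder.parameters():
        p.requires_grad = False  # assumption: prompt encoder kept frozen
    for p in sam.mask_decoder.parameters():
        p.requires_grad = True
    return sam


def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Dice coefficient on binarized masks, the metric reported above."""
    pred = (pred > 0.5).float()
    target = (target > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```

An optimizer built over `filter(lambda p: p.requires_grad, sam.parameters())` then trains whichever strategy was selected.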
Related papers
- Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model [18.61909523131399]
This paper presents a novel approach to polyp segmentation by integrating the Segment Anything Model 2 (SAM 2) with the YOLOv8 model.
Our method leverages YOLOv8's bounding box predictions to autonomously generate input prompts for SAM 2, thereby reducing the need for manual annotations.
We conducted exhaustive tests on five benchmark colonoscopy image datasets and two colonoscopy video datasets, demonstrating that our method exceeds state-of-the-art models in both image and video segmentation tasks.
arXiv Detail & Related papers (2024-09-14T17:11:37Z)
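
A hedged sketch of the self-prompting pipeline described above: YOLOv8 box predictions are passed to SAM 2 as box prompts, so no manual clicks are needed. The model names ("yolov8n.pt", "facebook/sam2-hiera-large") are placeholder weights, not the paper's released ones.

```python
import numpy as np
from PIL import Image
from ultralytics import YOLO
from sam2.sam2_image_predictor import SAM2ImagePredictor

detector = YOLO("yolov8n.pt")  # a polyp-trained detector in practice
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("frame.png").convert("RGB"))
boxes = detector(image)[0].boxes.xyxy.cpu().numpy()  # (N, 4) xyxy boxes

predictor.set_image(image)
masks = []
for box in boxes:  # each detection becomes a box prompt, no clicks needed
    mask, score, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])
```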
- Polyp SAM 2: Advancing Zero-shot Polyp Segmentation in Colorectal Cancer Detection [18.61909523131399]
Polyp segmentation plays a crucial role in the early detection and diagnosis of colorectal cancer.
Recently, Meta AI Research released a general Segment Anything Model 2 (SAM 2), which has demonstrated promising performance in several segmentation tasks.
In this manuscript, we evaluate the performance of SAM 2 in segmenting polyps under various prompted settings.
arXiv Detail & Related papers (2024-08-12T02:10:18Z)
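
The "various prompted settings" evaluated above typically include point clicks and bounding boxes. A minimal sketch of both settings with the SAM 2 image predictor; the image, coordinates, and checkpoint are illustrative stand-ins.

```python
import numpy as np
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in colonoscopy frame
predictor.set_image(image)

# Setting 1: a single positive click near the suspected polyp center.
point_masks, _, _ = predictor.predict(
    point_coords=np.array([[320, 240]]), point_labels=np.array([1]))

# Setting 2: a bounding-box prompt, e.g. derived from ground truth.
box_masks, _, _ = predictor.predict(box=np.array([200, 150, 440, 330]))
```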
- ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
Its cross-branch feature augmentation (CFA) module pairs a trainable CNN encoder branch with a frozen ViT encoder, injecting domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z)
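
A hedged sketch of the cross-branch idea: a trainable CNN branch supplies local, domain-specific features that are fused with those of a frozen ViT encoder. The fusion below (concatenation plus a 1x1 convolution) is an illustration, not ASPS's exact CFA design.

```python
import torch
import torch.nn as nn

class CrossBranchFusion(nn.Module):
    def __init__(self, vit_encoder: nn.Module, vit_dim=256, cnn_dim=64):
        super().__init__()
        self.vit = vit_encoder
        for p in self.vit.parameters():  # ViT branch stays frozen
            p.requires_grad = False
        self.cnn = nn.Sequential(        # trainable, domain-specific branch
            nn.Conv2d(3, cnn_dim, 3, stride=16, padding=1), nn.ReLU(),
            nn.Conv2d(cnn_dim, cnn_dim, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(vit_dim + cnn_dim, vit_dim, 1)

    def forward(self, x):
        with torch.no_grad():
            vit_feat = self.vit(x)       # (B, vit_dim, H/16, W/16) assumed
        cnn_feat = self.cnn(x)           # matched spatial resolution
        return self.fuse(torch.cat([vit_feat, cnn_feat], dim=1))
```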
- Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation [45.15517909664628]
Polyp segmentation plays a vital role in accurately locating polyps at an early stage.
However, pixel-wise annotation of polyp images by physicians during diagnosis is both time-consuming and expensive.
We propose a novel SAM-guided Collaborative Learning Network (SAM-CLNet) for scribble-supervised polyp segmentation.
arXiv Detail & Related papers (2023-12-01T03:07:13Z)
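
A minimal sketch of how scribble supervision and SAM guidance can be combined in a loss: partial cross-entropy on the sparsely annotated pixels plus a consistency term against a SAM-generated pseudo mask. SAM-CLNet's actual formulation differs; this only illustrates the scribble-supervised setup.

```python
import torch
import torch.nn.functional as F

def scribble_loss(logits, scribble, sam_pseudo_mask, lam=0.5):
    """logits: (B,1,H,W) predictions; scribble: (B,1,H,W) with 1=fg, 0=bg,
    -1=unlabeled; sam_pseudo_mask: (B,1,H,W) binary mask from prompting SAM."""
    labeled = scribble >= 0                            # annotated pixels only
    pce = F.binary_cross_entropy_with_logits(
        logits[labeled], scribble[labeled].float())    # partial cross-entropy
    consistency = F.mse_loss(torch.sigmoid(logits),
                             sam_pseudo_mask.float())  # SAM-guided term
    return pce + lam * consistency
```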
- MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation [58.53672866662472]
We introduce a modality-agnostic SAM adaptation framework named MA-SAM.
Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments.
By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data.
arXiv Detail & Related papers (2023-09-16T02:41:53Z)
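
A hedged sketch of a 3D adapter in the spirit described above: a low-dimensional bottleneck with a depthwise 3D convolution lets ViT features from stacked 2D slices exchange information along the depth axis, while the pre-trained 2D backbone stays frozen. Dimensions and placement are illustrative, not MA-SAM's published configuration.

```python
import torch
import torch.nn as nn

class Adapter3D(nn.Module):
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.conv3d = nn.Conv3d(bottleneck, bottleneck, kernel_size=3,
                                padding=1, groups=bottleneck)  # depthwise
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, tokens, depth):
        # tokens: (B*depth, H, W, C) ViT features from stacked 2D slices
        bd, h, w, c = tokens.shape
        x = self.act(self.down(tokens))
        x = x.view(-1, depth, h, w, x.shape[-1]).permute(0, 4, 1, 2, 3)
        x = self.act(self.conv3d(x))                # mix along the depth axis
        x = x.permute(0, 2, 3, 4, 1).reshape(bd, h, w, -1)
        return tokens + self.up(x)                  # residual adapter update
```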
- Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation? [0.0]
Polyp-SAM++, a text prompt-aided SAM, uses text prompting for more robust and precise polyp segmentation.
We evaluate the performance of this text-guided SAM on the polyp segmentation task across benchmark datasets.
arXiv Detail & Related papers (2023-08-12T17:45:39Z)
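
A sketch of the text-guided pipeline: a grounding model converts the text prompt into boxes that then prompt SAM. `text_to_boxes` is a hypothetical placeholder for an open-vocabulary detector such as GroundingDINO; Polyp-SAM++'s exact components may differ.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def text_to_boxes(image: np.ndarray, prompt: str) -> np.ndarray:
    """Hypothetical stand-in: run an open-vocabulary detector and return
    (N, 4) xyxy boxes for regions matching the text prompt."""
    raise NotImplementedError

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in colonoscopy frame
predictor.set_image(image)
for box in text_to_boxes(image, "polyp"):        # text prompt -> box prompts
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
```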
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
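
The adapter approach named above inserts small trainable modules into SAM's transformer blocks while the pre-trained weights stay frozen. A minimal residual bottleneck adapter follows; the dimensions are illustrative rather than Med-SA's published configuration.

```python
import torch.nn as nn

class MedAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project."""
    def __init__(self, dim=768, bottleneck=48):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))

    def forward(self, x):        # x: (B, tokens, dim) from a ViT block
        return x + self.block(x)  # only these weights are trained
```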
- Can SAM Segment Polyps? [43.259797663208865]
Recently, Meta AI Research released a general Segment Anything Model (SAM), which has demonstrated promising performance in several segmentation tasks.
In this report, we evaluate the performance of SAM in segmenting polyps under unprompted settings.
arXiv Detail & Related papers (2023-04-15T15:41:10Z)
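
"Unprompted" corresponds to SAM's automatic (segment-everything) mode, sketched below with `SamAutomaticMaskGenerator`; the checkpoint path and the candidate-selection step are assumptions for illustration.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
generator = SamAutomaticMaskGenerator(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in colonoscopy frame
masks = generator.generate(image)                # list of candidate masks
# Without prompts there is no notion of "the polyp": evaluation must pick the
# candidate that best matches the ground truth, e.g. by maximum overlap.
best = max(masks, key=lambda m: m["predicted_iou"]) if masks else None
```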
- PraNet: Parallel Reverse Attention Network for Polyp Segmentation [155.93344756264824]
We propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images.
We first aggregate the features in high-level layers using a parallel partial decoder (PPD).
In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues.
arXiv Detail & Related papers (2020-06-13T08:13:43Z)
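
A hedged sketch of the reverse attention idea: erasing the currently predicted region (1 minus the sigmoid of the coarse prediction) steers the module toward missed regions and boundaries, and the refined residual updates the prediction. Channel sizes are illustrative, not PraNet's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, feat, coarse_pred):
        # coarse_pred: (B,1,h,w) logits from a deeper decoder stage
        pred = F.interpolate(coarse_pred, size=feat.shape[2:],
                             mode="bilinear", align_corners=False)
        reverse = 1 - torch.sigmoid(pred)       # attend to non-object regions
        residual = self.refine(feat * reverse)  # refine erased-region features
        return pred + residual                  # updated prediction logits
```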
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences arising from its use.