Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation?
- URL: http://arxiv.org/abs/2308.06623v1
- Date: Sat, 12 Aug 2023 17:45:39 GMT
- Title: Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation?
- Authors: Risab Biswas
- Abstract summary: Polyp-SAM++, a text prompt-aided SAM, uses text prompting for more robust and precise polyp segmentation.
We will evaluate the performance of a text-guided SAM on the polyp segmentation task on benchmark datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Meta recently released the Segment Anything Model (SAM), a
general-purpose segmentation model. SAM has shown promising results in a wide
variety of segmentation tasks, including medical image segmentation. Within
medical image segmentation, polyp segmentation holds a position of high
importance, yet building a model that is both robust and precise remains
challenging. Polyp segmentation is a fundamental task to ensure better
diagnosis and treatment of colorectal cancer. As such, in this study we examine
how Polyp-SAM++, a text prompt-aided SAM, can leverage text prompting for more
robust and precise polyp segmentation. We evaluate the performance of a
text-guided SAM on the polyp segmentation task on benchmark datasets, and
compare the results of text-guided SAM against unprompted SAM. With this study,
we hope to advance the field of polyp segmentation and inspire more intriguing
research. The code and other details will be made publicly available soon at
https://github.com/RisabBiswas/Polyp-SAM++.
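The abstract does not spell out how the text prompt reaches SAM. A common way to realize text-guided segmentation (as in Grounded-SAM-style pipelines) is to first convert the text into a bounding box with a grounding model and then pass that box to SAM as a prompt. The sketch below illustrates this flow only; `ground_text_to_box` and `sam_segment_with_box` are hypothetical stand-ins, not the paper's actual code, which in practice would wrap models such as GroundingDINO and Meta's `segment-anything` predictor.

```python
# Hypothetical sketch of a text-guided SAM pipeline: a grounding model turns
# the text prompt into a bounding box, which then prompts SAM for a mask.
# Both helper functions are illustrative stand-ins for real models.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1) in pixel coordinates


def ground_text_to_box(image: List[List[int]], prompt: str) -> Box:
    """Stand-in for a text-grounding model: returns a box for the prompt."""
    h, w = len(image), len(image[0])
    # Placeholder: pretend the target occupies the central quarter of the image.
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)


def sam_segment_with_box(image: List[List[int]], box: Box) -> List[List[int]]:
    """Stand-in for SAM's box-prompted prediction: returns a binary mask."""
    x0, y0, x1, y1 = box
    return [[1 if (x0 <= x < x1 and y0 <= y < y1) else 0
             for x in range(len(image[0]))] for y in range(len(image))]


def text_guided_segment(image: List[List[int]], prompt: str) -> List[List[int]]:
    """Text prompt -> bounding box -> SAM mask."""
    box = ground_text_to_box(image, prompt)
    return sam_segment_with_box(image, box)


if __name__ == "__main__":
    img = [[0] * 8 for _ in range(8)]  # toy 8x8 "image"
    mask = text_guided_segment(img, "polyp")
    print(sum(sum(row) for row in mask))  # area of the predicted mask
```

The key design point is that SAM itself is unchanged: the text only determines *where* the box prompt lands, so any improvement over unprompted SAM comes from better localization, not a different mask decoder.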
Related papers
- Polyp SAM 2: Advancing Zero shot Polyp Segmentation in Colorectal Cancer Detection [18.61909523131399]
Polyp segmentation plays a crucial role in the early detection and diagnosis of colorectal cancer.
Recently, Meta AI Research released a general Segment Anything Model 2 (SAM 2), which has demonstrated promising performance in several segmentation tasks.
In this manuscript, we evaluate the performance of SAM 2 in segmenting polyps under various prompted settings.
arXiv Detail & Related papers (2024-08-12T02:10:18Z)
- SAM-CP: Marrying SAM with Composable Prompts for Versatile Segmentation [88.80792308991867]
The Segment Anything Model (SAM) has shown the ability to group image pixels into patches, but applying it to semantic-aware segmentation still faces major challenges.
This paper presents SAM-CP, a simple approach that establishes two types of composable prompts beyond SAM and composes them for versatile segmentation.
Experiments show that SAM-CP achieves semantic, instance, and panoptic segmentation in both open and closed domains.
arXiv Detail & Related papers (2024-07-23T17:47:25Z)
- ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z)
- Moving Object Segmentation: All You Need Is SAM (and Flow) [82.78026782967959]
We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM with the ability of flow to discover and group moving objects.
In the first model, we adapt SAM to take optical flow, rather than RGB, as an input. In the second, SAM takes RGB as an input, and flow is used as a segmentation prompt.
These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin in both single and multi-object benchmarks.
arXiv Detail & Related papers (2024-04-18T17:59:53Z)
- Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero shot Medical Image Segmentation [10.444726122035133]
We propose a simple unified framework, SaLIP, for organ segmentation.
SAM is used for part based segmentation within the image, followed by CLIP to retrieve the mask corresponding to the region of interest.
Finally, SAM is prompted by the retrieved ROI to segment a specific organ.
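The three-stage cascade described above can be sketched as follows. Everything named here is a hypothetical placeholder: the real system would wrap SAM's automatic mask generator and CLIP's image-text similarity scoring, neither of which is reproduced here.

```python
# Hypothetical sketch of the SaLIP cascade:
#   (1) SAM proposes part masks across the image,
#   (2) CLIP scores each proposed region against the target organ's text,
#   (3) SAM is re-prompted with the best-scoring region's box.
# All three helper functions are illustrative stand-ins, not the paper's code.


def sam_propose_masks(image):
    """Stand-in for SAM's automatic mask generation: a list of (box, mask)."""
    return [((0, 0, 4, 4), "mask-A"), ((4, 4, 8, 8), "mask-B")]


def clip_score(image, box, text):
    """Stand-in for CLIP image-text similarity of the crop at `box`."""
    # Placeholder scoring: pretend the lower-right crop matches "liver".
    return 1.0 if (text == "liver" and box == (4, 4, 8, 8)) else 0.1


def sam_segment_with_box(image, box):
    """Stand-in for SAM's box-prompted prediction."""
    return {"box": box, "mask": f"final-mask-for-{box}"}


def salip_segment(image, organ_text):
    proposals = sam_propose_masks(image)            # stage 1: SAM part masks
    best_box, _ = max(
        proposals,
        key=lambda bm: clip_score(image, bm[0], organ_text),
    )                                               # stage 2: CLIP retrieval
    return sam_segment_with_box(image, best_box)    # stage 3: SAM with ROI prompt
```

Because each stage only consumes the previous stage's output, the cascade needs no training: it composes two frozen foundation models at test time, which is what makes the approach zero-shot.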
arXiv Detail & Related papers (2024-04-09T14:56:34Z)
- Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images [27.14641973632063]
This paper focuses on using weak supervision -- annotation from related tasks -- to induce a segmenter.
Recent foundation models, such as Segment Anything (SAM), can use prompts to leverage additional supervision during inference.
All SAM-based solutions hugely outperform existing weakly supervised image segmentation models, obtaining 9-15 pt Dice gains.
arXiv Detail & Related papers (2023-11-29T11:18:48Z)
- AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation [49.59991322513561]
We propose an adaptive modification of Segment-Anything (SAM) that can adjust to new datasets quickly and efficiently.
AdaptiveSAM uses free-form text as prompt and can segment the object of interest with just the label name as prompt.
Our experiments show that AdaptiveSAM outperforms current state-of-the-art methods on various medical imaging datasets.
arXiv Detail & Related papers (2023-08-07T17:12:54Z)
- Polyp-SAM: Transfer SAM for Polyp Segmentation [2.4492242722754107]
Segment Anything Model (SAM) has recently gained much attention in both natural and medical image segmentation.
We propose Polyp-SAM, a fine-tuned SAM model for polyp segmentation, and compare its performance to several state-of-the-art polyp segmentation models.
Our Polyp-SAM achieves state-of-the-art performance on two datasets and impressive performance on three datasets, with Dice scores all above 88%.
arXiv Detail & Related papers (2023-04-29T16:11:06Z)
- Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation [51.770805270588625]
The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation.
Recent studies and individual experiments have shown that SAM underperforms in medical image segmentation.
We propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model.
arXiv Detail & Related papers (2023-04-25T07:34:22Z)
- Can SAM Segment Polyps? [43.259797663208865]
Recently, Meta AI Research released a general Segment Anything Model (SAM), which has demonstrated promising performance in several segmentation tasks.
In this report, we evaluate the performance of SAM in segmenting polyps under unprompted settings.
arXiv Detail & Related papers (2023-04-15T15:41:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.