ASPS: Augmented Segment Anything Model for Polyp Segmentation
- URL: http://arxiv.org/abs/2407.00718v1
- Date: Sun, 30 Jun 2024 14:55:32 GMT
- Title: ASPS: Augmented Segment Anything Model for Polyp Segmentation
- Authors: Huiqian Li, Dingwen Zhang, Jieru Yao, Longfei Han, Zhongyu Li, Junwei Han
- Abstract summary: The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with the frozen ViT encoder, injecting domain-specific knowledge while enhancing local features and high-frequency details.
- Score: 77.25557224490075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Polyp segmentation plays a pivotal role in colorectal cancer diagnosis. Recently, the emergence of the Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation, leveraging its powerful pre-training capability on large-scale datasets. However, due to the domain gap between natural and endoscopy images, SAM encounters two limitations in achieving effective performance in polyp segmentation. Firstly, its Transformer-based structure prioritizes global and low-frequency information, potentially overlooking local details, and introducing bias into the learned features. Secondly, when applied to endoscopy images, its poor out-of-distribution (OOD) performance results in substandard predictions and biased confidence output. To tackle these challenges, we introduce a novel approach named Augmented SAM for Polyp Segmentation (ASPS), equipped with two modules: Cross-branch Feature Augmentation (CFA) and Uncertainty-guided Prediction Regularization (UPR). CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge while enhancing local features and high-frequency details. Moreover, UPR ingeniously leverages SAM's IoU score to mitigate uncertainty during the training procedure, thereby improving OOD performance and domain generalization. Extensive experimental results demonstrate the effectiveness and utility of the proposed method in improving SAM's performance in polyp segmentation. Our code is available at https://github.com/HuiqianLi/ASPS.
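The abstract names the two modules but not their internals, so the following is a minimal PyTorch sketch of the two ideas only: a CFA-style fusion of a trainable CNN branch with frozen ViT features, and a UPR-style loss weighted by SAM's predicted IoU. The layer sizes, additive fusion, and per-sample weighting are illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossBranchFeatureAugmentation(nn.Module):
    """Sketch of the CFA idea: fuse a trainable CNN branch (local /
    high-frequency detail) with features from the frozen SAM ViT encoder.
    Additive fusion and layer sizes are assumptions for illustration."""
    def __init__(self, vit_dim=256, cnn_dim=64, out_dim=256):
        super().__init__()
        self.cnn_branch = nn.Sequential(  # trainable, domain-specific branch
            nn.Conv2d(3, cnn_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cnn_dim, cnn_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.proj_cnn = nn.Conv2d(cnn_dim, out_dim, 1)
        self.proj_vit = nn.Conv2d(vit_dim, out_dim, 1)

    def forward(self, image, vit_feat):
        # image: (B, 3, H, W); vit_feat: (B, vit_dim, h, w) from the frozen encoder.
        cnn_feat = self.proj_cnn(self.cnn_branch(image))
        cnn_feat = F.interpolate(cnn_feat, size=vit_feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return self.proj_vit(vit_feat) + cnn_feat  # augmented feature map

def upr_loss(logits, target, pred_iou):
    """Sketch of the UPR idea: use SAM's predicted IoU as a per-sample
    confidence to down-weight uncertain predictions during training.
    The exact regularization used by UPR is an assumption here."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    weight = pred_iou.clamp(0.0, 1.0).view(-1, 1, 1, 1)  # per-sample confidence
    return (weight * bce).mean()
```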
Related papers
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework, termed DAPSAM, for fine-tuning the Segment Anything Model to segment medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z)
- Adaptation of Distinct Semantics for Uncertain Areas in Polyp Segmentation [11.646574658785362]
This work presents a novel architecture, Adaptation of Distinct Semantics for Uncertain Areas in Polyp Segmentation (ADSNet).
ADSNet corrects misclassified details and recovers weak features that would otherwise vanish and go undetected at the final stage.
Experimental results demonstrate strong correction and recovery ability, leading to better segmentation performance than other state-of-the-art methods on the polyp image segmentation task.
arXiv Detail & Related papers (2024-05-13T07:41:28Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
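The summary gives only the framework's name, so the snippet below sketches the generic cross-scale consistency idea behind such semi-supervised methods: predictions on an unlabeled image and on its rescaled copy are pulled toward agreement. The scale factor, MSE agreement term, and `model` interface are assumptions, not DEC-Seg's actual formulation.

```python
import torch
import torch.nn.functional as F

def cross_scale_consistency(model, image, scale=0.5):
    """Generic cross-scale consistency term for semi-supervised training:
    predictions on an unlabeled image should agree with predictions on a
    downscaled copy. Illustrative only; not DEC-Seg's exact losses."""
    logits_full = model(image)                      # (B, 1, H, W) logits
    small = F.interpolate(image, scale_factor=scale,
                          mode="bilinear", align_corners=False)
    logits_small = F.interpolate(model(small), size=logits_full.shape[-2:],
                                 mode="bilinear", align_corners=False)
    # Compare probability maps so both views are pulled toward agreement.
    return F.mse_loss(torch.sigmoid(logits_small), torch.sigmoid(logits_full))
```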
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation [45.15517909664628]
Polyp segmentation plays a vital role in accurately locating polyps at an early stage.
Pixel-wise annotation of polyp images by physicians during diagnosis is both time-consuming and expensive.
We propose a novel SAM-guided Collaborative Learning Network (SAM-CLNet) for scribble-supervised polyp segmentation.
arXiv Detail & Related papers (2023-12-01T03:07:13Z)
- Edge-aware Feature Aggregation Network for Polyp Segmentation [40.3881565207086]
In this study, we present a novel Edge-aware Feature Aggregation Network (EFA-Net) for polyp segmentation.
EFA-Net fully exploits cross-level and multi-scale features to enhance polyp segmentation performance.
Experimental results on five widely adopted colonoscopy datasets show that our EFA-Net outperforms state-of-the-art polyp segmentation methods in terms of generalization and effectiveness.
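As a rough illustration of edge-aware aggregation in general (not EFA-Net's specific modules), the sketch below predicts an edge prior from low-level features and uses it to re-weight fused cross-level features; all layer choices are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAwareAggregation(nn.Module):
    """Illustrative edge-aware fusion: predict an edge map from low-level
    features and use it to emphasize boundary regions in aggregated
    cross-level features. Generic idea only; EFA-Net differs in detail."""
    def __init__(self, low_dim=64, high_dim=256, out_dim=64):
        super().__init__()
        self.edge_head = nn.Conv2d(low_dim, 1, 3, padding=1)
        self.fuse = nn.Conv2d(low_dim + high_dim, out_dim, 3, padding=1)

    def forward(self, low_feat, high_feat):
        # low_feat: (B, low_dim, H, W); high_feat: (B, high_dim, h, w), h < H.
        high_up = F.interpolate(high_feat, size=low_feat.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = F.relu(self.fuse(torch.cat([low_feat, high_up], dim=1)))
        edge = torch.sigmoid(self.edge_head(low_feat))  # (B, 1, H, W) edge prior
        return fused * (1 + edge), edge  # boundary-emphasized features, edge map
```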
arXiv Detail & Related papers (2023-09-19T11:09:38Z)
- Lesion-aware Dynamic Kernel for Polyp Segmentation [49.63274623103663]
We propose a lesion-aware dynamic network (LDNet) for polyp segmentation.
It is a traditional U-shaped encoder-decoder structure augmented with a dynamic kernel generation and updating scheme.
This simple but effective scheme endows our model with powerful segmentation performance and generalization capability.
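A minimal sketch of the dynamic-kernel idea follows: generate a per-image convolution kernel from pooled features and apply it with a grouped convolution. LDNet's actual generation and updating scheme is richer; the shapes and global pooling here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicKernelHead(nn.Module):
    """Illustrative dynamic kernel: predict a per-image 1x1 conv kernel from
    pooled decoder features and apply it to that image's feature map."""
    def __init__(self, feat_dim=64, num_classes=1):
        super().__init__()
        self.num_classes = num_classes
        # Kernel generator: global context -> weights of a 1x1 convolution.
        self.gen = nn.Linear(feat_dim, num_classes * feat_dim)

    def forward(self, feat):
        # feat: (B, C, H, W) decoder features.
        b, c, h, w = feat.shape
        ctx = feat.mean(dim=(2, 3))  # (B, C) global context per image
        kernels = self.gen(ctx).view(b * self.num_classes, c, 1, 1)
        # Grouped conv applies each image's own kernel to its own features.
        out = F.conv2d(feat.reshape(1, b * c, h, w), kernels, groups=b)
        return out.view(b, self.num_classes, h, w)
```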
arXiv Detail & Related papers (2023-01-12T09:53:57Z)
- FCN-Transformer Feature Fusion for Polyp Segmentation [12.62213319797323]
Colonoscopy is widely recognised as the gold standard procedure for the early detection of colorectal cancer.
The manual segmentation of polyps in colonoscopy images is time-consuming.
Deep learning has therefore become important for automating polyp segmentation.
arXiv Detail & Related papers (2022-08-17T15:31:06Z)
- PraNet: Parallel Reverse Attention Network for Polyp Segmentation [155.93344756264824]
We propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images.
We first aggregate the features in high-level layers using a parallel partial decoder (PPD).
In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues.
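Reverse attention is concrete enough to sketch: erase the currently predicted region so the branch attends to missed boundary areas, then predict a residual correction. The layer sizes below are illustrative, not PraNet's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    """Minimal reverse-attention refinement in the spirit of PraNet."""
    def __init__(self, in_dim=256, mid_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_dim, mid_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid_dim, 1, 3, padding=1),
        )

    def forward(self, feat, coarse_pred):
        # feat: (B, C, H, W); coarse_pred: (B, 1, h, w) logits from a deeper stage.
        pred = F.interpolate(coarse_pred, size=feat.shape[-2:],
                             mode="bilinear", align_corners=False)
        reverse = 1.0 - torch.sigmoid(pred)   # attend to the not-yet-predicted region
        residual = self.conv(feat * reverse)  # correction mined from erased features
        return pred + residual                # refined logits
```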
arXiv Detail & Related papers (2020-06-13T08:13:43Z)