Boosting Few-Shot Semantic Segmentation Via Segment Anything Model
- URL: http://arxiv.org/abs/2401.09826v2
- Date: Sat, 20 Jan 2024 07:56:19 GMT
- Title: Boosting Few-Shot Semantic Segmentation Via Segment Anything Model
- Authors: Chen-Bin Feng, Qi Lai, Kangdao Liu, Houcheng Su, Chi-Man Vong
- Abstract summary: In semantic segmentation, accurate prediction masks are crucial for downstream tasks such as medical image analysis and image editing.
Due to the lack of annotated data, few-shot semantic segmentation (FSS) performs poorly in predicting masks with precise contours.
We propose FSS-SAM to boost FSS methods by addressing the issue of inaccurate contours.
- Score: 8.773067974503123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In semantic segmentation, accurate prediction masks are crucial for
downstream tasks such as medical image analysis and image editing. Due to the
lack of annotated data, few-shot semantic segmentation (FSS) performs poorly in
predicting masks with precise contours. Recently, we have noticed that the
large foundation model Segment Anything Model (SAM) performs well at processing
detailed features. Inspired by SAM, we propose FSS-SAM to boost FSS methods by
addressing the issue of inaccurate contours. FSS-SAM is training-free. It
works as a post-processing tool for any FSS method and can improve the
accuracy of predicted masks. Specifically, we use predicted masks from FSS
methods to generate prompts and then use SAM to predict new masks. To avoid
predicting wrong masks with SAM, we propose a prediction result selection (PRS)
algorithm. The algorithm can remarkably decrease wrong predictions. Experimental
results on public datasets show that our method is superior to base FSS methods
in both quantitative and qualitative aspects.
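The described pipeline is simple enough to sketch. Below is a minimal post-processing sketch in Python, assuming the official segment_anything package; the selection rule shown (IoU agreement between SAM's output and the FSS mask, with a threshold tau) is a hypothetical stand-in for the paper's PRS algorithm, whose exact criterion is not given here.
```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def mask_to_prompts(mask):
    """Derive a box prompt and a rough foreground point from a binary FSS mask."""
    ys, xs = np.nonzero(mask)
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
    point = np.array([[xs.mean(), ys.mean()]])
    return box, point

def iou(a, b):
    return np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)

def fss_sam_postprocess(image, fss_mask, predictor, tau=0.5):
    """Refine an FSS mask with SAM, falling back to the FSS mask when SAM drifts."""
    box, point = mask_to_prompts(fss_mask)
    predictor.set_image(image)  # image: HxWx3 uint8 RGB array
    masks, scores, _ = predictor.predict(
        point_coords=point,
        point_labels=np.array([1]),
        box=box,
        multimask_output=True,
    )
    sam_mask = masks[np.argmax(scores)]  # SAM's most confident proposal
    # Hypothetical stand-in for PRS: keep SAM's mask only if it stays
    # consistent with the original FSS prediction.
    return sam_mask if iou(sam_mask, fss_mask) >= tau else fss_mask

# Usage: load a SAM checkpoint once, then post-process any FSS prediction.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
```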
Related papers
- SAMRefiner: Taming Segment Anything Model for Universal Mask Refinement [40.37217744643069]
We propose a universal and efficient approach by adapting SAM to the mask refinement task.
Specifically, we introduce a multi-prompt excavation strategy to mine diverse input prompts for SAM.
We extend our method to SAMRefiner++ by introducing an additional IoU adaption step to further boost the performance of the generic SAMRefiner on the target dataset.
arXiv Detail & Related papers (2025-02-10T18:33:15Z)
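The multi-prompt excavation mentioned in the SAMRefiner summary above mines prompts from a coarse mask. As a rough illustration only (not the paper's actual algorithm), such excavation could derive box, point, and mask prompts like this:
```python
import numpy as np

def excavate_prompts(coarse_mask, n_points=3, seed=0):
    """Mine box, point, and mask prompts from one coarse mask; an
    illustrative stand-in for SAMRefiner's multi-prompt excavation."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(coarse_mask)
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
    # Sample a few interior foreground pixels as positive point prompts.
    idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
    points = np.stack([xs[idx], ys[idx]], axis=1)
    return {
        "box": box,
        "point_coords": points,
        "point_labels": np.ones(len(points), dtype=int),
        # SAM also accepts a low-resolution mask prompt; resizing the coarse
        # mask to SAM's expected 256x256 logit grid is omitted here.
        "mask_input": coarse_mask.astype(np.float32),
    }
```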
- SAM-MPA: Applying SAM to Few-shot Medical Image Segmentation using Mask Propagation and Auto-prompting [6.739803086387235]
Medical image segmentation often faces the challenge of prohibitively expensive annotation costs.
We propose leveraging the Segment Anything Model (SAM), pre-trained on over 1 billion masks.
We develop SAM-MPA, an innovative SAM-based framework for few-shot medical image segmentation.
arXiv Detail & Related papers (2024-11-26T12:12:12Z)
- Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning [63.55145330447408]
Segment Anything Model (SAM) has made great progress in anomaly segmentation tasks due to its impressive generalization ability.
Existing methods that directly apply SAM through prompting often overlook the domain shift issue.
We propose a novel Self-Perception Tuning (SPT) method, aiming to enhance SAM's perception capability for anomaly segmentation.
arXiv Detail & Related papers (2024-11-26T08:33:25Z)
- Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend SAM to few-shot semantic segmentation (FSS).
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z)
- PointSAM: Pointly-Supervised Segment Anything Model for Remote Sensing Images [16.662173255725463]
We propose a novel Pointly-supervised Segment Anything Model named PointSAM.
We conduct experiments on RSI datasets, including WHU, HRSID, and NWPU VHR-10.
The results show that our method significantly outperforms direct testing with SAM, SAM2, and other comparison methods.
arXiv Detail & Related papers (2024-09-20T11:02:18Z)
- From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments [7.01085327371458]
We argue that the Segment Anything Model drastically over-segments images with high corruption levels, resulting in degraded performance.
We employ the ground-truth tool mask to analyze the results of SAM when the best single mask is selected as prediction.
We analyze the Endovis18 and Endovis17 instrument segmentation datasets using synthetic corruptions of various strengths and an In-House dataset featuring counterfactually created real-world corruptions.
arXiv Detail & Related papers (2024-02-28T01:33:49Z)
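The analysis protocol described above (selecting SAM's best single mask against the ground-truth tool mask) amounts to an oracle selection. A minimal sketch, assuming boolean numpy masks:
```python
import numpy as np

def best_mask_by_gt(sam_masks, gt_mask):
    """Oracle analysis step: keep the SAM proposal with the highest IoU
    against the ground-truth tool mask."""
    def iou(a, b):
        return np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)
    ious = [iou(m, gt_mask) for m in sam_masks]
    best = int(np.argmax(ious))
    return sam_masks[best], ious[best]
```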
- Systematic Investigation of Sparse Perturbed Sharpness-Aware Minimization Optimizer [158.2634766682187]
Deep neural networks often suffer from poor generalization due to complex and non-convex loss landscapes.
Sharpness-Aware Minimization (SAM) is a popular solution that smooths the loss landscape by minimizing the change of loss when adding a perturbation to the weights.
In this paper, we propose Sparse SAM (SSAM), an efficient and effective training scheme that achieves sparse perturbation by a binary mask.
arXiv Detail & Related papers (2023-06-30T09:33:41Z)
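For readers unfamiliar with the idea, here is a minimal PyTorch-style sketch of a sharpness-aware step whose weight perturbation is restricted by a binary mask. The mask rule used here (top gradient magnitudes) is one plausible instantiation, not necessarily the paper's Fisher- or random-based strategies.
```python
import torch

def sparse_sam_step(model, loss_fn, data, target, rho=0.05, sparsity=0.5):
    """One sharpness-aware step with a sparse, binary-masked perturbation
    (illustrative; the paper studies Fisher- and random-mask variants)."""
    loss_fn(model(data), target).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    flat = torch.cat([g.abs().flatten() for g in grads])
    k = max(1, int((1 - sparsity) * flat.numel()))
    thresh = torch.topk(flat, k).values.min()  # keep the largest gradients
    norm = flat.norm()
    with torch.no_grad():
        eps = []
        for p, g in zip(model.parameters(), grads):
            mask = (g.abs() >= thresh).float()   # binary perturbation mask
            e = rho * mask * g / (norm + 1e-12)  # masked ascent direction
            p.add_(e)
            eps.append(e)
    model.zero_grad()
    loss_fn(model(data), target).backward()  # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)  # restore weights; an outer optimizer then steps
```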
- How to Efficiently Adapt Large Segmentation Model(SAM) to Medical Images [15.181219203629643]
Segment Anything (SAM) exhibits impressive capabilities in zero-shot segmentation for natural images.
However, when applied to medical images, SAM suffers from noticeable performance drop.
In this work, we propose to freeze SAM encoder and finetune a lightweight task-specific prediction head.
arXiv Detail & Related papers (2023-06-23T18:34:30Z)
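A minimal sketch of the recipe described above (frozen SAM image encoder, lightweight trainable head), assuming the official SAM encoders, which emit 256-channel embeddings; the head architecture here is an assumption, not the paper's exact design.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAMWithTaskHead(nn.Module):
    """Frozen SAM image encoder plus a small trainable prediction head."""
    def __init__(self, sam, num_classes=1):
        super().__init__()
        self.encoder = sam.image_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # keep SAM's encoder frozen
        # Official SAM encoders output (B, 256, 64, 64) for 1024px inputs.
        self.head = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, num_classes, 1),
        )

    def forward(self, x):
        with torch.no_grad():
            feats = self.encoder(x)
        logits = self.head(feats)  # only these weights receive gradients
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)

# Only the lightweight head is optimized during fine-tuning, e.g.:
# optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-4)
```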
- Segment Anything in High Quality [116.39405160133315]
We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability.
Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation.
We show the efficacy of HQ-SAM in a suite of 10 diverse segmentation datasets across different downstream tasks, 8 of which are evaluated in a zero-shot transfer protocol.
arXiv Detail & Related papers (2023-06-02T14:23:59Z)
- Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models [93.85178920914721]
Fine-tuning large pretrained language models on a limited training corpus usually suffers from poor generalization.
We propose a novel optimization procedure, namely FSAM, which introduces a Fisher mask to improve the efficiency and performance of SAM.
We show that FSAM consistently outperforms the vanilla SAM by 0.67 to 1.98 average score among four different pretrained models.
arXiv Detail & Related papers (2022-10-11T14:53:58Z)
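A minimal sketch of estimating such a Fisher mask from empirical squared gradients; the batching and keep ratio are assumptions, and the paper's exact masking schedule may differ.
```python
import torch

def fisher_mask(model, loss_fn, loader, keep_ratio=0.1, n_batches=8):
    """Estimate a binary mask from empirical Fisher information (mean squared
    gradients over a few batches); only the highest-scoring parameters would
    receive the SAM perturbation."""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for i, (x, y) in enumerate(loader):
        if i == n_batches:
            break
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
    flat = torch.cat([f.flatten() for f in fisher])
    k = max(1, int(keep_ratio * flat.numel()))
    thresh = torch.topk(flat, k).values.min()
    return [(f >= thresh).float() for f in fisher]
```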
- Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model [57.77981008219654]
The Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training.
We propose a fully-explored masking strategy, where a text sequence is divided into a certain number of non-overlapping segments.
arXiv Detail & Related papers (2020-10-12T21:28:14Z)
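A minimal sketch of the segment-wise masking described above: the sequence is split into non-overlapping segments and each training instance masks tokens inside only one segment. The masking rate and segment count are assumptions.
```python
import random

def fully_explored_masks(tokens, n_segments=4, mask_token="[MASK]",
                         mask_prob=0.15, seed=0):
    """Build one masked instance per non-overlapping segment, masking
    tokens only inside that segment (a sketch of the strategy above)."""
    random.seed(seed)
    seg_len = max(1, len(tokens) // n_segments)
    instances = []
    for start in range(0, len(tokens), seg_len):
        end = min(start + seg_len, len(tokens))
        masked = list(tokens)
        for i in range(start, end):
            if random.random() < mask_prob:
                masked[i] = mask_token
        instances.append(masked)
    return instances
```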
This list is automatically generated from the titles and abstracts of the papers in this site.