Boosting SAM for Cross-Domain Few-Shot Segmentation via Conditional Point Sparsification
- URL: http://arxiv.org/abs/2602.05218v1
- Date: Thu, 05 Feb 2026 02:17:38 GMT
- Title: Boosting SAM for Cross-Domain Few-Shot Segmentation via Conditional Point Sparsification
- Authors: Jiahao Nie, Yun Xing, Wenbin An, Qingsong Zhao, Jiawei Shao, Yap-Peng Tan, Alex C. Kot, Shijian Lu, Xuelong Li
- Abstract summary: We propose Conditional Point Sparsification (CPS), a training-free approach that adaptively guides SAM interactions for cross-domain images based on reference exemplars. CPS outperforms existing training-free SAM-based methods across diverse CD-FSS datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motivated by the success of the Segment Anything Model (SAM) in promptable segmentation, recent studies leverage SAM to develop training-free solutions for few-shot segmentation, which aims to predict object masks in the target image based on a few reference exemplars. These SAM-based methods typically rely on point matching between reference and target images and use the matched dense points as prompts for mask prediction. However, we observe that dense points perform poorly in Cross-Domain Few-Shot Segmentation (CD-FSS), where target images are from medical or satellite domains. We attribute this issue to large domain shifts that disrupt the point-image interactions learned by SAM, and find that point density plays a crucial role under such conditions. To address this challenge, we propose Conditional Point Sparsification (CPS), a training-free approach that adaptively guides SAM interactions for cross-domain images based on reference exemplars. Leveraging ground-truth masks, the reference images provide reliable guidance for adaptively sparsifying dense matched points, enabling more accurate segmentation results. Extensive experiments demonstrate that CPS outperforms existing training-free SAM-based methods across diverse CD-FSS datasets.
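The match-then-sparsify pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the random features, cosine-similarity matching, and the greedy distance-based pruning rule are all simplifying assumptions standing in for CPS's reference-mask-conditioned sparsification. The selected points would then be passed to SAM as point prompts.

```python
import numpy as np

def match_points(ref_feats, tgt_feats, ref_mask, top_k=50):
    """Dense point matching: score each target pixel by its best cosine
    similarity to any reference foreground pixel, keep the top_k."""
    C = ref_feats.shape[-1]
    ref = ref_feats[ref_mask > 0]                      # (N, C) foreground features
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    h, w, _ = tgt_feats.shape
    tgt = tgt_feats.reshape(-1, C)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = ref @ tgt.T                                  # (N, h*w) similarity matrix
    best = sim.max(axis=0)                             # best match score per target pixel
    idx = np.argsort(best)[::-1][:top_k]               # dense candidate point prompts
    ys, xs = np.unravel_index(idx, (h, w))
    return np.stack([xs, ys], axis=1), best[idx]

def sparsify_points(points, scores, min_dist=8.0, max_points=10):
    """Greedy sparsification: keep high-scoring points that lie at least
    min_dist pixels apart (a crude stand-in for CPS's adaptive rule)."""
    keep = []
    for p, s in sorted(zip(points.tolist(), scores.tolist()), key=lambda t: -t[1]):
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= min_dist for q in keep):
            keep.append(p)
        if len(keep) == max_points:
            break
    return np.array(keep)
```

In a real pipeline the features would come from an image encoder and the sparse points would be fed to SAM's predictor as positive point prompts; the intuition from the paper is that fewer, well-spread points are more robust to domain shift than the full dense set.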
Related papers
- ConformalSAM: Unlocking the Potential of Foundational Segmentation Models in Semi-Supervised Semantic Segmentation with Conformal Prediction [57.930531826380836]
This work explores whether a foundational segmentation model can address label scarcity in pixel-level vision tasks by acting as an annotator for unlabeled images. We propose ConformalSAM, a novel SSSS framework which first calibrates the foundation model using the target domain's labeled data and then filters out unreliable pixel labels of unlabeled data.
arXiv Detail & Related papers (2025-07-21T17:02:57Z) - Segment Concealed Objects with Incomplete Supervision [63.637733655439334]
Incompletely-Supervised Concealed Object Segmentation (ISCOS) involves segmenting objects that seamlessly blend into their surrounding environments. This task remains highly challenging due to the limited supervision provided by the incompletely annotated training data. In this paper, we introduce the first unified method for ISCOS to address these challenges.
arXiv Detail & Related papers (2025-06-10T16:25:15Z) - CMaP-SAM: Contraction Mapping Prior for SAM-driven Few-shot Segmentation [37.596987175531275]
Few-shot segmentation (FSS) aims to segment new classes using few annotated images. Recent FSS methods have shown considerable improvements by leveraging the Segment Anything Model (SAM). We propose CMaP-SAM, a novel framework that introduces contraction mapping theory to optimize position priors.
arXiv Detail & Related papers (2025-04-07T13:19:16Z) - SAM-MPA: Applying SAM to Few-shot Medical Image Segmentation using Mask Propagation and Auto-prompting [6.739803086387235]
Medical image segmentation often faces the challenge of prohibitively expensive annotation costs.
We propose leveraging the Segment Anything Model (SAM), pre-trained on over 1 billion masks.
We develop SAM-MPA, an innovative SAM-based framework for few-shot medical image segmentation.
arXiv Detail & Related papers (2024-11-26T12:12:12Z) - Adaptive Prompt Learning with SAM for Few-shot Scanning Probe Microscope Image Segmentation [11.882111844381098]
Segment Anything Model (SAM) has demonstrated strong performance in image segmentation of natural scene images.
SAM's effectiveness diminishes markedly when applied to specific scientific domains, such as Scanning Probe Microscope (SPM) images.
We propose an Adaptive Prompt Learning with SAM framework tailored for few-shot SPM image segmentation.
arXiv Detail & Related papers (2024-10-16T13:38:01Z) - Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend SAM to Few-shot Semantic Segmentation (FSS).
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z) - Adapting Segment Anything Model for Unseen Object Instance Segmentation [70.60171342436092]
Unseen Object Instance Segmentation (UOIS) is crucial for autonomous robots operating in unstructured environments.
We propose UOIS-SAM, a data-efficient solution for the UOIS task.
UOIS-SAM integrates two key components: (i) a Heatmap-based Prompt Generator (HPG) to generate class-agnostic point prompts with precise foreground prediction, and (ii) a Hierarchical Discrimination Network (HDNet) that adapts SAM's mask decoder.
arXiv Detail & Related papers (2024-09-23T19:05:50Z) - One Shot is Enough for Sequential Infrared Small Target Segmentation [9.354927663020586]
Infrared small target sequences exhibit strong similarities between frames and contain rich contextual information.
We propose a one-shot, training-free method that adapts SAM's zero-shot generalization capability to sequential infrared small target segmentation (IRSTS).
Experiments demonstrate that our method requires only one shot to achieve comparable performance to state-of-the-art IRSTS methods.
arXiv Detail & Related papers (2024-08-09T02:36:56Z) - CycleSAM: Few-Shot Surgical Scene Segmentation with Cycle- and Scene-Consistent Feature Matching [2.595014480295933]
CycleSAM is an improved visual prompt learning approach that employs a data-efficient training phase and enforces a series of soft constraints. We find that CycleSAM outperforms existing few-shot SAM approaches by a factor of 2-4x in both 1-shot and 5-shot settings.
arXiv Detail & Related papers (2024-07-09T12:08:07Z) - BLO-SAM: Bi-level Optimization Based Overfitting-Preventing Finetuning of SAM [37.1263294647351]
We introduce BLO-SAM, which finetunes the Segment Anything Model (SAM) based on bi-level optimization (BLO).
BLO-SAM reduces the risk of overfitting by training the model's weight parameters and the prompt embedding on two separate subsets of the training dataset.
Results demonstrate BLO-SAM's superior performance over various state-of-the-art image semantic segmentation methods.
arXiv Detail & Related papers (2024-02-26T06:36:32Z) - Semantic Attention and Scale Complementary Network for Instance Segmentation in Remote Sensing Images [54.08240004593062]
We propose an end-to-end multi-category instance segmentation model, which consists of a Semantic Attention (SEA) module and a Scale Complementary Mask Branch (SCMB). The SEA module contains a simple fully convolutional semantic segmentation branch with extra supervision to strengthen the activation of instances of interest on the feature map. The SCMB extends the original single mask branch to trident mask branches and introduces complementary mask supervision at different scales.
arXiv Detail & Related papers (2021-07-25T08:53:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.