SPENet: Self-guided Prototype Enhancement Network for Few-shot Medical Image Segmentation
- URL: http://arxiv.org/abs/2509.02993v1
- Date: Wed, 03 Sep 2025 03:59:27 GMT
- Title: SPENet: Self-guided Prototype Enhancement Network for Few-shot Medical Image Segmentation
- Authors: Chao Fan, Xibin Jia, Anqi Xiao, Hongyuan Yu, Zhenghan Yang, Dawei Yang, Hui Xu, Yan Huang, Liang Wang
- Abstract summary: Few-Shot Medical Image Segmentation (FSMIS) aims to segment novel classes of medical objects using only a few labeled images. Prototype-based methods have made significant progress in addressing FSMIS. We propose a Self-guided Prototype Enhancement Network (SPENet).
- Score: 22.774602971340098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-Shot Medical Image Segmentation (FSMIS) aims to segment novel classes of medical objects using only a few labeled images. Prototype-based methods have made significant progress in addressing FSMIS. However, they typically generate a single global prototype for the support image to match with the query image, overlooking intra-class variations. To address this issue, we propose a Self-guided Prototype Enhancement Network (SPENet). Specifically, we introduce a Multi-level Prototype Generation (MPG) module, which enables multi-granularity measurement between the support and query images by simultaneously generating a global prototype and an adaptive number of local prototypes. Additionally, we observe that not all local prototypes in the support image are beneficial for matching, especially when there are substantial discrepancies between the support and query images. To alleviate this issue, we propose a Query-guided Local Prototype Enhancement (QLPE) module, which adaptively refines support prototypes by incorporating guidance from the query image, thus mitigating the negative effects of such discrepancies. Extensive experiments on three public medical datasets demonstrate that SPENet outperforms existing state-of-the-art methods, achieving superior performance.
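The abstract criticizes the standard single-global-prototype baseline: masked average pooling over support features, followed by per-pixel cosine matching against query features. The paper itself ships no code here, so the following is a minimal, hypothetical NumPy sketch of that baseline for orientation; the function names and feature shapes are illustrative assumptions, not from the paper.

```python
import numpy as np

def masked_average_pooling(features, mask, eps=1e-6):
    """Global prototype: average of foreground feature vectors.

    features: (C, H, W) support feature map
    mask:     (H, W) binary foreground mask
    Returns a (C,) prototype vector.
    """
    masked = features * mask[None]                      # zero out background
    return masked.sum(axis=(1, 2)) / (mask.sum() + eps)

def cosine_score_map(query_features, prototype, eps=1e-6):
    """Per-pixel cosine similarity between query features and the prototype."""
    q = query_features / (np.linalg.norm(query_features, axis=0, keepdims=True) + eps)
    p = prototype / (np.linalg.norm(prototype) + eps)
    return np.einsum('chw,c->hw', q, p)                 # (H, W) similarity map

# Toy example with random features standing in for a backbone's output.
rng = np.random.default_rng(0)
support_feat = rng.standard_normal((8, 4, 4))
support_mask = np.zeros((4, 4))
support_mask[1:3, 1:3] = 1.0
proto = masked_average_pooling(support_feat, support_mask)
scores = cosine_score_map(rng.standard_normal((8, 4, 4)), proto)
print(proto.shape, scores.shape)  # (8,) (4, 4)
```

Because the prototype averages over the whole foreground, intra-class variation within the support object is collapsed into one vector; this is precisely the limitation SPENet's multi-level prototypes are designed to address.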
Related papers
- Divide, Conquer and Unite: Hierarchical Style-Recalibrated Prototype Alignment for Federated Medical Image Segmentation [66.82598255715696]
Federated learning enables multiple medical institutions to train a global model without sharing data. Current approaches primarily focus on final-layer features, overlooking critical multi-level cues. We propose FedBCS to bridge feature representation gaps via domain-invariant contextual prototype alignment.
arXiv Detail & Related papers (2025-11-14T04:15:34Z)
- Concentrate on Weakness: Mining Hard Prototypes for Few-Shot Medical Image Segmentation
Few-Shot Medical Image Segmentation (FSMIS) has been widely used to train a model that can perform segmentation from only a few annotated images. We propose to focus more attention on those weaker features that are crucial for a clear segmentation boundary.
arXiv Detail & Related papers (2025-05-28T02:22:05Z)
- Mind the Gap Between Prototypes and Images in Cross-domain Finetuning [64.97317635355124]
We propose a contrastive prototype-image adaptation (CoPA) to adapt different transformations respectively for prototypes and images.
Experiments on Meta-Dataset demonstrate that CoPA achieves state-of-the-art performance more efficiently.
arXiv Detail & Related papers (2024-10-16T11:42:11Z)
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed DAPSAM) to segment medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z)
- Correlation Weighted Prototype-based Self-Supervised One-Shot Segmentation of Medical Images [12.365801596593936]
Medical image segmentation is one of the domains where sufficient annotated data is not available.
We propose a prototype-based self-supervised one-way one-shot learning framework using pseudo-labels generated from superpixels.
We show that the proposed simple but potent framework performs at par with the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-12T15:38:51Z)
- Support-Query Prototype Fusion Network for Few-shot Medical Image Segmentation [7.6695642174485705]
Few-shot learning, which utilizes a small amount of labeled data to generalize to unseen classes, has emerged as a critical research area.
We propose a novel Support-Query Prototype Fusion Network (SQPFNet) to mitigate this drawback.
Evaluation results on two public datasets, SABS and CMR, show that SQPFNet achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-05-13T07:31:16Z)
- Query-guided Prototype Evolution Network for Few-Shot Segmentation [85.75516116674771]
We present a new method that integrates query features into the generation process of foreground and background prototypes. Experimental results on the PASCAL-$5^i$ and COCO-$20^i$ datasets attest to the substantial enhancements achieved by QPENet.
arXiv Detail & Related papers (2024-03-11T07:50:40Z)
- Partition-A-Medical-Image: Extracting Multiple Representative Sub-regions for Few-shot Medical Image Segmentation [23.926487942901872]
Few-shot Medical Image Segmentation (FSMIS) is a promising solution for medical image segmentation tasks.
We present an approach to extract multiple representative sub-regions from a given support medical image.
We then introduce a novel Prototypical Representation Debiasing (PRD) module based on a two-way elimination mechanism.
arXiv Detail & Related papers (2023-09-20T09:31:57Z)
- Few-Shot Medical Image Segmentation via a Region-enhanced Prototypical Transformer [20.115149216170327]
Region-enhanced Prototypical Transformer (RPT) is a few-shot learning-based method to mitigate the effects of large intra-class diversity/bias.
By stacking BaT blocks, the proposed RPT can iteratively optimize generated regional prototypes and finally produce rectified and more accurate global prototypes.
arXiv Detail & Related papers (2023-09-09T15:39:38Z)
- Holistic Prototype Attention Network for Few-Shot VOS [74.25124421163542]
Few-shot video object segmentation (FSVOS) aims to segment dynamic objects of unseen classes by resorting to a small set of support images.
We propose a holistic prototype attention network (HPAN) for advancing FSVOS.
arXiv Detail & Related papers (2023-07-16T03:48:57Z)
- Prototype Mixture Models for Few-shot Semantic Segmentation [50.866870384596446]
Few-shot segmentation is challenging because objects within the support and query images could significantly differ in appearance and pose.
We propose prototype mixture models (PMMs), which correlate diverse image regions with multiple prototypes to enforce the prototype-based semantic representation.
PMMs improve 5-shot segmentation performance on MS-COCO by up to 5.82% with only a moderate cost for model size and inference speed.
arXiv Detail & Related papers (2020-08-10T04:33:17Z)
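Several entries above (PMM, RPT, and SPENet's own MPG module) replace the single global prototype with multiple local prototypes clustered from the support foreground. As a rough illustration of that idea, here is a hypothetical NumPy sketch in which plain k-means stands in for the papers' actual estimation procedures (e.g. PMM's EM-based mixture fitting); names, the choice of k, and shapes are assumptions for illustration only.

```python
import numpy as np

def local_prototypes(features, mask, k=3, iters=10, seed=0):
    """Cluster foreground support features into k local prototypes via k-means.

    features: (C, H, W) support feature map
    mask:     (H, W) binary foreground mask (needs at least k foreground pixels)
    Returns a (k, C) array of local prototypes.
    """
    fg = features.reshape(features.shape[0], -1).T      # (H*W, C)
    fg = fg[mask.reshape(-1) > 0]                       # keep foreground pixels
    rng = np.random.default_rng(seed)
    centers = fg[rng.choice(len(fg), size=k, replace=False)]
    for _ in range(iters):
        dists = ((fg[:, None] - centers[None]) ** 2).sum(-1)   # (N, k)
        assign = dists.argmin(1)
        for j in range(k):
            members = fg[assign == j]
            if len(members):
                centers[j] = members.mean(0)            # recenter each cluster
    return centers

def best_match_map(query_features, prototypes, eps=1e-6):
    """For each query pixel, take the max cosine similarity over all prototypes."""
    c, h, w = query_features.shape
    q = query_features.reshape(c, -1).T                 # (H*W, C)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + eps)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + eps)
    return (q @ p.T).max(1).reshape(h, w)               # (H, W) similarity map
```

Matching each query pixel to its nearest local prototype, rather than to one averaged vector, is what lets these methods cope with intra-class variation; SPENet's QLPE module additionally filters or refines the local prototypes using query guidance before matching.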
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.