Self-Regularized Prototypical Network for Few-Shot Semantic Segmentation
- URL: http://arxiv.org/abs/2210.16829v1
- Date: Sun, 30 Oct 2022 12:43:07 GMT
- Title: Self-Regularized Prototypical Network for Few-Shot Semantic Segmentation
- Authors: Henghui Ding, Hui Zhang, Xudong Jiang
- Abstract summary: We tackle the few-shot segmentation using a self-regularized network (SRPNet) based on prototype extraction for better utilization of the support information.
A direct yet effective prototype regularization on the support set is proposed in SRPNet, in which the generated prototypes are evaluated and regularized on the support set itself.
Our proposed SRPNet achieves new state-of-the-art performance on 1-shot and 5-shot segmentation benchmarks.
- Score: 31.445316481839335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The deep CNNs in image semantic segmentation typically require a large number
of densely-annotated images for training and have difficulties in generalizing
to unseen object categories. Therefore, few-shot segmentation has been
developed to perform segmentation with just a few annotated examples. In this
work, we tackle the few-shot segmentation using a self-regularized prototypical
network (SRPNet) based on prototype extraction for better utilization of the
support information. The proposed SRPNet extracts class-specific prototype
representations from support images and generates segmentation masks for query
images via a distance metric, the fidelity. A direct yet effective prototype
regularization on the support set is proposed in SRPNet, in which the generated
prototypes are evaluated and regularized on the support set itself. The extent
to which the generated prototypes restore the support mask imposes an upper
limit on performance: the query-set performance can never exceed this limit, no
matter how completely knowledge generalizes from the support set to the query
set. With this prototype regularization, SRPNet fully
exploits knowledge from the support and offers high-quality prototypes that are
representative of each semantic class and, at the same time, discriminative across
different classes. The query performance is further improved by an iterative
query inference (IQI) module that combines a set of regularized prototypes. Our
proposed SRPNet achieves new state-of-the-art performance on 1-shot and 5-shot
segmentation benchmarks.
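The pipeline the abstract describes (extract a class prototype from masked support features, score each query pixel against it with a fidelity-style metric, and check how well the prototype restores the support mask itself) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the backbone features are random toy data, the fidelity here is the standard distributional fidelity F(p, q) = Σᵢ √(pᵢqᵢ) over per-pixel L1-normalized features (the paper's exact formulation may differ), and the IQI module is omitted.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Extract a class prototype by averaging support features over
    foreground pixels. features: (C, H, W); mask: (H, W) binary."""
    mask = mask.astype(features.dtype)
    denom = mask.sum() + 1e-8
    return (features * mask[None]).reshape(features.shape[0], -1).sum(axis=1) / denom

def fidelity_similarity(features, prototype):
    """Per-pixel fidelity F(p, q) = sum_i sqrt(p_i * q_i) between each
    L1-normalized feature vector and the prototype; returns (H, W) in [0, 1]."""
    C, H, W = features.shape
    p = np.abs(features).reshape(C, -1)
    p = p / (p.sum(axis=0, keepdims=True) + 1e-8)   # per-pixel distribution
    q = np.abs(prototype) / (np.abs(prototype).sum() + 1e-8)
    return np.sqrt(p * q[:, None]).sum(axis=0).reshape(H, W)

# Toy support image: channel 0 fires on an "object" occupying the left half.
rng = np.random.default_rng(0)
feat = rng.random((4, 8, 8))
feat[0, :, :4] += 5.0
support_mask = np.zeros((8, 8))
support_mask[:, :4] = 1.0

proto = masked_average_pooling(feat, support_mask)
sim = fidelity_similarity(feat, proto)

# Self-regularization signal: score the prototype on the support set itself.
# How well it separates the support foreground from background bounds what
# any query-set transfer can achieve.
obj_sim = sim[support_mask == 1].mean()
bg_sim = sim[support_mask == 0].mean()
print(obj_sim > bg_sim)
```

By Cauchy-Schwarz the fidelity of two distributions never exceeds 1, so the similarity map is naturally bounded; a prototype that scores its own support background nearly as high as its foreground is exactly the failure case the self-regularization in SRPNet is meant to penalize.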
Related papers
- Correlation Weighted Prototype-based Self-Supervised One-Shot Segmentation of Medical Images [12.365801596593936]
Medical image segmentation is one of the domains where sufficient annotated data is not available.
We propose a prototype-based self-supervised one-way one-shot learning framework using pseudo-labels generated from superpixels.
We show that the proposed simple but potent framework performs at par with the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-12T15:38:51Z)
- Support-Query Prototype Fusion Network for Few-shot Medical Image Segmentation [7.6695642174485705]
Few-shot learning, which utilizes a small amount of labeled data to generalize to unseen classes, has emerged as a critical research area.
We propose a novel Support-Query Prototype Fusion Network (SQPFNet) to mitigate this drawback.
Evaluation results on two public datasets, SABS and CMR, show that SQPFNet achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-05-13T07:31:16Z)
- Beyond the Prototype: Divide-and-conquer Proxies for Few-shot Segmentation [63.910211095033596]
Few-shot segmentation aims to segment unseen-class objects given only a handful of densely labeled samples.
We propose a simple yet versatile framework in the spirit of divide-and-conquer.
Our proposed approach, named divide-and-conquer proxies (DCP), allows for the development of appropriate and reliable information.
arXiv Detail & Related papers (2022-04-21T06:21:14Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each pixel to a learned foreground prototype.
This framework suffers from biased classification due to incomplete construction of sample pairs with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- Semantically Meaningful Class Prototype Learning for One-Shot Image Semantic Segmentation [58.96902899546075]
One-shot semantic image segmentation aims to segment the object regions for the novel class with only one annotated image.
Recent works adopt the episodic training strategy to mimic the expected situation at testing time.
We propose to leverage the multi-class label information during the episodic training, which encourages the network to generate more semantically meaningful features for each category.
arXiv Detail & Related papers (2021-02-22T12:07:35Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.