Prototype Refinement Network for Few-Shot Segmentation
- URL: http://arxiv.org/abs/2002.03579v2
- Date: Sat, 9 May 2020 07:17:59 GMT
- Title: Prototype Refinement Network for Few-Shot Segmentation
- Authors: Jinlu Liu and Yongqiang Qin
- Abstract summary: We propose a Prototype Refinement Network (PRNet) to tackle the challenge of few-shot segmentation.
It first learns to bidirectionally extract prototypes from both support and query images of the known classes.
PRNet significantly outperforms existing methods by a large margin of 13.1% in the 1-shot setting.
- Score: 6.777019450570474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot segmentation aims to segment new classes with only a few
annotated images provided. It is more challenging than traditional semantic
segmentation, which segments known classes with abundant annotated images. In
this paper, we propose a Prototype Refinement Network (PRNet) to tackle the
challenge of few-shot segmentation. It first learns to bidirectionally extract
prototypes from both support and query images of the known classes.
Furthermore, to extract representative prototypes of the new classes, we use
adaptation and fusion for prototype refinement. The adaptation step, directly
implemented by retraining, enables the model to learn new concepts. Prototype
fusion, proposed here for the first time, fuses support prototypes with query
prototypes, incorporating knowledge from both sides. It refines prototypes
effectively without introducing extra learnable parameters. In this way, the
prototypes become more discriminative in low-data regimes. Experiments on
PASCAL-$5^i$ and COCO-$20^i$ demonstrate the superiority of our method. On
COCO-$20^i$ in particular, PRNet significantly outperforms existing methods by
a large margin of 13.1\% in the 1-shot setting.
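The extraction and fusion steps described in the abstract can be sketched in a few lines. Masked average pooling for prototype extraction and a simple convex combination for fusion are common-practice assumptions in prototype-based segmentation, not the paper's exact formulation; `alpha` is a hypothetical mixing weight.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Extract a class prototype by averaging features inside the mask.

    features: (H, W, C) feature map; mask: (H, W) binary foreground mask.
    Returns a (C,) prototype vector.
    """
    weights = mask[..., None]                      # (H, W, 1)
    total = (features * weights).sum(axis=(0, 1))  # (C,)
    return total / max(weights.sum(), 1e-8)

def fuse_prototypes(support_proto, query_proto, alpha=0.5):
    """Parameter-free prototype fusion: a convex combination of the support
    prototype and the prototype estimated from the query prediction.
    alpha is a hypothetical mixing weight, not taken from the paper."""
    return alpha * support_proto + (1 - alpha) * query_proto

# Toy example: extract a support and a query prototype, then fuse them.
feats_s = np.random.rand(4, 4, 8)
feats_q = np.random.rand(4, 4, 8)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
p_s = masked_average_pooling(feats_s, mask)
p_q = masked_average_pooling(feats_q, mask)
p = fuse_prototypes(p_s, p_q)
```

Because the fusion is a fixed combination rather than a learned layer, it adds no parameters, which matches the abstract's claim of refinement without extra learnable weights.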
Related papers
- PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification [19.93681871684493]
We propose our method called PrototypeFormer, which aims to significantly advance traditional few-shot image classification approaches.
We utilize a transformer architecture to build a prototype extraction module, aiming to extract class representations that are more discriminative for few-shot classification.
Despite its simplicity, the method performs remarkably well, with no bells and whistles.
arXiv Detail & Related papers (2023-10-05T12:56:34Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Interpretable Image Classification with Differentiable Prototypes Assignment [7.660883761395447]
We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared by the classes.
It is obtained by introducing a fully differentiable assignment of prototypes to particular classes.
We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and the Stanford Cars datasets, substantially reducing the number of prototypes.
arXiv Detail & Related papers (2021-12-06T10:03:32Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
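The metric-learning framework this entry refers to (matching each query feature to a class-specific prototype) can be illustrated with nearest-prototype matching under cosine similarity. This is a minimal generic sketch of the framework, not APANet's specific model.

```python
import numpy as np

def cosine_match(query_features, prototypes):
    """Segment by assigning each query feature vector to the prototype
    with the highest cosine similarity.

    query_features: (H, W, C) feature map; prototypes: (K, C), one row
    per class. Returns an (H, W) map of class indices.
    """
    q = query_features / (np.linalg.norm(query_features, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = q @ p.T            # (H, W, K) cosine similarities
    return sims.argmax(axis=-1)
```

Real models typically apply this matching on deep features and upsample the result; the core decision rule per pixel is the argmax shown here.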
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
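The inter-class/intra-class objective above is the standard goal of a contrastive loss over prototypes. The sketch below is a generic InfoNCE-style supervised contrastive loss, assumed here as an illustration rather than the paper's exact dual formulation; `temperature` is a hypothetical hyperparameter.

```python
import numpy as np

def prototype_contrastive_loss(prototypes, labels, temperature=0.1):
    """Contrastive loss over prototype vectors: pulls same-class
    prototypes together and pushes different classes apart.

    prototypes: (N, C) array; labels: (N,) class ids.
    """
    z = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = z @ z.T / temperature
    np.fill_diagonal(sims, -np.inf)                # exclude self-pairs
    logits = sims - sims.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    same = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    # negative log of the probability mass placed on same-class pairs
    pos = np.where(same, probs, 0.0).sum(axis=1)
    return float(-np.log(pos + 1e-8).mean())
```

The loss is small when same-class prototypes are close and cross-class prototypes are far apart, which is exactly the geometry the summary describes.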
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Learning Class-level Prototypes for Few-shot Learning [24.65076873131432]
Few-shot learning aims to recognize new categories using very few labeled samples.
We propose a framework for few-shot classification, which can learn to generate preferable prototypes from few support data.
arXiv Detail & Related papers (2021-08-25T06:33:52Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification due to incomplete construction of sample pairs with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- Adaptive Prototype Learning and Allocation for Few-Shot Segmentation [45.74646894293767]
We propose two novel modules, named superpixel-guided clustering (SGC) and guided prototype allocation (GPA), for multiple prototype extraction and allocation.
SGC is a parameter-free and training-free approach, which extracts more representative prototypes by aggregating similar feature vectors.
GPA is able to select matched prototypes to provide more accurate guidance.
By integrating the SGC and GPA together, we propose the Adaptive Superpixel-guided Network (ASGNet), which is a lightweight model and adapts to object scale and shape variation.
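SGC's core idea (parameter-free extraction of multiple prototypes by aggregating similar feature vectors) can be illustrated with plain k-means over foreground features. This is a stand-in for superpixel-guided clustering to show the aggregation idea, not ASGNet's actual algorithm.

```python
import numpy as np

def cluster_prototypes(fg_features, k=3, iters=10, seed=0):
    """Training-free multi-prototype extraction: cluster foreground
    feature vectors and use the cluster means as prototypes.

    fg_features: (N, C) array of foreground feature vectors.
    Returns (k, C) prototypes.
    """
    rng = np.random.default_rng(seed)
    centers = fg_features[rng.choice(len(fg_features), k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest center, then recompute means
        dists = np.linalg.norm(fg_features[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = fg_features[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers
```

Because clustering uses no learned parameters, the resulting prototypes adapt to object scale and shape at test time, which is the property the summary highlights.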
arXiv Detail & Related papers (2021-04-05T13:10:50Z)
- Semantically Meaningful Class Prototype Learning for One-Shot Image Semantic Segmentation [58.96902899546075]
One-shot semantic image segmentation aims to segment the object regions for the novel class with only one annotated image.
Recent works adopt the episodic training strategy to mimic the expected situation at testing time.
We propose to leverage the multi-class label information during the episodic training. It will encourage the network to generate more semantically meaningful features for each category.
arXiv Detail & Related papers (2021-02-22T12:07:35Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.