Adaptive Prototype Learning and Allocation for Few-Shot Segmentation
- URL: http://arxiv.org/abs/2104.01893v1
- Date: Mon, 5 Apr 2021 13:10:50 GMT
- Title: Adaptive Prototype Learning and Allocation for Few-Shot Segmentation
- Authors: Gen Li, Varun Jampani, Laura Sevilla-Lara, Deqing Sun, Jonghyun Kim,
Joongkyu Kim
- Abstract summary: We propose two novel modules, named superpixel-guided clustering (SGC) and guided prototype allocation (GPA), for multiple prototype extraction and allocation.
SGC is a parameter-free and training-free approach, which extracts more representative prototypes by aggregating similar feature vectors.
GPA is able to select matched prototypes to provide more accurate guidance.
By integrating SGC and GPA, we propose the Adaptive Superpixel-guided Network (ASGNet), a lightweight model that adapts to variations in object scale and shape.
- Score: 45.74646894293767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prototype learning is extensively used for few-shot segmentation. Typically,
a single prototype is obtained from the support feature by averaging the global
object information. However, using one prototype to represent all the
information may lead to ambiguities. In this paper, we propose two novel
modules, named superpixel-guided clustering (SGC) and guided prototype
allocation (GPA), for multiple prototype extraction and allocation.
Specifically, SGC is a parameter-free and training-free approach, which
extracts more representative prototypes by aggregating similar feature vectors,
while GPA is able to select matched prototypes to provide more accurate
guidance. By integrating SGC and GPA, we propose the Adaptive
Superpixel-guided Network (ASGNet), a lightweight model that adapts to
variations in object scale and shape. In addition, our network can easily
generalize to k-shot segmentation with substantial improvement and no
additional computational cost. In particular, our evaluations on COCO
demonstrate that ASGNet surpasses the state-of-the-art method by 5% in 5-shot
segmentation.
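As a concrete illustration of the two modules (a sketch under stated assumptions, not the authors' implementation): a plain k-means pass over the masked support features stands in for superpixel-guided clustering, and a cosine-similarity argmax over prototypes implements the allocation idea. All function names, shapes, and the k-means initialization below are assumptions.

```python
import torch
import torch.nn.functional as F

def extract_prototypes(sup_feat, sup_mask, num_protos=5, iters=10):
    """Multiple-prototype extraction: plain k-means over masked support
    features, a training-free stand-in for superpixel-guided clustering.
    sup_feat: (C, H, W); sup_mask: (H, W) binary."""
    fg = sup_feat.flatten(1)[:, sup_mask.flatten().bool()].t()  # (N, C)
    k = min(num_protos, fg.shape[0])
    protos = fg[torch.randperm(fg.shape[0])[:k]].clone()        # (K, C) init
    for _ in range(iters):
        assign = torch.cdist(fg, protos).argmin(dim=1)          # (N,)
        for c in range(k):
            if (assign == c).any():
                protos[c] = fg[assign == c].mean(dim=0)
    return protos

def allocate_prototypes(qry_feat, protos):
    """Guided allocation: each query location is guided by its single
    best-matching prototype under cosine similarity.
    qry_feat: (C, H, W); protos: (K, C)."""
    C, H, W = qry_feat.shape
    q = F.normalize(qry_feat.flatten(1), dim=0)                 # (C, HW)
    p = F.normalize(protos, dim=1)                              # (K, C)
    sim = p @ q                                                 # (K, HW)
    guide = protos[sim.argmax(dim=0)].t().view(C, H, W)         # per-pixel guide
    return guide, sim.max(dim=0).values.view(H, W)              # guide + confidence
```

A decoder would typically consume the returned guide map together with the query features; that part of the network is omitted here.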
Related papers
- Self-Regularized Prototypical Network for Few-Shot Semantic Segmentation [31.445316481839335]
We tackle few-shot segmentation using a self-regularized prototypical network (SRPNet) based on prototype extraction for better utilization of the support information.
A direct yet effective prototype regularization on the support set is proposed in SRPNet, in which the generated prototypes are evaluated and regularized on the support set itself.
Our proposed SRPNet achieves new state-of-the-art performance on 1-shot and 5-shot segmentation benchmarks.
arXiv Detail & Related papers (2022-10-30T12:43:07Z)
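A minimal sketch of the self-regularization idea summarized above, assuming a masked-average prototype and a binary cross-entropy penalty (both assumptions, not SRPNet's exact formulation): the prototype generated from the support must reproduce the support mask it came from.

```python
import torch
import torch.nn.functional as F

def masked_average_prototype(feat, mask):
    # feat: (C, H, W); mask: (H, W) in {0, 1}
    w = mask.flatten().float()                                   # (HW,)
    return (feat.flatten(1) * w).sum(1) / w.sum().clamp(min=1)   # (C,)

def self_regularization_loss(sup_feat, sup_mask, tau=20.0):
    """Evaluate the support prototype on the support set itself:
    cosine similarity to the prototype should reproduce the mask."""
    proto = masked_average_prototype(sup_feat, sup_mask)
    sim = F.cosine_similarity(sup_feat, proto[:, None, None], dim=0)
    return F.binary_cross_entropy_with_logits(tau * sim, sup_mask.float())
```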
- Few-Shot Segmentation via Rich Prototype Generation and Recurrent Prediction Enhancement [12.614578133091168]
We propose a rich prototype generation module (RPGM) and a recurrent prediction enhancement module (RPEM) to reinforce the prototype learning paradigm.
RPGM combines superpixel and K-means clustering to generate rich prototype features with complementary scale relationships.
RPEM uses a recurrent mechanism to build a round-way propagation decoder.
arXiv Detail & Related papers (2022-10-03T08:46:52Z)
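A rough sketch of the "rich prototype" combination under stated assumptions: regular grid cells stand in for real superpixels, and any k-means routine (for instance the extract_prototypes sketch earlier) supplies the complementary coarse-scale prototypes; the actual RPGM is more involved.

```python
import torch

def grid_region_prototypes(feat, mask, grid=4):
    """Fine-scale, superpixel-like prototypes: average the masked features
    inside each cell of a regular grid. feat: (C, H, W); mask: (H, W)."""
    C, H, W = feat.shape
    protos = []
    for i in range(grid):
        for j in range(grid):
            m = mask[i*H//grid:(i+1)*H//grid, j*W//grid:(j+1)*W//grid]
            f = feat[:, i*H//grid:(i+1)*H//grid, j*W//grid:(j+1)*W//grid]
            if m.sum() > 0:
                protos.append((f * m).sum((1, 2)) / m.sum())
    return torch.stack(protos) if protos else feat.new_zeros(0, C)

def rich_prototypes(feat, mask, kmeans_fn, k=3):
    """Concatenate fine grid-region prototypes with coarse k-means
    prototypes into one complementary set, as the summary describes."""
    fine = grid_region_prototypes(feat, mask)
    coarse = kmeans_fn(feat, mask, k)   # any k-means over masked features
    return torch.cat([fine, coarse], dim=0)
```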
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results on several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
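A hedged sketch of the nonparametric view: pixels are classified by their most similar non-learnable prototype rather than by learned classifier weights. The shapes and the max-over-prototypes scoring are assumptions.

```python
import torch
import torch.nn.functional as F

def nearest_prototype_segmentation(feat, class_protos):
    """Nonparametric pixel classification: each pixel takes the class of
    its most similar prototype (no learned classifier head).
    feat: (C, H, W); class_protos: (num_classes, P, C)."""
    C, H, W = feat.shape
    q = F.normalize(feat.flatten(1), dim=0)            # (C, HW)
    p = F.normalize(class_protos, dim=-1)              # (N, P, C)
    sim = torch.einsum('npc,ch->nph', p, q)            # (N, P, HW)
    class_score = sim.max(dim=1).values                # best prototype per class
    return class_score.argmax(dim=0).view(H, W)        # (H, W) label map
```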
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on the PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
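The inter-class/intra-class objective can be illustrated with an InfoNCE-style loss over prototype vectors; this is a generic stand-in, not the paper's dual formulation.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(protos, labels, tau=0.1):
    """Pull same-class prototypes together and push different-class
    prototypes apart. protos: (N, C); labels: (N,) class ids."""
    n = protos.shape[0]
    z = F.normalize(protos, dim=1)
    sim = z @ z.t() / tau                                # (N, N) similarities
    sim.fill_diagonal_(float('-inf'))                    # drop self-pairs
    pos = (labels[:, None] == labels[None, :]) \
          & ~torch.eye(n, dtype=torch.bool)              # same-class pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # mean log-probability of the positive pairs for each anchor
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()
```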
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
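The bias the summary points at: with a foreground prototype alone, "background" is just a low-similarity bucket. A minimal sketch of matching against an explicit background prototype as well; how SCNet actually builds its self-contrastive background prototypes is not shown here.

```python
import torch
import torch.nn.functional as F

def two_prototype_matching(qry_feat, fg_proto, bg_proto, tau=20.0):
    """Match every query pixel against both a foreground and a background
    prototype, so 'not foreground' is an explicit class.
    qry_feat: (C, H, W); fg_proto, bg_proto: (C,)."""
    C, H, W = qry_feat.shape
    protos = torch.stack([bg_proto, fg_proto])          # (2, C)
    q = F.normalize(qry_feat.flatten(1), dim=0)         # (C, HW)
    p = F.normalize(protos, dim=1)                      # (2, C)
    logits = tau * (p @ q)                              # (2, HW)
    return logits.softmax(dim=0)[1].view(H, W)          # P(foreground)
```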
- Deep Gaussian Processes for Few-Shot Segmentation [66.08463078545306]
Few-shot segmentation is a challenging task, requiring the extraction of a generalizable representation from only a few annotated samples.
We propose a few-shot learner formulation based on Gaussian process (GP) regression.
Our approach sets a new state-of-the-art for 5-shot segmentation, with mIoU scores of 68.1 and 49.8 on PASCAL-5i and COCO-20i, respectively.
arXiv Detail & Related papers (2021-03-30T17:56:32Z)
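A textbook GP-regression few-shot learner, not the paper's architecture: support pixel features are inputs, their mask values are targets, and the posterior mean at query pixels yields a soft mask. The kernel choice and noise level are assumptions.

```python
import torch

def rbf_kernel(a, b, gamma=0.5):
    # a: (N, C), b: (M, C) -> (N, M)
    return torch.exp(-gamma * torch.cdist(a, b) ** 2)

def gp_few_shot_mask(sup_feat, sup_labels, qry_feat, noise=0.1):
    """GP posterior mean over query pixels given support pixel labels.
    sup_feat: (N, C); sup_labels: (N,) in {0, 1}; qry_feat: (M, C)."""
    K = rbf_kernel(sup_feat, sup_feat)                   # (N, N)
    K_star = rbf_kernel(qry_feat, sup_feat)              # (M, N)
    alpha = torch.linalg.solve(
        K + noise * torch.eye(len(sup_feat)), sup_labels.float())
    return K_star @ alpha                                # (M,) soft mask
```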
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
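A toy stand-in for the graph-based enhancement: one similarity-weighted message-passing step over part prototypes (which could come from any clustering routine); the paper's actual graph neural network is learned.

```python
import torch
import torch.nn.functional as F

def enhance_part_prototypes(part_protos, alpha=0.5):
    """Smooth each part prototype toward the parts it resembles, via a
    soft adjacency built from prototype similarity. part_protos: (P, C)."""
    z = F.normalize(part_protos, dim=1)
    adj = (z @ z.t()).softmax(dim=1)         # soft adjacency between parts
    return (1 - alpha) * part_protos + alpha * adj @ part_protos
```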
- Prototype Refinement Network for Few-Shot Segmentation [6.777019450570474]
We propose a Prototype Refinement Network (PRNet) to address the challenge of few-shot segmentation.
It first learns to bidirectionally extract prototypes from both support and query images of the known classes.
PRNet outperforms existing methods by a large margin of 13.1% in the 1-shot setting.
arXiv Detail & Related papers (2020-02-10T07:06:09Z)
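A hedged sketch of the bidirectional idea: the support prototype segments the query, a prototype is re-extracted from the resulting soft mask, and the two are fused. The cosine decoder and the 0.5 fusion rule are assumptions.

```python
import torch
import torch.nn.functional as F

def cosine_mask(feat, proto, tau=20.0):
    # feat: (C, H, W), proto: (C,) -> soft mask (H, W)
    sim = F.cosine_similarity(feat, proto[:, None, None], dim=0)
    return torch.sigmoid(tau * sim)

def refine_prototype(sup_feat, sup_mask, qry_feat, steps=2):
    """Bidirectional prototype extraction: alternate between segmenting
    the query and re-extracting a prototype from the predicted mask.
    sup_feat, qry_feat: (C, H, W); sup_mask: (H, W)."""
    w = sup_mask.flatten().float()
    proto = (sup_feat.flatten(1) * w).sum(1) / w.sum().clamp(min=1e-6)
    for _ in range(steps):
        q_mask = cosine_mask(qry_feat, proto)              # (H, W)
        wq = q_mask.flatten()
        q_proto = (qry_feat.flatten(1) * wq).sum(1) / wq.sum().clamp(min=1e-6)
        proto = 0.5 * (proto + q_proto)                    # fuse both directions
    return proto, q_mask
```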