SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive
Background Prototypes
- URL: http://arxiv.org/abs/2104.09216v1
- Date: Mon, 19 Apr 2021 11:21:47 GMT
- Title: SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive
Background Prototypes
- Authors: Jiacheng Chen, Bin-Bin Gao, Zongqing Lu, Jing-Hao Xue, Chengjie Wang,
Qingmin Liao
- Abstract summary: Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only, and are therefore incomplete.
- Score: 56.387647750094466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot semantic segmentation aims to segment novel-class objects in a query
image with only a few annotated examples in support images. Most advanced
solutions exploit a metric learning framework that performs segmentation
by matching each pixel to a learned foreground prototype. However, this
framework suffers from biased classification due to incomplete construction of
sample pairs with the foreground prototype only. To address this issue, in this
paper, we introduce a complementary self-contrastive task into few-shot
semantic segmentation. Our new model is able to associate the pixels in a
region with the prototype of this region, whether they lie in the foreground
or background. To this end, we generate self-contrastive background prototypes
directly from the query image, with which we enable the construction of
complete sample pairs and thus a complementary and auxiliary segmentation task
to achieve the training of a better segmentation model. Extensive experiments
on PASCAL-5$^i$ and COCO-20$^i$ clearly demonstrate the superiority of our
proposal. At no expense of inference efficiency, our model achieves
state-of-the-art results in both 1-shot and 5-shot settings for few-shot
semantic segmentation.
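The pipeline the abstract describes can be sketched in a few lines: pool support features under the annotated mask into a foreground prototype, pool query features into a self-contrastive background prototype, and label each query pixel by its most cosine-similar prototype. The sketch below is illustrative only, not the authors' implementation: feature maps are toy arrays, and the query-background region used for the self-contrastive prototype is assumed known here, whereas SCNet derives it during training.

```python
import numpy as np

def masked_avg_pool(features, mask):
    """Average the (C, H, W) feature map over a binary (H, W) mask."""
    w = mask / (mask.sum() + 1e-8)
    return (features * w[None]).sum(axis=(1, 2))  # (C,)

def segment(query_feats, prototypes):
    """Label each query pixel with the index of its most cosine-similar prototype."""
    c, h, w = query_feats.shape
    flat = query_feats.reshape(c, -1)
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    protos = np.stack([p / (np.linalg.norm(p) + 1e-8) for p in prototypes])  # (K, C)
    return (protos @ flat).argmax(axis=0).reshape(h, w)

# Toy 2-channel features: foreground pixels look like [1, 0], background like [0, 1].
support_feats = np.zeros((2, 2, 2)); support_feats[1] = 1.0
support_feats[:, 0, 0] = [1.0, 0.0]
support_mask = np.array([[1.0, 0.0], [0.0, 0.0]])  # annotated support foreground

query_feats = np.zeros((2, 2, 2)); query_feats[1] = 1.0
query_feats[:, 1, 1] = [1.0, 0.0]  # the novel object sits at pixel (1, 1) in the query

fg_proto = masked_avg_pool(support_feats, support_mask)
# Self-contrastive background prototype, pooled directly from the query image
# (background region assumed given here, purely for illustration).
bg_proto = masked_avg_pool(query_feats, 1.0 - np.array([[0.0, 0.0], [0.0, 1.0]]))

pred = segment(query_feats, [bg_proto, fg_proto])  # 0 = background, 1 = foreground
```

Matching against both prototypes is what completes the sample pairs: every pixel, background included, now has a prototype to associate with, rather than only foreground pixels being pulled toward the foreground prototype.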
Related papers
- Correlation Weighted Prototype-based Self-Supervised One-Shot Segmentation of Medical Images [12.365801596593936]
Medical image segmentation is one of the domains where sufficient annotated data is not available.
We propose a prototype-based self-supervised one-way one-shot learning framework using pseudo-labels generated from superpixels.
We show that the proposed simple but potent framework performs on par with the state-of-the-art methods.
arXiv Detail & Related papers (2024-08-12T15:38:51Z)
- Leveraging GAN Priors for Few-Shot Part Segmentation [43.35150430895919]
Few-shot part segmentation aims to separate different parts of an object given only a few samples.
We propose to learn task-specific features in a "pre-training"-"fine-tuning" paradigm.
arXiv Detail & Related papers (2022-07-27T10:17:07Z)
- Beyond the Prototype: Divide-and-conquer Proxies for Few-shot Segmentation [63.910211095033596]
Few-shot segmentation aims to segment unseen-class objects given only a handful of densely labeled samples.
We propose a simple yet versatile framework in the spirit of divide-and-conquer.
Our proposed approach, named divide-and-conquer proxies (DCP), allows for the development of appropriate and reliable information.
arXiv Detail & Related papers (2022-04-21T06:21:14Z)
- A Simple Baseline for Zero-shot Semantic Segmentation with Pre-trained Vision-language Model [61.58071099082296]
It is unclear how to make zero-shot recognition work well on broader vision problems, such as object detection and semantic segmentation.
In this paper, we target zero-shot semantic segmentation by building on an off-the-shelf pre-trained vision-language model, i.e., CLIP.
Our experimental results show that this simple framework surpasses the previous state of the art by a large margin.
arXiv Detail & Related papers (2021-12-29T18:56:18Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- Semantically Meaningful Class Prototype Learning for One-Shot Image Semantic Segmentation [58.96902899546075]
One-shot semantic image segmentation aims to segment the object regions for the novel class with only one annotated image.
Recent works adopt the episodic training strategy to mimic the expected situation at testing time.
We propose to leverage the multi-class label information during the episodic training. It will encourage the network to generate more semantically meaningful features for each category.
arXiv Detail & Related papers (2021-02-22T12:07:35Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.