Self-Guided and Cross-Guided Learning for Few-Shot Segmentation
- URL: http://arxiv.org/abs/2103.16129v1
- Date: Tue, 30 Mar 2021 07:36:41 GMT
- Title: Self-Guided and Cross-Guided Learning for Few-Shot Segmentation
- Authors: Bingfeng Zhang, Jimin Xiao and Terry Qin
- Abstract summary: We propose a self-guided learning approach for few-shot segmentation.
By making an initial prediction for the annotated support image, the covered and uncovered foreground regions are encoded into primary and auxiliary support vectors, respectively.
By aggregating both primary and auxiliary support vectors, better segmentation performance is obtained on query images.
- Score: 12.899804391102435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot segmentation has been attracting a lot of attention due to its
effectiveness in segmenting unseen object classes with only a few annotated samples.
Most existing approaches use masked Global Average Pooling (GAP) to encode an
annotated support image to a feature vector to facilitate query image
segmentation. However, this pipeline unavoidably loses some discriminative
information due to the average operation. In this paper, we propose a simple
but effective self-guided learning approach, where the lost critical
information is mined. Specifically, through making an initial prediction for
the annotated support image, the covered and uncovered foreground regions are
encoded into primary and auxiliary support vectors using masked GAP,
respectively. By aggregating both primary and auxiliary support vectors, better
segmentation performance is obtained on query images. Inspired by our
self-guided module for 1-shot segmentation, we propose a cross-guided module
for multiple shot segmentation, where the final mask is fused using predictions
from multiple annotated samples with high-quality support vectors contributing
more and vice versa. This module improves the final prediction in the inference
stage without re-training. Extensive experiments show that our approach
achieves new state-of-the-art performance on both the PASCAL-5i and COCO-20i
datasets.
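To make the masked-GAP and self-guided idea in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' released code; tensor shapes, the binary-mask inputs, and the function names are illustrative assumptions.

```python
import torch

def masked_gap(feat, mask, eps=1e-5):
    """Masked Global Average Pooling.
    feat: (C, H, W) support feature map; mask: (H, W) binary mask."""
    area = mask.sum().clamp(min=eps)                    # avoid division by zero for empty masks
    return (feat * mask.unsqueeze(0)).sum(dim=(1, 2)) / area

def self_guided_vectors(feat, gt_mask, init_pred):
    """Split the annotated support foreground into the part an initial
    prediction covers (primary) and the part it misses (auxiliary),
    then encode each region with masked GAP."""
    covered = gt_mask * init_pred                       # foreground recovered by the initial prediction
    uncovered = gt_mask * (1.0 - init_pred)             # foreground lost by the initial prediction
    v_primary = masked_gap(feat, covered)
    v_auxiliary = masked_gap(feat, uncovered)
    return v_primary, v_auxiliary                       # both (C,) support vectors
```

In the paper, both vectors are then aggregated with the query features to produce the final query prediction; that aggregation is part of the proposed network and is not reproduced here.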
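The cross-guided module for multi-shot inference can be sketched in the same spirit: each annotated support sample yields its own query prediction, and predictions backed by higher-quality support vectors receive larger weights in the fused mask. The softmax weighting and the quality proxy below are illustrative assumptions rather than the paper's exact scoring.

```python
def cross_guided_fusion(query_preds, quality_scores):
    """query_preds: list of K (H, W) query probability maps, one per support shot.
    quality_scores: list of K floats, e.g. how well each support vector
    segments its own annotated support image (an assumed quality proxy)."""
    w = torch.softmax(torch.tensor(quality_scores), dim=0)  # higher quality -> larger weight
    stacked = torch.stack(query_preds)                      # (K, H, W)
    return (w.view(-1, 1, 1) * stacked).sum(dim=0)          # weighted fused prediction
```

Because this fusion happens purely at inference time, it improves multi-shot results without any re-training, as the abstract notes.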
Related papers
- Boosting Few-Shot Segmentation via Instance-Aware Data Augmentation and
Local Consensus Guided Cross Attention [7.939095881813804]
Few-shot segmentation aims to train a segmentation model that can fast adapt to a novel task for which only a few annotated images are provided.
We introduce an instance-aware data augmentation (IDA) strategy that augments the support images based on the relative sizes of the target objects.
The proposed IDA effectively increases the support set's diversity and promotes the distribution consistency between support and query images.
arXiv Detail & Related papers (2024-01-18T10:29:10Z) - Self-supervised Few-shot Learning for Semantic Segmentation: An
Annotation-free Approach [4.855689194518905]
Few-shot semantic segmentation (FSS) offers immense potential in the field of medical image analysis.
Existing FSS techniques heavily rely on annotated semantic classes, rendering them unsuitable for medical images.
We propose a novel self-supervised FSS framework that does not rely on any annotation. Instead, it adaptively estimates the query mask by leveraging the eigenvectors obtained from the support images.
arXiv Detail & Related papers (2023-07-26T18:33:30Z) - Reflection Invariance Learning for Few-shot Semantic Segmentation [53.20466630330429]
Few-shot semantic segmentation (FSS) aims to segment objects of unseen classes in query images with only a few annotated support images.
This paper proposes a fresh few-shot segmentation framework to mine the reflection invariance in a multi-view matching manner.
Experiments on both PASCAL-$5^i$ and COCO-$20^i$ datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-01T15:14:58Z) - Progressively Dual Prior Guided Few-shot Semantic Segmentation [57.37506990980975]
The few-shot semantic segmentation task aims at performing segmentation in query images with only a few annotated support samples.
We propose a progressively dual prior guided few-shot semantic segmentation network.
arXiv Detail & Related papers (2022-11-20T16:19:47Z) - Beyond the Prototype: Divide-and-conquer Proxies for Few-shot
Segmentation [63.910211095033596]
Few-shot segmentation aims to segment unseen-class objects given only a handful of densely labeled samples.
We propose a simple yet versatile framework in the spirit of divide-and-conquer.
Our proposed approach, named divide-and-conquer proxies (DCP), allows for the development of appropriate and reliable information as a guide for query segmentation.
arXiv Detail & Related papers (2022-04-21T06:21:14Z) - Boosting Few-shot Semantic Segmentation with Transformers [81.43459055197435]
We propose a TRansformer-based Few-shot Semantic segmentation method (TRFS).
Our model consists of two modules: a Global Enhancement Module (GEM) and a Local Enhancement Module (LEM).
arXiv Detail & Related papers (2021-08-04T20:09:21Z) - SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive
Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification due to the incomplete construction of sample pairs, which use only the foreground prototype.
arXiv Detail & Related papers (2021-04-19T11:21:47Z) - Self-Supervised Tuning for Few-Shot Segmentation [82.32143982269892]
Few-shot segmentation aims at assigning a category label to each image pixel with few annotated samples.
Existing meta-learning methods tend to fail to generate category-specific discriminative descriptors when the visual features extracted from support images are marginalized in the embedding space.
This paper presents an adaptive tuning framework, in which the distribution of latent features across different episodes is dynamically adjusted based on a self-segmentation scheme.
arXiv Detail & Related papers (2020-04-12T03:53:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.