MSANet: Multi-Similarity and Attention Guidance for Boosting Few-Shot
Segmentation
- URL: http://arxiv.org/abs/2206.09667v1
- Date: Mon, 20 Jun 2022 09:14:17 GMT
- Title: MSANet: Multi-Similarity and Attention Guidance for Boosting Few-Shot
Segmentation
- Authors: Ehtesham Iqbal, Sirojbek Safarov, Seongdeok Bang
- Abstract summary: Few-shot segmentation aims to segment unseen-class objects given only a handful of densely labeled samples.
Prototype learning, where the support feature yields a single or several prototypes, has been widely used in FSS.
We propose a Multi-Similarity and Attention Network (MSANet) including two novel modules, a multi-similarity module and an attention module.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Few-shot segmentation aims to segment unseen-class objects given only a
handful of densely labeled samples. Prototype learning, where the support
feature yields a single or several prototypes by averaging global and local
object information, has been widely used in FSS. However, utilizing only
prototype vectors may be insufficient to represent the features of all
training data. To extract abundant features and make more precise predictions,
we propose a Multi-Similarity and Attention Network (MSANet) including two
novel modules, a multi-similarity module and an attention module. The
multi-similarity module exploits multiple feature-maps of support images and
query images to estimate accurate semantic relationships. The attention module
instructs the network to concentrate on class-relevant information. The network
is tested on standard FSS datasets, PASCAL-5i 1-shot, PASCAL-5i 5-shot,
COCO-20i 1-shot, and COCO-20i 5-shot. The MSANet with the backbone of
ResNet-101 achieves state-of-the-art performance on all four benchmark
datasets, with mean intersection over union (mIoU) scores of 69.13%, 73.99%, 51.09%,
56.80%, respectively. Code is available at
https://github.com/AIVResearch/MSANet
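As a rough illustration of the pipeline the abstract describes (a prototype obtained by masked average pooling of the support features, followed by cosine-similarity maps between query features and that prototype at several backbone levels), here is a minimal PyTorch sketch. It is not the authors' implementation: the function names, level choices, and output fusion are assumptions, and the attention module is omitted; see the repository above for the real code.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Average a support feature map over its labeled foreground region.

    feat: (B, C, H, W) support features; mask: (B, 1, h, w) binary mask.
    Returns a (B, C) class prototype.
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear",
                         align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def multi_similarity(query_feats, support_feats, mask, out_size):
    """Cosine-similarity maps between query features and support prototypes
    at several backbone levels, resized to a common size and stacked.

    query_feats / support_feats: lists of (B, C_l, H_l, W_l) tensors.
    Returns (B, L, H, W) similarity maps, one channel per level.
    """
    sims = []
    for q, s in zip(query_feats, support_feats):
        proto = masked_average_pooling(s, mask)                  # (B, C_l)
        sim = F.cosine_similarity(q, proto[..., None, None], dim=1)
        sims.append(F.interpolate(sim[:, None], size=out_size,
                                  mode="bilinear", align_corners=False))
    return torch.cat(sims, dim=1)

# Toy usage with two backbone levels and a single support image:
q_feats = [torch.randn(1, 256, 60, 60), torch.randn(1, 512, 30, 30)]
s_feats = [torch.randn(1, 256, 60, 60), torch.randn(1, 512, 30, 30)]
support_mask = torch.ones(1, 1, 473, 473)
sim_maps = multi_similarity(q_feats, s_feats, support_mask, out_size=(60, 60))
print(sim_maps.shape)  # torch.Size([1, 2, 60, 60])
```

In the paper's terms, these stacked similarity maps, together with attention-reweighted features, would presumably feed a decoder that predicts the query mask.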
Related papers
- Matching Anything by Segmenting Anything [109.2507425045143]
We propose MASA, a novel method for robust instance association learning.
MASA learns instance-level correspondence through exhaustive data transformations.
We show that MASA achieves even better performance than state-of-the-art methods trained with fully annotated in-domain video sequences.
arXiv Detail & Related papers (2024-06-06T16:20:07Z)
- Hierarchical Dense Correlation Distillation for Few-Shot Segmentation - Extended Abstract [47.85056124410376]
Few-shot semantic segmentation (FSS) aims to build class-agnostic models that segment unseen classes with only a handful of annotations.
We design a Hierarchically Decoupled Matching Network (HDMNet) that mines pixel-level support correlations on top of a transformer architecture.
We propose a matching module that reduces train-set overfitting and introduce correlation distillation, leveraging semantic correspondence from coarse resolution to boost fine-grained segmentation.
arXiv Detail & Related papers (2023-06-27T08:10:20Z)
- Contrastive Enhancement Using Latent Prototype for Few-Shot Segmentation [8.986743262828009]
Few-shot segmentation enables the model to recognize unseen classes with few annotated examples.
This paper proposes a contrastive enhancement approach using latent prototypes to leverage latent classes.
Our approach remarkably improves the performance of state-of-the-art methods for 1-shot and 5-shot segmentation.
arXiv Detail & Related papers (2022-03-08T14:02:32Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- MFNet: Multi-class Few-shot Segmentation Network with Pixel-wise Metric Learning [34.059257121606336]
This work focuses on few-shot semantic segmentation, which is still a largely unexplored field.
We first present a novel multi-way encoding and decoding architecture which effectively fuses multi-scale query information and multi-class support information into one query-support embedding.
Experiments on standard benchmarks PASCAL-5i and COCO-20i show clear benefits of our method over the state of the art in few-shot segmentation.
arXiv Detail & Related papers (2021-10-30T11:37:36Z)
- Learning Meta-class Memory for Few-Shot Semantic Segmentation [90.28474742651422]
We introduce the concept of meta-class, which is the meta information shareable among all classes.
We propose a novel Meta-class Memory based few-shot segmentation method (MM-Net), which introduces a set of learnable memory embeddings (a simplified memory-read sketch appears after this list).
Our proposed MM-Net achieves 37.5% mIoU on the COCO dataset in the 1-shot setting, which is 5.1% higher than the previous state-of-the-art.
arXiv Detail & Related papers (2021-08-06T06:29:59Z)
- Boosting Few-shot Semantic Segmentation with Transformers [81.43459055197435]
We propose a TRansformer-based Few-shot Semantic segmentation method (TRFS).
Our model consists of two modules: a Global Enhancement Module (GEM) and a Local Enhancement Module (LEM).
arXiv Detail & Related papers (2021-08-04T20:09:21Z)
- Deep Gaussian Processes for Few-Shot Segmentation [66.08463078545306]
Few-shot segmentation is a challenging task, requiring the extraction of a generalizable representation from only a few annotated samples.
We propose a few-shot learner formulation based on Gaussian process (GP) regression (a bare-bones GP sketch appears after this list).
Our approach sets a new state-of-the-art for 5-shot segmentation, with mIoU scores of 68.1 and 49.8 on PASCAL-5i and COCO-20i, respectively.
arXiv Detail & Related papers (2021-03-30T17:56:32Z)
- Objectness-Aware Few-Shot Semantic Segmentation [31.13009111054977]
We show how to increase overall model capacity to achieve improved performance.
We introduce objectness, which is class-agnostic and so not prone to overfitting.
Given only one annotated example of an unseen category, experiments show that our method outperforms state-of-the-art methods with respect to mIoU.
arXiv Detail & Related papers (2020-04-06T19:12:08Z)
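For the meta-class memory entry (MM-Net) in the list above, the basic idea of reading a bank of learnable memory embeddings with attention can be sketched as follows. This is a simplified, hypothetical read step rather than the paper's exact design: the slot count, dot-product scaling, and residual fusion are assumptions.

```python
import torch
import torch.nn as nn

class MemoryRead(nn.Module):
    """Attend from pixel features to a bank of learnable memory slots.

    Hypothetical simplified read: each pixel feature is augmented with a
    softmax-weighted sum of memory embeddings (dot-product attention).
    """
    def __init__(self, dim=256, num_slots=50):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_slots, dim))

    def forward(self, feat):                               # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        pixels = feat.flatten(2).transpose(1, 2)           # (B, HW, C)
        attn = torch.softmax(pixels @ self.memory.t() / c ** 0.5, dim=-1)
        read = attn @ self.memory                          # (B, HW, C)
        return (pixels + read).transpose(1, 2).reshape(b, c, h, w)
```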
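For the deep Gaussian process entry, a bare-bones version of the underlying idea, GP regression from labeled support pixels to query pixels, looks like this. The RBF kernel, noise term, and raw pixel-level regression are illustrative assumptions; the paper's actual formulation is more elaborate and learns the features end-to-end.

```python
import torch

def rbf_kernel(a, b, lengthscale=1.0):
    """RBF kernel between feature sets: (N, C) x (M, C) -> (N, M)."""
    return torch.exp(-0.5 * torch.cdist(a, b) ** 2 / lengthscale ** 2)

def gp_predict(support_feat, support_label, query_feat, noise=1e-2):
    """Posterior mean of a GP regressor over query pixels.

    support_feat: (N, C) support pixel features
    support_label: (N,) binary foreground labels
    query_feat: (M, C) query pixel features
    Returns (M,) soft foreground scores for the query pixels.
    """
    n = support_feat.shape[0]
    k_ss = rbf_kernel(support_feat, support_feat) + noise * torch.eye(n)
    k_qs = rbf_kernel(query_feat, support_feat)
    alpha = torch.linalg.solve(k_ss, support_label.float().unsqueeze(1))
    return (k_qs @ alpha).squeeze(1)

# Toy usage: 100 labeled support pixels, 500 query pixels, 64-dim features
scores = gp_predict(torch.randn(100, 64), torch.randint(0, 2, (100,)),
                    torch.randn(500, 64))
```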