Anti-aliasing Semantic Reconstruction for Few-Shot Semantic Segmentation
- URL: http://arxiv.org/abs/2106.00184v1
- Date: Tue, 1 Jun 2021 02:17:36 GMT
- Title: Anti-aliasing Semantic Reconstruction for Few-Shot Semantic Segmentation
- Authors: Binghao Liu and Yao Ding and Jianbin Jiao and Xiangyang Ji and Qixiang Ye
- Abstract summary: We reformulate few-shot segmentation as a semantic reconstruction problem.
We convert base class features into a series of basis vectors which span a class-level semantic space for novel class reconstruction.
Our proposed approach, referred to as anti-aliasing semantic reconstruction (ASR), provides a systematic yet interpretable solution for few-shot learning problems.
- Score: 66.85202434812942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Encouraging progress in few-shot semantic segmentation has been made by
leveraging features learned upon base classes with sufficient training data to
represent novel classes with few-shot examples. However, this feature sharing
mechanism inevitably causes semantic aliasing between novel classes when they
have similar compositions of semantic concepts. In this paper, we reformulate
few-shot segmentation as a semantic reconstruction problem, and convert base
class features into a series of basis vectors which span a class-level semantic
space for novel class reconstruction. By introducing contrastive loss, we
maximize the orthogonality of basis vectors while minimizing semantic aliasing
between classes. Within the reconstructed representation space, we further
suppress interference from other classes by projecting query features to the
support vector for precise semantic activation. Our proposed approach, referred
to as anti-aliasing semantic reconstruction (ASR), provides a systematic yet
interpretable solution for few-shot learning problems. Extensive experiments on
PASCAL VOC and MS COCO datasets show that ASR achieves strong results compared
with prior works.
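The abstract reduces to a few concrete operations: base-class features define basis vectors, a contrastive term keeps those vectors near-orthogonal, novel-class features are reconstructed in their span, and query features are projected onto the support vector for activation. Below is a minimal PyTorch sketch of these steps; the tensor shapes, function names, and loss form are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the ASR ideas above; shapes and loss form are assumptions.
import torch
import torch.nn.functional as F

def orthogonality_contrastive_loss(basis: torch.Tensor) -> torch.Tensor:
    """Encourage the base-class basis vectors to be mutually orthogonal.

    basis: (num_base_classes, dim) class-level semantic basis vectors.
    """
    basis = F.normalize(basis, dim=-1)                  # unit-norm vectors
    gram = basis @ basis.t()                            # pairwise cosine similarities
    off_diag = gram - torch.eye(len(basis), device=basis.device)
    return (off_diag ** 2).mean()                       # push cross terms toward zero

def reconstruct(features: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Reconstruct novel-class features in the span of the base-class basis.

    features: (N, dim); basis: (K, dim) with roughly orthonormal rows,
    so reconstruction coefficients reduce to inner products.
    """
    basis = F.normalize(basis, dim=-1)
    coeffs = features @ basis.t()                       # (N, K) coefficients
    return coeffs @ basis                               # (N, dim) reconstruction

def semantic_activation(query: torch.Tensor, support_vec: torch.Tensor) -> torch.Tensor:
    """Project per-pixel query features onto the support vector to activate
    the target class and suppress interference from other classes.

    query: (H*W, dim); support_vec: (dim,). Returns a (H*W,) activation map.
    """
    s = F.normalize(support_vec, dim=-1)
    return query @ s
```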
Related papers
- Semantic Enhanced Few-shot Object Detection [37.715912401900745]
We propose a fine-tuning based FSOD framework that utilizes semantic embeddings for better detection.
Our method allows each novel class to construct a compact feature space without being confused with similar base classes.
arXiv Detail & Related papers (2024-06-19T12:40:55Z)
- Learning to Detour: Shortcut Mitigating Augmentation for Weakly Supervised Semantic Segmentation [7.5856806269316825]
Weakly supervised semantic segmentation (WSSS) employing weak forms of labels has been actively studied to alleviate the annotation cost of acquiring pixel-level labels.
We propose shortcut mitigating augmentation (SMA) for WSSS, which generates synthetic representations of object-background combinations not seen in the training data to reduce the use of shortcut features.
arXiv Detail & Related papers (2024-05-28T13:07:35Z)
- Spatial Semantic Recurrent Mining for Referring Image Segmentation [63.34997546393106]
We propose S$^2$RM to achieve high-quality cross-modality fusion.
It follows a three-stage working strategy: distributing language features, spatial semantic recurrent co-parsing, and parsed-semantic balancing.
Our proposed method performs favorably against other state-of-the-art algorithms.
arXiv Detail & Related papers (2024-05-15T00:17:48Z)
- Reflection Invariance Learning for Few-shot Semantic Segmentation [53.20466630330429]
Few-shot semantic segmentation (FSS) aims to segment objects of unseen classes in query images with only a few annotated support images.
This paper proposes a fresh few-shot segmentation framework to mine the reflection invariance in a multi-view matching manner.
Experiments on both PASCAL-$5^i$ and COCO-$20^i$ datasets demonstrate the effectiveness of our approach (a sketch of the multi-view matching idea follows this entry).
arXiv Detail & Related papers (2023-06-01T15:14:58Z)
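As a rough illustration of the multi-view matching idea in the entry above, the sketch below scores a query against prototypes computed from both the original and the horizontally flipped support view, then fuses the two similarity maps. The masked-average prototype, the shapes, and the mean fusion are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch: reflection-invariant matching via two support views.
import torch
import torch.nn.functional as F

def masked_prototype(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """feat: (C, H, W) support features; mask: (H, W) binary float mask."""
    return (feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)

def reflection_invariant_similarity(query_feat, support_feat, support_mask):
    """query_feat: (C, H, W). Returns a (H, W) cosine-similarity map fused
    across the original and horizontally flipped support views."""
    proto = masked_prototype(support_feat, support_mask)
    proto_flip = masked_prototype(torch.flip(support_feat, dims=[2]),
                                  torch.flip(support_mask, dims=[1]))
    q = F.normalize(query_feat, dim=0)                  # normalize channels
    sims = [F.normalize(p, dim=0).view(-1, 1, 1).mul(q).sum(dim=0)
            for p in (proto, proto_flip)]               # two (H, W) maps
    return torch.stack(sims).mean(dim=0)                # fuse the two views
```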
- Advancing Incremental Few-shot Semantic Segmentation via Semantic-guided Relation Alignment and Adaptation [98.51938442785179]
Incremental few-shot semantic segmentation aims to incrementally extend a semantic segmentation model to novel classes.
This task faces a severe semantic-aliasing issue between base and novel classes due to data imbalance.
We propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method that fully considers the guidance of prior semantic information.
arXiv Detail & Related papers (2023-05-18T10:40:52Z)
- Semantics-Aware Dynamic Localization and Refinement for Referring Image Segmentation [102.25240608024063]
Referring image segmentation segments the image region described by a language expression.
We develop an algorithm that shifts from being localization-centric to segmentation-centric.
Compared to its counterparts, our method is more versatile and effective.
arXiv Detail & Related papers (2023-03-11T08:42:40Z)
- Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic Segmentation [40.09476732999614]
Mask proposal models have significantly improved the performance of zero-shot semantic segmentation.
The use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels.
This paper proposes novel class enhancement losses that bypass the background embedding during training and simultaneously exploit the semantic relationship between text embeddings and mask proposals by ranking their similarity scores (a sketch of this ranking idea follows this entry).
arXiv Detail & Related papers (2023-01-18T06:55:02Z)
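As a hedged illustration of the ranking idea in the entry above, the sketch below scores mask-proposal embeddings only against seen-class text embeddings (no background row) and trains the matching class to rank first. The shapes, the temperature, and the cross-entropy form are assumptions, not the paper's exact losses.

```python
# Hedged sketch: rank text/proposal similarities without a background embedding.
import torch
import torch.nn.functional as F

def class_enhancement_loss(proposal_emb: torch.Tensor,
                           text_emb: torch.Tensor,
                           targets: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """proposal_emb: (P, dim) mask-proposal embeddings;
    text_emb: (C, dim) seen-class text embeddings (no background row);
    targets: (P,) index of the matching class for each proposal."""
    sim = F.normalize(proposal_emb, dim=-1) @ F.normalize(text_emb, dim=-1).t()
    # Softmax over classes pushes the correct class above all others;
    # unseen classes are never forced toward a background label.
    return F.cross_entropy(sim / temperature, targets)
```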
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space (see the sketch after this entry).
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on the PASCAL-$5^i$ and COCO-$20^i$ datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
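The entry above amounts to a contrastive recipe at the prototype level: pull prototypes of the same class together, push prototypes of different classes apart. The sketch below is one hedged way to express that with a supervised-contrastive-style loss; the temperature and normalization are illustrative assumptions rather than the paper's exact dual formulation.

```python
# Hedged sketch: prototype-level contrastive loss over a batch of episodes.
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(protos: torch.Tensor,
                               labels: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """protos: (N, dim) prototypes from a batch of episodes;
    labels: (N,) class id of each prototype."""
    z = F.normalize(protos, dim=-1)
    logits = z @ z.t() / temperature                    # pairwise similarities
    logits.fill_diagonal_(float('-inf'))                # exclude self-pairs
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # intra-class pair mask
    same.fill_diagonal_(False)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Maximize likelihood of intra-class pairs relative to all pairs,
    # which shrinks intra-class distance and grows inter-class distance.
    per_anchor = (log_prob * same).sum(1) / same.sum(1).clamp(min=1)
    return -per_anchor[same.any(1)].mean()              # anchors with positives
```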