Zero-Shot Semantic Segmentation via Spatial and Multi-Scale Aware Visual
Class Embedding
- URL: http://arxiv.org/abs/2111.15181v1
- Date: Tue, 30 Nov 2021 07:39:19 GMT
- Title: Zero-Shot Semantic Segmentation via Spatial and Multi-Scale Aware Visual
Class Embedding
- Authors: Sungguk Cha and Yooseung Wang
- Abstract summary: We propose a language-model-free zero-shot semantic segmentation framework, the Spatial and Multi-scale aware Visual Class Embedding Network (SM-VCENet).
In experiments, our SM-VCENet outperforms the zero-shot semantic segmentation state of the art by a relative margin.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fully supervised semantic segmentation technologies bring a paradigm
shift in scene understanding. However, the burden of expensive labeling costs
remains a challenge. To address this cost problem, recent studies proposed
language-model-based zero-shot semantic segmentation (L-ZSSS) approaches. In this
paper, we argue that L-ZSSS is limited in generalization, which is a core virtue
of zero-shot learning. To tackle this limitation, we propose a language-model-free
zero-shot semantic segmentation framework, the Spatial and Multi-scale aware
Visual Class Embedding Network (SM-VCENet). Leveraging a vision-oriented class
embedding, SM-VCENet enriches the visual information of the class embedding
through multi-scale attention and spatial attention. We also propose a novel
benchmark (PASCAL2COCO) for zero-shot semantic segmentation, which provides a
generalization evaluation via domain adaptation and contains visually challenging
samples. In experiments, our SM-VCENet outperforms the zero-shot semantic
segmentation state of the art by a relative margin on the PASCAL-5i benchmark and
shows generalization robustness on the PASCAL2COCO benchmark.
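As a rough illustration of the vision-oriented class embedding described above, the following minimal PyTorch-style sketch builds a class embedding from a masked support feature map and enriches it with spatial attention and multi-scale pooling. The layer names, tensor shapes, and the exact way the two attentions are combined are assumptions for illustration, not the paper's published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualClassEmbedding(nn.Module):
    """Hypothetical sketch of a language-model-free visual class embedding,
    enriched by spatial attention and multi-scale attention (assumed design)."""

    def __init__(self, dim=256, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.spatial_attn = nn.Conv2d(dim, 1, kernel_size=1)      # per-pixel attention logits
        self.scale_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in scales])

    def forward(self, support_feat, support_mask):
        # support_feat: (B, C, H, W) features of the support image
        # support_mask: (B, 1, H, W) binary mask of the target class (assumed non-empty)
        # spatial attention: weight pixels inside the class mask
        attn = self.spatial_attn(support_feat)                     # (B, 1, H, W)
        attn = attn.masked_fill(support_mask == 0, float("-inf"))
        attn = torch.softmax(attn.flatten(2), dim=-1).view_as(attn)
        class_emb = (support_feat * attn).flatten(2).sum(-1)       # (B, C)

        # multi-scale attention: pool the masked features at several scales
        multi = []
        for s, proj in zip(self.scales, self.scale_proj):
            pooled = F.adaptive_avg_pool2d(support_feat * support_mask, s)  # (B, C, s, s)
            multi.append(proj(pooled.flatten(2).mean(-1)))                  # (B, C)
        class_emb = class_emb + torch.stack(multi, dim=0).mean(0)
        return class_emb  # vision-oriented class embedding, no language model involved
```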
Related papers
- Training-Free Semantic Segmentation via LLM-Supervision [37.9007813884699]
This paper introduces a new approach to text-supervised semantic segmentation using supervision from a large language model (LLM).
Our method starts by using an LLM to generate a detailed set of subclasses for more accurate class representation.
We then employ an advanced text-supervised semantic segmentation model to apply the generated subclasses as target labels.
arXiv Detail & Related papers (2024-03-31T14:37:25Z)
- Language-Driven Visual Consensus for Zero-Shot Semantic Segmentation [114.72734384299476]
We propose a Language-Driven Visual Consensus (LDVC) approach, fostering improved alignment of semantic and visual information.
We leverage class embeddings as anchors due to their discrete and abstract nature, steering vision features toward class embeddings.
Our approach significantly boosts the capacity of segmentation models for unseen classes.
arXiv Detail & Related papers (2024-03-13T11:23:55Z)
- Semantic-aware SAM for Point-Prompted Instance Segmentation [29.286913777078116]
In this paper, we introduce a cost-effective category-specific segmenter using the Segment Anything Model (SAM).
To tackle this challenge, we have devised a Semantic-Aware Instance Network (SAPNet) that integrates Multiple Instance Learning (MIL) with matching capability and SAM with point prompts.
SAPNet strategically selects the most representative mask proposals generated by SAM to supervise segmentation, with a specific focus on object category information.
arXiv Detail & Related papers (2023-12-26T05:56:44Z)
- Open-Vocabulary Segmentation with Semantic-Assisted Calibration [73.39366775301382]
We study open-vocabulary segmentation (OVS) by calibrating the in-vocabulary and domain-biased embedding space with the contextual prior of CLIP.
We present a Semantic-assisted CAlibration Network (SCAN) to achieve state-of-the-art performance on open-vocabulary segmentation benchmarks.
arXiv Detail & Related papers (2023-12-07T07:00:09Z)
- CLIP Is Also a Good Teacher: A New Learning Framework for Inductive Zero-shot Semantic Segmentation [6.181169909576527]
Generalized Zero-shot Semantic Segmentation aims to segment both seen and unseen categories under the supervision of the seen ones only.
Existing methods adopt large-scale Vision-Language Models (VLMs), which obtain outstanding zero-shot performance.
We propose CLIP-ZSS (Zero-shot Semantic Segmentation), a training framework that enables any image encoder designed for closed-set segmentation to be applied to zero-shot and open-vocabulary tasks.
arXiv Detail & Related papers (2023-10-03T09:33:47Z)
- Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment [53.2701026843921]
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification.
In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary.
We propose the Self Structural Semantic Alignment (S3A) framework, which extracts structural semantic information from unlabeled data while simultaneously self-learning.
arXiv Detail & Related papers (2023-08-24T17:56:46Z)
- Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages an existing pretrained vision-language (VL) model to train semantic segmentation models.
ZeroSeg achieves this by distilling the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image.
Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data.
arXiv Detail & Related papers (2023-06-01T08:47:06Z)
- A Simple Baseline for Zero-shot Semantic Segmentation with Pre-trained Vision-language Model [61.58071099082296]
It is unclear how to make zero-shot recognition work well on broader vision problems, such as object detection and semantic segmentation.
In this paper, we target zero-shot semantic segmentation by building on an off-the-shelf pre-trained vision-language model, i.e., CLIP (see the generic sketch after this list).
Our experimental results show that this simple framework surpasses previous state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-12-29T18:56:18Z)
- Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish objects from the background and handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
arXiv Detail & Related papers (2021-12-03T13:31:59Z)
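Several of the language-model-based approaches listed above (for example, the simple CLIP baseline) share a common recipe: encode class names with CLIP's text encoder to obtain class embeddings, then assign visual features to the nearest embedding. The sketch below is a generic, hedged illustration of that recipe, not any single paper's exact pipeline; the prompt template, the example label set, and the way the visual features are obtained are all assumptions.

```python
import torch
import clip  # OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Example label set and prompt template: both are illustrative assumptions.
class_names = ["cat", "dog", "person", "background"]
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

with torch.no_grad():
    text_emb = model.encode_text(prompts)                      # (K, D) language-based class embeddings
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# pixel_feat: (N, D) visual features already projected into CLIP's embedding space,
# e.g. pooled features of N mask proposals; how they are obtained is method-specific,
# so random features stand in here purely to keep the example runnable.
pixel_feat = torch.randn(10, text_emb.shape[-1], device=device)
pixel_feat = pixel_feat / pixel_feat.norm(dim=-1, keepdim=True)

logits = pixel_feat @ text_emb.t().float()                     # cosine similarity, shape (N, K)
pred = logits.argmax(dim=-1)                                    # predicted class index per proposal
print([class_names[i] for i in pred.tolist()])
```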