Open-world Semantic Segmentation via Contrasting and Clustering
Vision-Language Embedding
- URL: http://arxiv.org/abs/2207.08455v2
- Date: Tue, 19 Jul 2022 14:43:47 GMT
- Title: Open-world Semantic Segmentation via Contrasting and Clustering
Vision-Language Embedding
- Authors: Quande Liu, Youpeng Wen, Jianhua Han, Chunjing Xu, Hang Xu, Xiaodan
Liang
- Abstract summary: We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense-annotation effort.
On three benchmark datasets, our method directly segments objects of arbitrary categories and outperforms zero-shot segmentation methods that require data labeling.
- Score: 95.78002228538841
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To bridge the gap between supervised semantic segmentation and real-world
applications that require a single model to recognize arbitrary new concepts,
recent zero-shot segmentation work has attracted much attention by exploring the
relationships between unseen and seen object categories, yet it still requires
large amounts of densely annotated data covering diverse base classes. In this
paper, we propose a new open-world semantic segmentation pipeline that makes the
first attempt to learn to segment semantic objects of various open-world
categories without any dense-annotation effort, by purely exploiting the
image-caption data that naturally exists on the Internet. Our method,
Vision-language-driven Semantic Segmentation (ViL-Seg), employs an image encoder
and a text encoder to generate visual and text embeddings for the image-caption
data, with two core components that endow its segmentation ability. First, the
image encoder is jointly trained with a vision-based contrasting objective and a
cross-modal contrasting objective, which encourage the visual embeddings to
preserve both the fine-grained semantics and the high-level category information
that are crucial for the segmentation task. Second, an online clustering head is
devised over the image encoder, which dynamically segments the visual embeddings
into distinct semantic groups so that they can be classified by comparison with
various text embeddings, completing our segmentation pipeline. Experiments show
that without using any densely annotated data, our method can directly segment
objects of arbitrary categories, outperforming, on three benchmark datasets,
zero-shot segmentation methods that require data labeling.
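The two components above can be pictured with a short, self-contained sketch. This is not ViL-Seg's implementation: the tensor shapes, the temperature value, and the use of plain k-means in place of the paper's online clustering head are simplifying assumptions made purely for illustration.

```python
# Hedged sketch of the two ideas: (1) a cross-modal (image-caption) contrastive
# loss and (2) "cluster the visual embeddings, then name each cluster by its
# closest class text embedding". Shapes and the k-means stand-in are assumptions.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """InfoNCE over a batch of paired image/caption embeddings, both (B, D)."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature                  # (B, B) similarities
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric loss: match each image to its caption and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def cluster_and_label(pixel_emb, class_txt_emb, n_clusters=8, iters=10):
    """Group per-pixel embeddings (N, D) into clusters, then label each cluster
    with its most similar class text embedding (C, D). Returns (N,) class ids."""
    x = F.normalize(pixel_emb, dim=-1)
    centers = x[torch.randperm(x.size(0))[:n_clusters]].clone()  # naive init
    for _ in range(iters):                                # plain k-means updates
        assign = (x @ centers.t()).argmax(dim=1)
        for k in range(n_clusters):
            mask = assign == k
            if mask.any():
                centers[k] = F.normalize(x[mask].mean(dim=0), dim=-1)
    assign = (x @ centers.t()).argmax(dim=1)              # final cluster per pixel
    cls_of_center = (centers @ F.normalize(class_txt_emb, dim=-1).t()).argmax(dim=1)
    return cls_of_center[assign]
```

In this sketch the contrastive loss stands in for the training-time alignment of visual and text embeddings, while `cluster_and_label` mimics the inference-time step of grouping per-pixel embeddings and classifying each group against text embeddings.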
Related papers
- USE: Universal Segment Embeddings for Open-Vocabulary Image Segmentation [33.11010205890195]
The main challenge in open-vocabulary image segmentation now lies in accurately classifying image segments into text-defined categories.
We introduce the Universal Segment Embedding (USE) framework to address this challenge.
The framework comprises two key components: 1) a data pipeline designed to efficiently curate a large number of segment-text pairs at various granularities, and 2) a universal segment embedding model that enables precise segment classification into a vast range of text-defined categories.
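As a rough illustration of the second component above, the snippet below scores segment embeddings against category text embeddings and paints the result into a label map. It is a generic cosine-similarity sketch under assumed shapes, not the USE model itself.

```python
# Hypothetical shapes; the encoders producing these embeddings are not shown.
import torch
import torch.nn.functional as F

def label_segments(segment_emb, category_emb, segment_masks):
    """segment_emb:   (S, D) one embedding per image segment.
    category_emb:  (C, D) one text embedding per category description.
    segment_masks: (S, H, W) boolean mask of each segment.
    Returns an (H, W) map of category ids (pixels covered by no segment stay 0)."""
    seg = F.normalize(segment_emb, dim=-1)
    cat = F.normalize(category_emb, dim=-1)
    seg_label = (seg @ cat.t()).argmax(dim=1)         # (S,) best category per segment
    label_map = torch.zeros(segment_masks.shape[1:], dtype=torch.long)
    for s, mask in enumerate(segment_masks):          # paint each segment's label
        label_map[mask] = seg_label[s]
    return label_map
```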
arXiv Detail & Related papers (2024-06-07T21:41:18Z) - A Lightweight Clustering Framework for Unsupervised Semantic
Segmentation [28.907274978550493]
Unsupervised semantic segmentation aims to categorize each pixel in an image into a corresponding class without the use of annotated data.
We propose a lightweight clustering framework for unsupervised semantic segmentation.
Our framework achieves state-of-the-art results on PASCAL VOC and MS COCO datasets.
arXiv Detail & Related papers (2023-11-30T15:33:42Z) - Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages pretrained vision-language (VL) models to train semantic segmentation models.
ZeroSeg distills the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image.
Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data.
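To make the distillation idea above concrete, here is a toy loss in that spirit: each learnable segment token is pulled toward the frozen VL embedding of the region it summarizes. The shapes and the cosine-distance form are assumptions for illustration, not ZeroSeg's actual objective.

```python
import torch
import torch.nn.functional as F

def segment_distillation_loss(segment_tokens, region_teacher_emb):
    """segment_tokens:     (S, D) learnable student tokens, one per image region.
    region_teacher_emb: (S, D) frozen VL-model embeddings of those regions."""
    student = F.normalize(segment_tokens, dim=-1)
    teacher = F.normalize(region_teacher_emb.detach(), dim=-1)  # no teacher grads
    return (1.0 - (student * teacher).sum(dim=-1)).mean()       # 1 - cosine sim
```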
arXiv Detail & Related papers (2023-06-01T08:47:06Z) - Segment Everything Everywhere All at Once [124.90835636901096]
We present SEEM, a promptable and interactive model for segmenting everything everywhere all at once in an image.
We propose a novel decoding mechanism that enables diverse prompting for all types of segmentation tasks.
We conduct a comprehensive empirical study to validate the effectiveness of SEEM across diverse segmentation tasks.
arXiv Detail & Related papers (2023-04-13T17:59:40Z) - ViewCo: Discovering Text-Supervised Segmentation Masks via Multi-View
Semantic Consistency [126.88107868670767]
We propose Multi-View Consistent learning (ViewCo) for text-supervised semantic segmentation.
We first propose text-to-views consistency modeling to learn correspondence for multiple views of the same input image.
We also propose cross-view segmentation consistency modeling to address the ambiguity issue of text supervision.
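One minimal way to picture a cross-view consistency term is sketched below: soft per-pixel group assignments predicted for two augmented views of the same image are pushed to agree. The pixel-aligned-views assumption and the KL form are simplifications for illustration and do not reproduce ViewCo's formulation.

```python
import torch
import torch.nn.functional as F

def cross_view_consistency(logits_view1, logits_view2):
    """logits_view1/2: (B, K, H, W) per-pixel segment-group logits for two
    photometrically augmented (hence still pixel-aligned) views of one image."""
    log_p1 = F.log_softmax(logits_view1, dim=1)
    p2 = F.softmax(logits_view2, dim=1).detach()      # stop-gradient target view
    return F.kl_div(log_p1, p2, reduction="batchmean")
```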
arXiv Detail & Related papers (2023-01-31T01:57:52Z) - Visual Semantic Segmentation Based on Few/Zero-Shot Learning: An
Overview [47.10687511208189]
This paper focuses on recently published few/zero-shot visual semantic segmentation methods, spanning 2D and 3D space.
Three typical instantiations are examined to uncover the interactions of few/zero-shot learning with visual semantic segmentation.
arXiv Detail & Related papers (2022-11-13T13:39:33Z) - Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that achieves performance equal to that of state-of-the-art supervised methods on 7 benchmark datasets.
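The label-replacement idea can be sketched as follows: dense image features are scored against sentence embeddings of per-class descriptions instead of a fixed classification layer, so adding a class only requires writing a new description. The shapes, the temperature, and the choice of sentence encoder are assumptions here, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def embedding_based_logits(pixel_feat, class_desc_emb, temperature=0.05):
    """pixel_feat:     (B, D, H, W) dense image features.
    class_desc_emb: (C, D) one embedding per class-description paragraph."""
    feat = F.normalize(pixel_feat, dim=1)
    cls = F.normalize(class_desc_emb, dim=-1)
    # (B, C, H, W) cosine-similarity logits, usable with ordinary cross-entropy.
    return torch.einsum("bdhw,cd->bchw", feat, cls) / temperature
```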
arXiv Detail & Related papers (2022-02-04T07:19:09Z) - TransFGU: A Top-down Approach to Fine-Grained Unsupervised Semantic
Segmentation [44.75300205362518]
Unsupervised semantic segmentation aims to obtain high-level semantic representation on low-level visual features without manual annotations.
We propose the first top-down unsupervised semantic segmentation framework for fine-grained segmentation in extremely complicated scenarios.
Our results show that our top-down unsupervised segmentation is robust to both object-centric and scene-centric datasets.
arXiv Detail & Related papers (2021-12-02T18:59:03Z) - Visual Boundary Knowledge Translation for Foreground Segmentation [57.32522585756404]
We make an attempt towards building models that explicitly account for visual boundary knowledge, in the hope of reducing the training effort for segmenting unseen categories.
With only tens of labeled samples as guidance, Trans-Net achieves results on par with fully supervised methods.
arXiv Detail & Related papers (2021-08-01T07:10:25Z)