Rewrite Caption Semantics: Bridging Semantic Gaps for
Language-Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/2309.13505v4
- Date: Thu, 4 Jan 2024 06:46:53 GMT
- Title: Rewrite Caption Semantics: Bridging Semantic Gaps for
Language-Supervised Semantic Segmentation
- Authors: Yun Xing, Jian Kang, Aoran Xiao, Jiahao Nie, Ling Shao, Shijian Lu
- Abstract summary: We propose Concept Curation (CoCu) to bridge the gap between visual and textual semantics in pre-training data.
CoCu achieves superb zero-shot transfer performance and boosts the language-supervised segmentation baseline by a large margin.
- Score: 100.81837601210597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-Language Pre-training has demonstrated its remarkable zero-shot
recognition ability and potential to learn generalizable visual representations
from language supervision. Taking a step ahead, language-supervised semantic
segmentation enables spatial localization of textual inputs by learning pixel
grouping solely from image-text pairs. Nevertheless, the state-of-the-art
suffers from clear semantic gaps between the visual and textual modalities: many
visual concepts that appear in images are missing from their paired captions. Such
semantic misalignment circulates through pre-training, leading to inferior zero-shot
performance on dense predictions due to insufficient visual concepts captured
in textual representations. To close this semantic gap, we propose Concept
Curation (CoCu), a pipeline that leverages CLIP to compensate for the missing
semantics. For each image-text pair, we establish a concept archive that
maintains potential visually-matched concepts with our proposed vision-driven
expansion and text-to-vision-guided ranking. Relevant concepts can thus be
identified via cluster-guided sampling and fed into pre-training, thereby
bridging the gap between visual and textual semantics. Extensive experiments
over a broad suite of 8 segmentation benchmarks show that CoCu achieves superb
zero-shot transfer performance and boosts the language-supervised
segmentation baseline by a large margin, suggesting the value of bridging the
semantic gap in pre-training data.
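The core of the curation step can be illustrated with a minimal sketch: given CLIP-style embeddings for an image and a pool of candidate concept names, rank the concepts by image-text cosine similarity and keep the top matches. The random vectors and concept pool below are illustrative stand-ins, not the paper's actual model, archive, or data:

```python
import numpy as np

def rank_concepts(image_emb, concept_embs, concept_names, top_k=3):
    """Rank candidate concepts by cosine similarity to an image embedding.

    A toy stand-in for text-to-vision-guided ranking: in the real pipeline
    the embeddings would come from a pre-trained CLIP encoder.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    scores = txt @ img                   # cosine similarity per concept
    order = np.argsort(-scores)[:top_k]  # highest similarity first
    return [(concept_names[i], float(scores[i])) for i in order]

rng = np.random.default_rng(0)
names = ["dog", "grass", "frisbee", "car", "ocean"]
image_emb = rng.normal(size=512)
concept_embs = rng.normal(size=(5, 512))
print(rank_concepts(image_emb, concept_embs, names))
```

The top-ranked concepts would then be sampled into pre-training alongside the original caption, compensating for visual semantics the caption omits.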
Related papers
- Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation [44.008094698200026]
FreeDA is a training-free diffusion-augmented method for open-vocabulary semantic segmentation.
FreeDA achieves state-of-the-art performance on five datasets.
arXiv Detail & Related papers (2024-04-09T18:00:25Z)
- Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training [14.340740609933437]
We propose a novel zero-shot image captioning framework with text-only training to reduce the modality gap.
In particular, we introduce a subregion feature aggregation to leverage local region information.
We extend our framework to build a zero-shot VQA pipeline, demonstrating its generality.
arXiv Detail & Related papers (2024-01-04T16:43:46Z)
- CPSeg: Finer-grained Image Semantic Segmentation via Chain-of-Thought Language Prompting [8.12405696290333]
CPSeg is a framework designed to augment image segmentation performance by integrating a novel "Chain-of-Thought" process.
We propose a new vision-language dataset, FloodPrompt, which includes images, semantic masks, and corresponding text information.
arXiv Detail & Related papers (2023-10-24T13:32:32Z)
- CgT-GAN: CLIP-guided Text GAN for Image Captioning [48.276753091051035]
We propose CLIP-guided text GAN (CgT-GAN) to enable the model to "see" real visual modality.
We use adversarial training to teach CgT-GAN to mimic the phrases of an external text corpus.
CgT-GAN outperforms state-of-the-art methods significantly across all metrics.
arXiv Detail & Related papers (2023-08-23T10:25:37Z)
- Vocabulary-free Image Classification [75.38039557783414]
We formalize a novel task, termed Vocabulary-free Image Classification (VIC).
VIC aims to assign to an input image a class that resides in an unconstrained language-induced semantic space, without the prerequisite of a known vocabulary.
CaSED is a method that exploits a pre-trained vision-language model and an external vision-language database to address VIC in a training-free manner.
arXiv Detail & Related papers (2023-06-01T17:19:43Z)
- Fine-Grained Semantically Aligned Vision-Language Pre-Training [151.7372197904064]
Large-scale vision-language pre-training has shown impressive advances in a wide range of downstream tasks.
Existing methods mainly model the cross-modal alignment by the similarity of the global representations of images and texts.
We introduce LOUPE, a fine-grained semantically aLigned visiOn-langUage PrE-training framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions.
arXiv Detail & Related papers (2022-08-04T07:51:48Z)
- CRIS: CLIP-Driven Referring Image Segmentation [71.56466057776086]
We propose an end-to-end CLIP-driven Referring Image Segmentation framework (CRIS).
CRIS resorts to vision-language decoding and contrastive learning for achieving the text-to-pixel alignment.
Our proposed framework significantly outperforms the state-of-the-art performance without any post-processing.
arXiv Detail & Related papers (2021-11-30T07:29:08Z)
- FILIP: Fine-grained Interactive Language-Image Pre-Training [106.19474076935363]
Fine-grained Interactive Language-Image Pre-training achieves finer-level alignment through a cross-modal late interaction mechanism.
We construct a new large-scale image-text pair dataset called FILIP300M for pre-training.
Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks.
arXiv Detail & Related papers (2021-11-09T17:15:38Z)
- Learning Representations by Predicting Bags of Visual Words [55.332200948110895]
Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data.
Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image descriptions.
arXiv Detail & Related papers (2020-02-27T16:45:25Z)
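Several of the papers above (FILIP in particular) score image-text alignment at the token level rather than through a single global embedding. The cross-modal late interaction idea can be sketched as follows: each text token is matched to its most similar image patch, and the per-token maxima are averaged into one alignment score. The random features below are placeholders, not FILIP's actual encoders:

```python
import numpy as np

def late_interaction_score(image_tokens, text_tokens):
    """Token-level cross-modal similarity in the style of late interaction:
    each text token attends to its best-matching image patch, and the
    per-token maxima are averaged into a single alignment score.
    """
    img = image_tokens / np.linalg.norm(image_tokens, axis=1, keepdims=True)
    txt = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    sim = txt @ img.T                     # (num_text_tokens, num_patches)
    return float(sim.max(axis=1).mean())  # best patch per token, then average

rng = np.random.default_rng(1)
patches = rng.normal(size=(49, 64))  # e.g. a 7x7 grid of patch features
tokens = rng.normal(size=(5, 64))    # e.g. 5 text token features
print(late_interaction_score(patches, tokens))
```

Compared with a single global dot product, this finer-grained matching lets individual words localize to image regions, which is the property these fine-grained pre-training methods exploit.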
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.