LMSeg: Language-guided Multi-dataset Segmentation
- URL: http://arxiv.org/abs/2302.13495v1
- Date: Mon, 27 Feb 2023 03:43:03 GMT
- Title: LMSeg: Language-guided Multi-dataset Segmentation
- Authors: Qiang Zhou, Yuang Liu, Chaohui Yu, Jingliang Li, Zhibin Wang, Fan Wang
- Abstract summary: We propose a Language-guided Multi-dataset framework, dubbed LMSeg, which supports both semantic and panoptic segmentation.
LMSeg maps category names to a text embedding space as a unified taxonomy, instead of using inflexible one-hot labels.
Experiments demonstrate that our method achieves significant improvements on four semantic and three panoptic segmentation datasets.
- Score: 15.624630978858324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building a general and inclusive segmentation model that can
recognize more categories in various scenarios is a meaningful and attractive
goal. A
straightforward way is to combine the existing fragmented segmentation datasets
and train a multi-dataset network. However, there are two major issues with
multi-dataset segmentation: (1) the inconsistent taxonomy demands manual
reconciliation to construct a unified taxonomy; (2) the inflexible one-hot
common taxonomy causes time-consuming model retraining and defective
supervision of unlabeled categories. In this paper, we investigate the
multi-dataset segmentation and propose a scalable Language-guided Multi-dataset
Segmentation framework, dubbed LMSeg, which supports both semantic and panoptic
segmentation. Specifically, we introduce a pre-trained text encoder to map the
category names to a text embedding space as a unified taxonomy, instead of
using inflexible one-hot labels. The model dynamically aligns the segment
queries with the category embeddings. Instead of relabeling each dataset with
the unified taxonomy, a category-guided decoding module is designed to
dynamically guide predictions to each dataset's taxonomy. Furthermore, we adopt
a dataset-aware augmentation strategy that assigns each dataset a specific
image augmentation pipeline, which can suit the properties of images from
different datasets. Extensive experiments demonstrate that our method achieves
significant improvements on four semantic and three panoptic segmentation
datasets, and the ablation study evaluates the effectiveness of each component.
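As a concrete illustration of the unified-taxonomy idea above, the following sketch scores segment queries against text embeddings of category names via cosine similarity, so classification logits come from embedding alignment rather than a fixed one-hot head. The random arrays are hypothetical stand-ins for a pre-trained text encoder and the model's query decoder; this is only an illustrative sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 8

# Stand-in for text-encoder outputs: one embedding per category name.
# Adding a dataset's categories only requires encoding their names.
categories = ["road", "sky", "person", "car"]
text_embeddings = rng.normal(size=(len(categories), embed_dim))

# Stand-in for segment queries produced by the decoder (3 queries here).
segment_queries = rng.normal(size=(3, embed_dim))

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Cosine-similarity logits: each query is scored against every category
# embedding; the highest-scoring category is the prediction.
logits = l2_normalize(segment_queries) @ l2_normalize(text_embeddings).T
predictions = [categories[i] for i in logits.argmax(axis=1)]
print(predictions)  # one predicted category name per segment query
```

Because the "classifier weights" are just text embeddings, retraining a one-hot output layer for every new taxonomy is unnecessary, which is the flexibility the abstract emphasizes.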
Related papers
- Label Sharing Incremental Learning Framework for Independent Multi-Label Segmentation Tasks [0.0]
In a setting where segmentation models must be built for multiple datasets, each with its own label set, a straightforward approach is to learn one model per dataset and its labels.
This work proposes a novel label sharing framework where a shared common label space is constructed and each of the individual label sets are systematically mapped to the common labels.
We experimentally validate our method on various medical image segmentation datasets, each involving multi-label segmentation.
arXiv Detail & Related papers (2024-11-17T15:50:25Z)
- TMT-VIS: Taxonomy-aware Multi-dataset Joint Training for Video Instance Segmentation [48.75470418596875]
Training on large-scale datasets can boost the performance of video instance segmentation while the datasets for VIS are hard to scale up due to the high labor cost.
What we possess are numerous isolated field-specific datasets; thus, it is appealing to jointly train models across the aggregation of datasets to enhance data volume and diversity.
We conduct extensive evaluations on four popular and challenging benchmarks, including YouTube-VIS 2019, YouTube-VIS 2021, OVIS, and UVO.
Our model shows significant improvement over the baseline solutions, and sets new state-of-the-art records on all benchmarks.
arXiv Detail & Related papers (2023-12-11T18:50:09Z)
- Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense annotation effort.
Our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling on three benchmark datasets.
arXiv Detail & Related papers (2022-07-18T09:20:04Z)
- Automatic universal taxonomies for multi-domain semantic segmentation [1.4364491422470593]
Training semantic segmentation models on multiple datasets has sparked a lot of recent interest in the computer vision community.
Established datasets have mutually incompatible labels which disrupt principled inference in the wild.
We address this issue by automatic construction of universal taxonomies through iterative dataset integration.
arXiv Detail & Related papers (2022-07-18T08:53:17Z)
- Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that achieves performance equal to that of state-of-the-art supervised methods on 7 benchmark datasets.
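A minimal sketch of how label embeddings can merge label sets from different datasets without manual reconciliation: labels whose embeddings are sufficiently close are treated as the same merged class. The hand-picked vectors below are hypothetical stand-ins for real sentence-encoder outputs, and the threshold and label names are arbitrary illustrations, not values from the paper.

```python
import numpy as np

# Toy stand-in "sentence embeddings" for label descriptions; in practice these
# would come from a pre-trained sentence encoder. Values are hand-picked so
# that semantically equivalent labels land close together (hypothetical data).
label_embeddings = {
    "car":        np.array([0.90, 0.10, 0.00]),
    "automobile": np.array([0.88, 0.12, 0.01]),
    "person":     np.array([0.00, 1.00, 0.10]),
    "pedestrian": np.array([0.02, 0.97, 0.12]),
    "sky":        np.array([0.10, 0.00, 1.00]),
}

dataset_a = ["car", "person", "sky"]
dataset_b = ["automobile", "pedestrian"]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Link each label in dataset B to its closest label in dataset A; if the
# similarity clears a threshold, treat them as the same merged class.
merged = {a: [a] for a in dataset_a}
for b in dataset_b:
    best = max(dataset_a, key=lambda a: cosine(label_embeddings[a],
                                               label_embeddings[b]))
    if cosine(label_embeddings[best], label_embeddings[b]) > 0.95:
        merged[best].append(b)
    else:
        merged[b] = [b]

print(merged)
# {'car': ['car', 'automobile'], 'person': ['person', 'pedestrian'], 'sky': ['sky']}
```

The same similarity test applied across many datasets is one way an embedding space can replace the manual taxonomy reconciliation that the LMSeg abstract identifies as a major bottleneck.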
arXiv Detail & Related papers (2022-02-04T07:19:09Z)
- MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z)
- Simple multi-dataset detection [83.9604523643406]
We present a simple method for training a unified detector on multiple large-scale datasets.
We show how to automatically integrate dataset-specific outputs into a common semantic taxonomy.
Our approach does not require manual taxonomy reconciliation.
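One way dataset-specific outputs can coexist with a common taxonomy is to predict over the unified label set internally and map each unified class back to a dataset's native label ids, with `None` marking classes that a dataset does not annotate. The names below are hypothetical, and this is only an illustrative sketch, not the paper's actual integration method.

```python
# Unified taxonomy shared by all datasets (hypothetical classes).
unified = ["car", "person", "sky", "bicycle"]

# Per-dataset mapping from unified classes to that dataset's label ids;
# classes absent from a dataset are simply unmapped for that dataset.
dataset_label_ids = {
    "street_dataset": {"car": 0, "person": 1, "bicycle": 2},
    "sky_dataset":    {"sky": 0},
}

def to_dataset_label(unified_prediction, dataset):
    """Route a unified-taxonomy prediction to one dataset's label space."""
    return dataset_label_ids[dataset].get(unified_prediction)

print(to_dataset_label("car", "street_dataset"))  # 0
print(to_dataset_label("car", "sky_dataset"))     # None
```

Returning `None` for unmapped classes makes explicit which predictions fall outside a dataset's annotation scope, the "unlabeled categories" problem noted in the LMSeg abstract.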
arXiv Detail & Related papers (2021-02-25T18:55:58Z)
- Minimally-Supervised Structure-Rich Text Categorization via Learning on Text-Rich Networks [61.23408995934415]
We propose a novel framework for minimally supervised categorization by learning from the text-rich network.
Specifically, we jointly train two modules with different inductive biases -- a text analysis module for text understanding and a network learning module for class-discriminative, scalable network learning.
Our experiments show that given only three seed documents per category, our framework can achieve an accuracy of about 92%.
arXiv Detail & Related papers (2021-02-23T04:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all information on this site) and is not responsible for any consequences of its use.