What a MESS: Multi-Domain Evaluation of Zero-Shot Semantic Segmentation
- URL: http://arxiv.org/abs/2306.15521v3
- Date: Sat, 16 Dec 2023 21:11:07 GMT
- Title: What a MESS: Multi-Domain Evaluation of Zero-Shot Semantic Segmentation
- Authors: Benedikt Blumenstiel, Johannes Jakubik, Hilde Kühne and Michael Vössing
- Abstract summary: We build a benchmark for Multi-domain Evaluation of Semantic Segmentation (MESS).
MESS allows a holistic analysis of performance across a wide range of domain-specific datasets.
We evaluate eight recently published models on the proposed MESS benchmark and analyze the characteristics that drive the performance of zero-shot transfer models.
- Score: 2.7036595757881323
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While semantic segmentation has seen tremendous improvements in the
past, significant labeling efforts are still necessary, and generalization to
classes that were not present during training remains limited.
To address this problem, zero-shot semantic segmentation makes use of large
self-supervised vision-language models, allowing zero-shot transfer to unseen
classes. In this work, we build a benchmark for Multi-domain Evaluation of
Semantic Segmentation (MESS), which allows a holistic analysis of performance
across a wide range of domain-specific datasets such as medicine, engineering,
earth monitoring, biology, and agriculture. To do this, we reviewed 120
datasets, developed a taxonomy, and classified the datasets according to it. We
selected a representative subset of 22 datasets and propose it as the MESS
benchmark. We evaluate eight recently published models on the proposed benchmark
and analyze the characteristics that drive the performance of zero-shot transfer
models. The toolkit is available at
https://github.com/blumenstiel/MESS.
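To make the evaluation protocol concrete, here is a minimal Python sketch of a multi-domain zero-shot evaluation loop in the spirit of MESS. The dataset identifiers and the `load_dataset`/`model.segment` interfaces are placeholders, not the toolkit's actual API.

```python
import numpy as np

# Hypothetical dataset identifiers; illustrative only, not the MESS
# toolkit's actual dataset names.
DATASETS = ["medical_demo", "earth_monitoring_demo", "agriculture_demo"]

def mean_iou(pred, gt, num_classes):
    """Mean IoU over the classes that actually occur in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def evaluate_zero_shot(model, load_dataset):
    """Evaluate a zero-shot model: it only ever sees class *names*."""
    results = {}
    for name in DATASETS:
        images, masks, class_names = load_dataset(name)  # assumed loader
        scores = [
            mean_iou(model.segment(img, class_names), gt, len(class_names))
            for img, gt in zip(images, masks)
        ]
        results[name] = float(np.mean(scores))
    return results  # per-dataset mIoU, not one pooled number
```

Reporting per-dataset mIoU rather than a single pooled score is what lets such a benchmark expose domain-specific strengths and weaknesses.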
Related papers
- Joint semi-supervised and contrastive learning enables zero-shot domain-adaptation and multi-domain segmentation [1.5393913074555419]
SegCLR is a versatile framework designed to segment volumetric images across different domains.
We demonstrate the superior performance of SegCLR through a comprehensive evaluation.
arXiv Detail & Related papers (2024-05-08T18:10:59Z)
- Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation [17.69933345468061]
Annotation scarcity has become a major obstacle to training powerful deep-learning models for medical image segmentation.
We introduce a Versatile Semi-supervised framework to exploit more unlabeled data for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-11-20T11:35:52Z)
- Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages existing pretrained vision-language (VL) models to train semantic segmentation models without human labels.
ZeroSeg does so by distilling the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image; a minimal sketch of this distillation idea follows the entry.
Our approach achieves state-of-the-art performance compared to other zero-shot segmentation methods trained on the same data.
arXiv Detail & Related papers (2023-06-01T08:47:06Z)
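A much-simplified view of the distillation step mentioned above: align learnable segment tokens with frozen VL features via a cosine-distance loss. This generic objective is an assumption for illustration, not ZeroSeg's exact multi-view loss.

```python
import torch.nn.functional as F

def segment_distillation_loss(segment_tokens, vl_features):
    """
    Pull each learnable segment token toward the frozen vision-language
    feature pooled over its image region. Both inputs: (num_segments, dim).
    A generic stand-in for ZeroSeg's actual objective.
    """
    s = F.normalize(segment_tokens, dim=-1)
    t = F.normalize(vl_features.detach(), dim=-1)  # teacher stays frozen
    return (1.0 - (s * t).sum(dim=-1)).mean()      # mean cosine distance
```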
- Text2Seg: Remote Sensing Image Semantic Segmentation via Text-Guided Visual Foundation Models [7.452422412106768]
We propose a novel method named Text2Seg for remote sensing semantic segmentation.
It overcomes the dependency on extensive annotations by employing an automatic prompt generation process.
We show that Text2Seg significantly improves zero-shot prediction performance over the vanilla SAM model; a sketch of the prompt-to-SAM flow follows the entry.
arXiv Detail & Related papers (2023-04-20T18:39:41Z)
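The prompt-to-SAM flow can be sketched as follows. Here `text_points` stands in for Text2Seg's automatic prompt generation (not reproduced here); the SAM calls follow the public segment-anything package, with the checkpoint path as an assumption.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def text_guided_segment(image, text_points, checkpoint="sam_vit_b.pth"):
    """
    Segment one class with vanilla SAM from point prompts that some
    text-grounding step produced for that class (that step is assumed).
    image: HWC uint8 RGB array; text_points: list of (x, y) coordinates.
    """
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=np.asarray(text_points, dtype=np.float32),
        point_labels=np.ones(len(text_points), dtype=int),  # all foreground
    )
    return masks[np.argmax(scores)]  # highest-scoring mask for this class
```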
- Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class (see the sketch after this entry).
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that matches state-of-the-art supervised methods on 7 benchmark datasets.
arXiv Detail & Related papers (2022-02-04T07:19:09Z)
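The label-to-embedding idea reduces to scoring pixel features against encoded class descriptions; the `text_encoder` callable and feature shapes below are assumed interfaces, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def segment_with_descriptions(pixel_features, descriptions, text_encoder):
    """
    Classify pixels against embeddings of free-text class descriptions,
    so supporting an unseen class only requires writing a description.
    pixel_features: (H, W, D) from a segmentation backbone (assumed)
    text_encoder:   callable returning a (C, D) tensor (assumed)
    """
    class_emb = F.normalize(text_encoder(descriptions), dim=-1)  # (C, D)
    pix = F.normalize(pixel_features, dim=-1)                    # (H, W, D)
    logits = torch.einsum("hwd,cd->hwc", pix, class_emb)         # cosine scores
    return logits.argmax(dim=-1)                                 # (H, W) labels
```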
- MSeg: A Composite Dataset for Multi-domain Semantic Segmentation [100.17755160696939]
We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains.
We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images; the relabeling idea is sketched after this entry.
A model trained on MSeg ranks first on the WildDash-v1 leaderboard for robust semantic segmentation, with no exposure to WildDash data during training.
arXiv Detail & Related papers (2021-12-27T16:16:35Z)
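At its core, relabeling into a shared taxonomy is a per-dataset lookup table; the class names and ids below are invented for illustration and are not MSeg's actual taxonomy.

```python
import numpy as np

# Invented unified taxonomy and per-dataset mappings, for illustration only.
UNIFIED = {"person": 0, "rider": 1, "vehicle": 2}
DATASET_TO_UNIFIED = {
    "dataset_a": {11: UNIFIED["person"], 12: UNIFIED["rider"], 13: UNIFIED["vehicle"]},
    "dataset_b": {3: UNIFIED["person"], 7: UNIFIED["vehicle"]},
}

def relabel(mask, dataset, ignore_index=255):
    """Remap a dataset-specific segmentation mask into the unified label space."""
    out = np.full_like(mask, ignore_index)  # unmapped classes are ignored
    for src, dst in DATASET_TO_UNIFIED[dataset].items():
        out[mask == src] = dst
    return out
```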
- Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels; its entropy-based filtering step is sketched below.
arXiv Detail & Related papers (2021-12-03T13:31:59Z)
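The entropy-based part of EUMS can be illustrated as a simple confidence split: keep pseudo-labels where predictive entropy is low, ignore the rest. The quantile threshold is an assumed hyperparameter, not the paper's setting.

```python
import torch

def split_pseudo_labels(probs, keep_quantile=0.8, ignore_index=255):
    """
    Entropy-based pseudo-label filtering in the spirit of EUMS: pixels
    with low predictive entropy keep their pseudo-label; the rest are
    ignored (and could be revisited during self-training).
    probs: (C, H, W) softmax probabilities for one image.
    """
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=0)  # (H, W)
    cutoff = torch.quantile(entropy.flatten(), keep_quantile)
    pseudo = probs.argmax(dim=0)
    pseudo[entropy > cutoff] = ignore_index  # drop unreliable pixels
    return pseudo
```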
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)