GOSS: Towards Generalized Open-set Semantic Segmentation
- URL: http://arxiv.org/abs/2203.12116v1
- Date: Wed, 23 Mar 2022 01:03:06 GMT
- Title: GOSS: Towards Generalized Open-set Semantic Segmentation
- Authors: Jie Hong, Weihao Li, Junlin Han, Jiyang Zheng, Pengfei Fang, Mehrtash
Harandi and Lars Petersson
- Abstract summary: We present and study a new image segmentation task, called Generalized Open-set Semantic Segmentation (GOSS).
GOSS classifies pixels as belonging to known classes, and clusters (or groups) of pixels of unknown class are labelled as such.
We believe our new GOSS task can produce an expressive image understanding for future research.
- Score: 38.00665104750745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present and study a new image segmentation task, called
Generalized Open-set Semantic Segmentation (GOSS). Previously, with the
well-known open-set semantic segmentation (OSS), the intelligent agent only
detects the unknown regions without further processing, limiting its
perception of the environment. It stands to reason that further analysis of
the detected unknown pixels would be beneficial. Therefore, we propose GOSS,
which unifies the abilities of two well-defined segmentation tasks, OSS and
generic segmentation (GS), in a holistic way. Specifically, GOSS classifies
pixels as belonging to known classes, and clusters (or groups) of pixels of
unknown class are labelled as such. To evaluate this new expanded task, we
further propose a metric which balances the pixel classification and clustering
aspects. Moreover, we build benchmark tests on top of existing datasets and
propose a simple neural architecture as a baseline, which jointly predicts
pixel classification and clustering under open-set settings. Our experiments on
multiple benchmarks demonstrate the effectiveness of our baseline. We believe
our new GOSS task can produce an expressive image understanding for future
research. Code will be made available.
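As a rough illustration of the "balanced metric" idea, a GOSS-style score could combine a classification measure over known pixels with a clustering measure over unknown pixels, e.g. via a harmonic mean. This is a sketch under stated assumptions, not the paper's actual metric: the function names, the purity-based clustering score, and the harmonic-mean combination are all hypothetical.

```python
import numpy as np

def known_pixel_iou(pred, gt, known_classes):
    """Mean IoU over the known classes (the pixel-classification aspect)."""
    ious = []
    for c in known_classes:
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def unknown_cluster_purity(cluster_ids, gt_instance_ids, unknown_mask):
    """Toy clustering score on unknown pixels: purity of each predicted
    cluster with respect to ground-truth unknown instances."""
    total = unknown_mask.sum()
    if total == 0:
        return 0.0
    score = 0
    for k in np.unique(cluster_ids[unknown_mask]):
        members = gt_instance_ids[unknown_mask & (cluster_ids == k)]
        score += np.bincount(members).max()  # size of the dominant instance
    return float(score / total)

def goss_score(miou, cluster_quality):
    """Harmonic mean balancing the two aspects (an assumption: the paper's
    exact balancing formula is not given in this summary)."""
    if miou + cluster_quality == 0:
        return 0.0
    return 2 * miou * cluster_quality / (miou + cluster_quality)
```

The harmonic mean is chosen here only because it penalizes a model that excels at one aspect while failing the other, which matches the stated goal of balancing classification and clustering.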
Related papers
- A Lightweight Clustering Framework for Unsupervised Semantic
Segmentation [28.907274978550493]
Unsupervised semantic segmentation aims to categorize each pixel in an image into a corresponding class without the use of annotated data.
We propose a lightweight clustering framework for unsupervised semantic segmentation.
Our framework achieves state-of-the-art results on PASCAL VOC and MS COCO datasets.
arXiv Detail & Related papers (2023-11-30T15:33:42Z) - Generalized Category Discovery in Semantic Segmentation [43.99230778597973]
This paper explores a novel setting called Generalized Category Discovery in Semantic Segmentation (GCDSS).
GCDSS aims to segment unlabeled images given prior knowledge from a labeled set of base classes.
In contrast to Novel Category Discovery in Semantic Segmentation (NCDSS), GCDSS drops the requirement that each unlabeled image contain at least one novel class.
arXiv Detail & Related papers (2023-11-20T04:11:16Z) - Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z) - Deep Hierarchical Semantic Segmentation [76.40565872257709]
Hierarchical semantic segmentation (HSS) aims at a structured, pixel-wise description of visual observations in terms of a class hierarchy.
The proposed HSSN casts HSS as a pixel-wise multi-label classification task, bringing only minimal architecture changes to current segmentation models.
With hierarchy-induced margin constraints, HSSN reshapes the pixel embedding space, so as to generate well-structured pixel representations.
arXiv Detail & Related papers (2022-03-27T15:47:44Z) - Maximize the Exploration of Congeneric Semantics for Weakly Supervised
Semantic Segmentation [27.155133686127474]
We construct a graph neural network (P-GNN) based on the self-detected patches from different images that contain the same class labels.
We conduct experiments on the popular PASCAL VOC 2012 benchmarks, and our model yields state-of-the-art performance.
arXiv Detail & Related papers (2021-10-08T08:59:16Z) - GP-S3Net: Graph-based Panoptic Sparse Semantic Segmentation Network [1.9949920338542213]
GP-S3Net is a proposal-free approach that identifies objects without requiring object proposals.
Our new design consists of a novel instance-level network to process the semantic results.
Extensive experiments demonstrate that GP-S3Net outperforms the current state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-18T21:49:58Z) - KRADA: Known-region-aware Domain Alignment for Open World Semantic
Segmentation [64.03817806316903]
In semantic segmentation, we aim to train a pixel-level classifier to assign category labels to all pixels in an image.
In an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.
We propose an end-to-end learning framework, known-region-aware domain alignment (KRADA), to distinguish unknown classes while aligning distributions of known classes in labeled and unlabeled open-world images.
arXiv Detail & Related papers (2021-06-11T08:43:59Z) - Exemplar-Based Open-Set Panoptic Segmentation Network [79.99748041746592]
We extend panoptic segmentation to the open-world and introduce an open-set panoptic segmentation (OPS) task.
We investigate the practical challenges of the task and construct a benchmark on top of an existing dataset, COCO.
We propose a novel exemplar-based open-set panoptic segmentation network (EOPSN) inspired by exemplar theory.
arXiv Detail & Related papers (2021-05-18T07:59:21Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.