Visual Boundary Knowledge Translation for Foreground Segmentation
- URL: http://arxiv.org/abs/2108.00379v1
- Date: Sun, 1 Aug 2021 07:10:25 GMT
- Title: Visual Boundary Knowledge Translation for Foreground Segmentation
- Authors: Zunlei Feng, Lechao Cheng, Xinchao Wang, Xiang Wang, Yajie Liu,
Xiangtong Du, Mingli Song
- Abstract summary: We make an attempt towards building models that explicitly account for visual boundary knowledge, in the hope of reducing the training effort for segmenting unseen categories.
With only tens of labeled samples as guidance, Trans-Net achieves results on par with fully supervised methods.
- Score: 57.32522585756404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When confronted with objects of unknown types in an image, humans can
effortlessly and precisely tell their visual boundaries. This recognition
mechanism and underlying generalization capability stand in contrast with
state-of-the-art image segmentation networks, which rely on large-scale,
category-aware annotated training samples. In this paper, we attempt to build
models that explicitly account for visual boundary knowledge, in the hope of
reducing the training effort for segmenting unseen categories.
Specifically, we investigate a new task termed Boundary Knowledge
Translation (BKT). Given a set of fully labeled categories, BKT aims to
translate the visual boundary knowledge learned from the labeled categories, to
a set of novel categories, each of which is provided with only a few labeled
samples. To this end, we propose a Translation Segmentation Network
(Trans-Net), which comprises a segmentation network and two boundary
discriminators. The segmentation network, combined with a boundary-aware
self-supervised mechanism, is devised to conduct foreground segmentation, while
the two discriminators work together in an adversarial manner to ensure an
accurate segmentation of the novel categories under light supervision.
Exhaustive experiments demonstrate that, with only tens of labeled samples as
guidance, Trans-Net achieves results on par with fully supervised methods.
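The adversarial setup described above (a segmentation network trained against two boundary discriminators) can be sketched numerically. The discriminator roles, the binary cross-entropy losses, and the weighting factor `lam` below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single probability score."""
    eps = 1e-7
    p = min(max(pred, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def trans_net_losses(d_fg_real, d_fg_fake, d_bg_real, d_bg_fake,
                     seg_loss, lam=0.1):
    """Combine a supervised segmentation loss with two adversarial
    boundary-discriminator terms. `lam` is a hypothetical weight; the
    two discriminators here stand in for the paper's boundary
    discriminators without claiming their exact inputs.
    """
    # Each discriminator is trained to score boundaries of ground-truth
    # masks as real (1) and boundaries of predicted masks as fake (0).
    loss_d = (bce(d_fg_real, 1.0) + bce(d_fg_fake, 0.0)
              + bce(d_bg_real, 1.0) + bce(d_bg_fake, 0.0))
    # The segmentation network minimizes its supervised loss while
    # trying to make both discriminators score its predictions as real.
    loss_g = seg_loss + lam * (bce(d_fg_fake, 1.0) + bce(d_bg_fake, 1.0))
    return loss_d, loss_g
```

In a full training loop the two losses would be minimized in alternation, updating the discriminators on `loss_d` and the segmentation network on `loss_g`, as is standard for adversarial training.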
Related papers
- TAX: Tendency-and-Assignment Explainer for Semantic Segmentation with
Multi-Annotators [31.36818611460614]
Tendency-and-Assignment Explainer (TAX) is designed to offer interpretability at the annotator and assignment levels.
We show that our TAX can be applied to state-of-the-art network architectures with comparable performances.
arXiv Detail & Related papers (2023-02-19T12:40:22Z)
- Learning to Detect Semantic Boundaries with Image-level Class Labels [14.932318540666548]
This paper presents the first attempt to learn semantic boundary detection using image-level class labels as supervision.
Our method starts by estimating coarse areas of object classes through attentions drawn by an image classification network.
We design a new neural network architecture that can learn to estimate semantic boundaries reliably even with uncertain supervision.
arXiv Detail & Related papers (2022-12-15T01:56:22Z)
- Open-world Semantic Segmentation via Contrasting and Clustering
Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense annotation effort.
Our method can directly segment objects of arbitrary categories and, on three benchmark datasets, outperforms zero-shot segmentation methods that require data labeling.
arXiv Detail & Related papers (2022-07-18T09:20:04Z)
- A Survey on Label-efficient Deep Segmentation: Bridging the Gap between
Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z)
- Boundary Knowledge Translation based Reference Semantic Segmentation [62.60078935335371]
We introduce a Reference segmentation Network (Ref-Net) to conduct visual boundary knowledge translation.
Inspired by the human recognition mechanism, RSMTM is devised to segment only objects of the same category, based on the features of the reference objects.
With tens of fine-grained annotated samples as guidance, Ref-Net achieves results on par with fully supervised methods on six datasets.
arXiv Detail & Related papers (2021-08-01T07:40:09Z)
- Novel Visual Category Discovery with Dual Ranking Statistics and Mutual
Knowledge Distillation [16.357091285395285]
We tackle the problem of grouping unlabelled images from new classes into different semantic partitions.
This is a more realistic and challenging setting than conventional semi-supervised learning.
We propose a two-branch learning framework for this problem, with one branch focusing on local part-level information and the other branch focusing on overall characteristics.
arXiv Detail & Related papers (2021-07-07T17:14:40Z)
- Weakly-Supervised Semantic Segmentation via Sub-category Exploration [73.03956876752868]
We propose a simple yet effective approach to enforce the network to pay attention to other parts of an object.
Specifically, we perform clustering on image features to generate pseudo sub-category labels within each annotated parent class.
We conduct extensive analysis to validate the proposed method and show that our approach performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2020-08-03T20:48:31Z)
- Commonality-Parsing Network across Shape and Appearance for Partially
Supervised Instance Segmentation [71.59275788106622]
We propose to learn the underlying class-agnostic commonalities that can be generalized from mask-annotated categories to novel categories.
Our model significantly outperforms the state-of-the-art methods on both partially supervised setting and few-shot setting for instance segmentation on COCO dataset.
arXiv Detail & Related papers (2020-07-24T07:23:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.