Discovering Latent Classes for Semi-Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/1912.12936v4
- Date: Mon, 8 Mar 2021 21:16:53 GMT
- Title: Discovering Latent Classes for Semi-Supervised Semantic Segmentation
- Authors: Olga Zatsarynna, Johann Sawatzky, Juergen Gall
- Abstract summary: This paper studies the problem of semi-supervised semantic segmentation.
We learn latent classes consistent with semantic classes on labeled images.
We show that the proposed method achieves state-of-the-art results for semi-supervised semantic segmentation.
- Score: 18.5909667833129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High annotation costs are a major bottleneck for the training of semantic
segmentation systems. Therefore, methods working with less annotation effort
are of special interest. This paper studies the problem of semi-supervised
semantic segmentation. This means that only a small subset of the training
images is annotated while the other training images do not contain any
annotation. In order to leverage the information present in the unlabeled
images, we propose to learn a second task that is related to semantic
segmentation but easier. On labeled images, we learn latent classes consistent
with semantic classes so that the variety of semantic classes assigned to a
latent class is as low as possible. On unlabeled images, we predict a
probability map for latent classes and use it as a supervision signal to learn
semantic segmentation. The latent classes, as well as the semantic classes, are
simultaneously predicted by a two-branch network. In our experiments on Pascal
VOC and Cityscapes, we show that the latent classes learned this way have an
intuitive meaning and that the proposed method achieves state-of-the-art
results for semi-supervised semantic segmentation.
Related papers
- SemiVL: Semi-Supervised Semantic Segmentation with Vision-Language Guidance [97.00445262074595]
In SemiVL, we propose to integrate rich priors from vision-language models into semi-supervised semantic segmentation.
We design a language-guided decoder to jointly reason over vision and language.
We evaluate SemiVL on 4 semantic segmentation datasets, where it significantly outperforms previous semi-supervised methods.
arXiv Detail & Related papers (2023-11-27T19:00:06Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm to train semantic segmentation models.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- ISLE: A Framework for Image Level Semantic Segmentation Ensemble [5.137284292672375]
Conventional semantic segmentation networks require massive pixel-wise annotated labels to reach state-of-the-art prediction quality.
We propose ISLE, which employs an ensemble of the "pseudo-labels" for a given set of different semantic segmentation techniques on a class-wise level.
We reach up to 2.4% improvement over ISLE's individual components.
arXiv Detail & Related papers (2023-03-14T13:36:36Z)
- Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples [34.97652735163338]
We propose a novel approach for creating semantic segmentation masks for every object.
Our method takes as input the image-level labels of the class categories present in the image.
The output of this stage provides pixel-level pseudo-labels, instead of the manual pixel-level labels required by supervised methods.
arXiv Detail & Related papers (2021-12-06T17:32:38Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Semantically Meaningful Class Prototype Learning for One-Shot Image Semantic Segmentation [58.96902899546075]
One-shot semantic image segmentation aims to segment the object regions for the novel class with only one annotated image.
Recent works adopt the episodic training strategy to mimic the expected situation at testing time.
We propose to leverage the multi-class label information during the episodic training, which encourages the network to generate more semantically meaningful features for each category.
arXiv Detail & Related papers (2021-02-22T12:07:35Z)
- PCAMs: Weakly Supervised Semantic Segmentation Using Point Supervision [12.284208932393073]
This paper presents a novel procedure for producing semantic segmentation from images given some point level annotations.
We propose training a CNN that is normally fully supervised using our pseudo labels in place of ground truth labels.
Our method achieves state-of-the-art results for point-supervised semantic segmentation on the PASCAL VOC 2012 dataset (Everingham et al., 2010), even outperforming state-of-the-art methods that use stronger bounding box and squiggle supervision.
arXiv Detail & Related papers (2020-07-10T21:25:27Z)
- Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-art results in all these settings, clearly demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.