A Closer Look at Self-training for Zero-Label Semantic Segmentation
- URL: http://arxiv.org/abs/2104.11692v1
- Date: Wed, 21 Apr 2021 14:34:33 GMT
- Title: A Closer Look at Self-training for Zero-Label Semantic Segmentation
- Authors: Giuseppe Pastore, Fabio Cermelli, Yongqin Xian, Massimiliano Mancini,
Zeynep Akata, Barbara Caputo
- Abstract summary: Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
- Score: 53.4488444382874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to segment unseen classes not observed during training is an
important technical challenge in deep learning, because of its potential to
reduce the expensive annotation required for semantic segmentation. Prior
zero-label semantic segmentation works approach this task by learning
visual-semantic embeddings or generative models. However, they are prone to
overfitting on the seen classes because there is no training signal for the unseen classes.
In this paper, we study the challenging generalized zero-label semantic
segmentation task where the model has to segment both seen and unseen classes
at test time. We assume that pixels of unseen classes could be present in the
training images but without being annotated. Our idea is to capture the latent
information on unseen classes by supervising the model with self-produced
pseudo-labels for unlabeled pixels. We propose a consistency regularizer to
filter out noisy pseudo-labels by taking the intersections of the pseudo-labels
generated from different augmentations of the same image. Our framework
generates pseudo-labels and then retrains the model on human-annotated and
pseudo-labelled data. This procedure is repeated for several iterations. As a
result, our approach achieves the new state-of-the-art on PascalVOC12 and
COCO-stuff datasets in the challenging generalized zero-label semantic
segmentation setting, surpassing other existing methods addressing this task
with more complex strategies.
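The consistency regularizer described above can be sketched in a few lines: predictions for two augmented views of the same image are mapped back to a common frame, and only pixels on which the views agree keep their pseudo-label. The snippet below is a minimal NumPy illustration, not the authors' implementation; the function name, the `IGNORE_INDEX` value, and the two-view setup are assumptions for the sake of the example.

```python
import numpy as np

IGNORE_INDEX = 255  # hypothetical label for pixels filtered out as noisy


def consistent_pseudo_labels(logits_a: np.ndarray,
                             logits_b: np.ndarray) -> np.ndarray:
    """Intersect pseudo-labels from two augmentations of one image.

    logits_a, logits_b: (num_classes, H, W) score maps for two augmented
    views, already mapped back to a common spatial frame (e.g. with any
    horizontal flip undone). Pixels where the per-view argmax labels
    disagree are set to IGNORE_INDEX and excluded from retraining.
    """
    labels_a = logits_a.argmax(axis=0)
    labels_b = logits_b.argmax(axis=0)
    return np.where(labels_a == labels_b, labels_a, IGNORE_INDEX)
```

The surviving pseudo-labels would then be mixed with the human annotations and the model retrained, with the whole procedure repeated for several rounds as the abstract describes.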
Related papers
- Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets [51.74296438621836]
We introduce Scribbles for All, a label and training data generation algorithm for semantic segmentation trained on scribble labels.
The main limitation of scribbles as source for weak supervision is the lack of challenging datasets for scribble segmentation.
Scribbles for All provides scribble labels for several popular segmentation datasets and provides an algorithm to automatically generate scribble labels for any dataset with dense annotations.
arXiv Detail & Related papers (2024-08-22T15:29:08Z)
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks via leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z)
- Dataset Diffusion: Diffusion-based Synthetic Dataset Generation for Pixel-Level Semantic Segmentation [6.82236459614491]
We propose a novel method for generating pixel-level semantic segmentation labels using the text-to-image generative model Stable Diffusion.
By utilizing the text prompts, cross-attention, and self-attention of SD, we introduce three new techniques: class-prompt appending, class-prompt cross-attention, and self-attention exponentiation.
These techniques enable us to generate segmentation maps corresponding to synthetic images.
arXiv Detail & Related papers (2023-09-25T17:19:26Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models from sparse query-point annotations.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- RaSP: Relation-aware Semantic Prior for Weakly Supervised Incremental Segmentation [28.02204928717511]
We propose a weakly supervised approach to transfer an objectness prior from the previously learned classes to the new ones.
We show how even a simple pairwise interaction between classes can significantly improve the segmentation mask quality of both old and new classes.
arXiv Detail & Related papers (2023-05-31T14:14:21Z)
- Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation [30.812323329239614]
Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation.
Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels.
We introduce a simple yet effective method harnessing the Segment Anything Model (SAM), a class-agnostic foundation model capable of producing fine-grained instance masks of objects, parts, and subparts.
arXiv Detail & Related papers (2023-05-09T23:24:09Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is even highly competitive compared to the fully supervised counterpart with 100% labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Incremental Learning in Semantic Segmentation from Image Labels [18.404068463921426]
Existing semantic segmentation approaches achieve impressive results, but struggle to update their models incrementally as new categories are uncovered.
This paper proposes a novel framework for Weakly Incremental Learning for Semantic Segmentation, which aims at learning to segment new classes from cheap and widely available image-level labels.
As opposed to existing approaches, which need to generate pseudo-labels offline, we use an auxiliary classifier, trained with image-level labels and regularized by the segmentation model, to obtain pseudo-supervision online and update the model incrementally.
arXiv Detail & Related papers (2021-12-03T12:47:12Z)
- Automatically Discovering and Learning New Visual Categories with Ranking Statistics [145.89790963544314]
We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes.
We learn a general-purpose clustering model and use it to identify the new classes in the unlabelled data.
We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
arXiv Detail & Related papers (2020-02-13T18:53:32Z)
- Discovering Latent Classes for Semi-Supervised Semantic Segmentation [18.5909667833129]
This paper studies the problem of semi-supervised semantic segmentation.
We learn latent classes consistent with semantic classes on labeled images.
We show that the proposed method achieves state-of-the-art results for semi-supervised semantic segmentation.
arXiv Detail & Related papers (2019-12-30T14:16:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.