From colouring-in to pointillism: revisiting semantic segmentation supervision
- URL: http://arxiv.org/abs/2210.14142v1
- Date: Tue, 25 Oct 2022 16:42:03 GMT
- Title: From colouring-in to pointillism: revisiting semantic segmentation supervision
- Authors: Rodrigo Benenson and Vittorio Ferrari
- Abstract summary: We propose a pointillist approach for semantic segmentation annotation, where only point-wise yes/no questions are answered.
We collected and released 22.6M point labels over 4,171 classes on the Open Images dataset.
- Score: 48.637031591058175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prevailing paradigm for producing semantic segmentation training data
relies on densely labelling each pixel of each image in the training set, akin
to colouring-in books. This approach becomes a bottleneck when scaling up in
the number of images, classes, and annotators. Here we propose instead a
pointillist approach for semantic segmentation annotation, where only
point-wise yes/no questions are answered. We explore design alternatives for
such an active learning approach, measure the speed and consistency of human
annotators on this task, show that this strategy enables training good
segmentation models, and that it is suitable for evaluating models at test
time. As concrete proof of the scalability of our method, we collected and
released 22.6M point labels over 4,171 classes on the Open Images dataset. Our
results enable us to rethink the semantic segmentation pipeline of annotation,
training, and evaluation from a pointillist point of view.
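The abstract claims that sparse point-wise yes/no answers suffice to train good segmentation models. Below is a minimal sketch of what that supervision signal could look like in a training loop; the function name pointwise_yes_no_loss and all tensor layouts are illustrative assumptions, not the authors' released pipeline. The loss is a binary cross-entropy with one term per answered question, evaluated only at the annotated points.

```python
import torch
import torch.nn.functional as F

def pointwise_yes_no_loss(logits, points, answers):
    """Binary cross-entropy over sparse point-wise yes/no annotations.

    logits:  (B, C, H, W) per-class scores from any segmentation network,
             read through a sigmoid so every class has its own binary map.
    points:  (N, 4) long tensor of (batch, class, y, x) indices, one row per
             answered question.
    answers: (N,) float tensor, 1.0 for a "yes" answer and 0.0 for a "no".
    """
    b, c, y, x = points.unbind(dim=1)
    selected = logits[b, c, y, x]   # one logit per answered question
    return F.binary_cross_entropy_with_logits(selected, answers)


# Illustrative usage with random tensors (shapes only, no real annotations).
logits = torch.randn(2, 5, 64, 64, requires_grad=True)   # 2 images, 5 classes
points = torch.tensor([[0, 3, 10, 20],   # image 0, class 3, point (y=10, x=20)
                       [1, 0, 40, 7]])   # image 1, class 0, point (y=40, x=7)
answers = torch.tensor([1.0, 0.0])       # "yes", "no"
loss = pointwise_yes_no_loss(logits, points, answers)
loss.backward()
```

Because each answer supervises a single (point, class) logit, annotation cost grows with the number of questions asked rather than with image resolution, which is the scalability argument the abstract makes.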
Related papers
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Shatter and Gather: Learning Referring Image Segmentation with Text Supervision [52.46081425504072]
We present a new model that discovers semantic entities in the input image and then combines the entities relevant to the text query to predict the mask of the referent.
Our method was evaluated on four public benchmarks for referring image segmentation, where it clearly outperformed existing methods for the same task as well as recent open-vocabulary segmentation models.
arXiv Detail & Related papers (2023-08-29T15:39:15Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersection of the pseudo-labels generated from different augmentations of the same image (a minimal sketch of this intersection step is given after this list).
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Rethinking Interactive Image Segmentation: Feature Space Annotation [68.8204255655161]
We propose interactive and simultaneous segment annotation from multiple images guided by feature space projection.
We show that our approach can surpass the accuracy of state-of-the-art methods on foreground segmentation datasets.
arXiv Detail & Related papers (2021-01-12T10:13:35Z)
- Deep Active Learning for Joint Classification & Segmentation with Weak Annotator [22.271760669551817]
CNN visualization and interpretation methods, like class-activation maps (CAMs), are typically used to highlight the image regions linked to class predictions.
We propose an active learning framework, which progressively integrates pixel-level annotations during training.
Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of-the-art CAM-based and active learning (AL) methods.
arXiv Detail & Related papers (2020-10-10T03:25:54Z)
- Few-Shot Semantic Segmentation Augmented with Image-Level Weak Annotations [23.02986307143718]
Recent progress in few-shot semantic segmentation tackles the issue using only a few pixel-level annotated examples.
Our key idea is to learn a better prototype representation of the class by fusing the knowledge from the image-level labeled data.
We propose a new framework, called PAIA, to learn the class prototype representation in a metric space by integrating image-level annotations.
arXiv Detail & Related papers (2020-07-03T04:58:20Z)
- Self-Supervised Tuning for Few-Shot Segmentation [82.32143982269892]
Few-shot segmentation aims at assigning a category label to each image pixel with few annotated samples.
Existing meta-learning methods tend to fail to generate category-specific discriminative descriptors when the visual features extracted from support images are marginalized in the embedding space.
This paper presents an adaptive tuning framework in which the distribution of latent features across different episodes is dynamically adjusted based on a self-segmentation scheme.
arXiv Detail & Related papers (2020-04-12T03:53:53Z)
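As flagged in the self-training entry above, the consistency regularizer keeps a pixel's pseudo-label only where differently augmented views of the same image agree. The snippet below is a minimal sketch of that intersection step; the function name intersect_pseudo_labels and the use of an ignore index are assumptions for illustration, not that paper's exact code.

```python
import torch

def intersect_pseudo_labels(logits_view1, logits_view2, ignore_index=255):
    """Keep a pixel's pseudo-label only where two augmented views agree.

    logits_view1 / logits_view2: (B, C, H, W) predictions for two differently
    augmented copies of the same image, already mapped back to a common
    geometry (e.g. flips undone). Disagreeing pixels are marked ignore_index
    so they contribute nothing to the self-training loss.
    """
    labels1 = logits_view1.argmax(dim=1)   # (B, H, W) hard pseudo-labels, view 1
    labels2 = logits_view2.argmax(dim=1)   # (B, H, W) hard pseudo-labels, view 2
    agree = labels1 == labels2
    return torch.where(agree, labels1, torch.full_like(labels1, ignore_index))
```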