All you need are a few pixels: semantic segmentation with PixelPick
- URL: http://arxiv.org/abs/2104.06394v2
- Date: Thu, 15 Apr 2021 17:04:20 GMT
- Title: All you need are a few pixels: semantic segmentation with PixelPick
- Authors: Gyungin Shin, Weidi Xie, Samuel Albanie
- Abstract summary: In this work, we show that in order to achieve a good level of segmentation performance, all you need are a few well-chosen pixel labels.
We demonstrate how to exploit this phenomenon within an active learning framework, termed PixelPick, to radically reduce labelling cost.
- Score: 30.234492042103966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central challenge for the task of semantic segmentation is the prohibitive
cost of obtaining dense pixel-level annotations to supervise model training. In
this work, we show that in order to achieve a good level of segmentation
performance, all you need are a few well-chosen pixel labels. We make the
following contributions: (i) We investigate the novel semantic segmentation
setting in which labels are supplied only at sparse pixel locations, and show
that deep neural networks can use a handful of such labels to good effect; (ii)
We demonstrate how to exploit this phenomenon within an active learning
framework, termed PixelPick, to radically reduce labelling cost, and propose an
efficient "mouse-free" annotation strategy to implement our approach; (iii) We
conduct extensive experiments to study the influence of annotation diversity
under a fixed budget, model pretraining, model capacity and the sampling
mechanism for picking pixels in this low annotation regime; (iv) We provide
comparisons to the existing state of the art in semantic segmentation with
active learning, and demonstrate comparable performance with up to two orders
of magnitude fewer pixel annotations on the CamVid, Cityscapes and PASCAL VOC
2012 benchmarks; (v) Finally, we evaluate the efficiency of our annotation
pipeline and its sensitivity to annotator error to demonstrate its
practicality.
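
The acquisition step of such a framework can be illustrated with a simple uncertainty-based picker: rank all pixels of an image by the entropy of the model's class posterior and query the top few. Below is a minimal sketch, assuming a trained model that returns per-pixel logits; the names are illustrative and this is not the authors' released code.

```python
# Entropy-based pixel acquisition, a minimal PixelPick-style sketch.
# Assumption: `logits` are unnormalised per-pixel class scores from any
# segmentation network; this is not the paper's exact sampling mechanism.
import torch

def pick_pixels(logits: torch.Tensor, num_pixels: int) -> torch.Tensor:
    """Select the `num_pixels` most uncertain pixels of one image.

    logits: (C, H, W). Returns (num_pixels, 2) integer (row, col) coordinates.
    """
    probs = logits.softmax(dim=0)                                  # (C, H, W)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=0)   # (H, W)
    width = logits.shape[2]
    top = entropy.flatten().topk(num_pixels).indices               # most uncertain first
    return torch.stack([top // width, top % width], dim=1)

# Each queried coordinate is shown to the annotator, who answers with a single
# class label, e.g. via a keypress; this is what makes the workflow "mouse-free".
```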
Related papers
- Learning Camouflaged Object Detection from Noisy Pseudo Label [60.9005578956798]
This paper introduces the first weakly semi-supervised Camouflaged Object Detection (COD) method.
It aims for budget-efficient and high-precision camouflaged object segmentation with an extremely limited number of fully labeled images.
We propose a noise correction loss that facilitates the model's learning of correct pixels in the early learning stage.
When using only 20% of fully labeled data, our method shows superior performance over the state-of-the-art methods.
arXiv Detail & Related papers (2024-07-18T04:53:51Z)
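
The noise correction idea above can be sketched generically as a bootstrapped cross-entropy that blends the given (possibly wrong) label with the model's own early prediction; this is an illustrative stand-in, not the loss proposed in the paper.

```python
# Bootstrapped cross-entropy in the spirit of a noise correction loss: the
# target mixes the noisy annotation with the model's own prediction, so
# confidently re-predicted pixels can override wrong labels over time.
# Illustrative sketch only, not the paper's loss.
import torch
import torch.nn.functional as F

def bootstrapped_ce(logits: torch.Tensor, noisy_labels: torch.Tensor,
                    beta: float = 0.8) -> torch.Tensor:
    """logits: (B, C, H, W); noisy_labels: (B, H, W) integer class ids."""
    probs = logits.softmax(dim=1)
    one_hot = F.one_hot(noisy_labels, logits.shape[1]).permute(0, 3, 1, 2).float()
    target = beta * one_hot + (1 - beta) * probs.detach()  # soft, self-corrected target
    return -(target * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
```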
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm to train semantic segmentation models with query-point supervision.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
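
Training from such point annotations typically reduces to a partial cross-entropy in which unlabelled pixels are masked out; a generic sketch follows, where the ignore value and function names are assumptions rather than the paper's code.

```python
# Partial cross-entropy over sparsely labelled pixels: positions marked with
# IGNORE contribute no loss and no gradient. Generic sketch, not the paper's code.
import torch
import torch.nn.functional as F

IGNORE = 255  # assumed label value for unannotated pixels

def sparse_ce(logits: torch.Tensor, point_labels: torch.Tensor) -> torch.Tensor:
    """logits: (B, C, H, W); point_labels: (B, H, W), IGNORE where unlabelled."""
    return F.cross_entropy(logits, point_labels, ignore_index=IGNORE)
```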
- Pointly-Supervised Panoptic Segmentation [106.68888377104886]
We propose a new approach to applying point-level annotations for weakly-supervised panoptic segmentation.
Instead of the dense pixel-level labels used by fully supervised methods, point-level labels only provide a single point for each target as supervision.
We formulate the problem in an end-to-end framework by simultaneously generating panoptic pseudo-masks from point-level labels and learning from them.
arXiv Detail & Related papers (2022-10-25T12:03:51Z)
- Incremental Learning in Semantic Segmentation from Image Labels [18.404068463921426]
Existing semantic segmentation approaches achieve impressive results, but struggle to update their models incrementally as new categories are uncovered.
This paper proposes a novel framework for weakly incremental learning for semantic segmentation, which aims to learn to segment new classes from cheap and widely available image-level labels.
As opposed to existing approaches, which need to generate pseudo-labels offline, we use an auxiliary classifier, trained with image-level labels and regularized by the segmentation model, to obtain pseudo-supervision online and update the model incrementally.
arXiv Detail & Related papers (2021-12-03T12:47:12Z)
- A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation [40.27705176115985]
Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest.
We propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels.
Our proposed learning model can be viewed as a pixel-level meta-learner.
arXiv Detail & Related papers (2021-11-02T08:28:11Z)
- Superpixel-guided Iterative Learning from Noisy Labels for Medical Image Segmentation [24.557755528031453]
We develop a robust iterative learning strategy that combines noise-aware training of the segmentation network with noisy-label refinement.
Experiments on two benchmarks show that our method outperforms recent state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-21T14:27:36Z)
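
One plausible reading of superpixel-guided refinement is to smooth noisy labels by majority vote within each superpixel; the sketch below uses SLIC for illustration and is not the paper's actual iterative procedure.

```python
# Majority-vote label refinement inside SLIC superpixels. Illustrative only;
# the paper's iterative, noise-aware scheme is more involved than this.
import numpy as np
from skimage.segmentation import slic

def refine_labels(image: np.ndarray, noisy_labels: np.ndarray,
                  n_segments: int = 400) -> np.ndarray:
    """image: (H, W, 3) floats in [0, 1]; noisy_labels: (H, W) integer class ids."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    refined = noisy_labels.copy()
    for s in np.unique(segments):
        mask = segments == s
        refined[mask] = np.bincount(noisy_labels[mask]).argmax()  # majority class
    return refined
```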
- Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem, where only a small set of labeled data is provided alongside a much larger collection of entirely unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to accomplish the consistency in a pixel-to-pixel manner.
arXiv Detail & Related papers (2021-06-27T03:42:40Z)
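
The "directional" aspect can be read as confidence-gated consistency: between two views of the same region, only the less confident prediction receives gradient. The sketch below captures that gating but omits the paper's full contrastive formulation, so treat it as an approximation.

```python
# Confidence-gated, pixel-to-pixel consistency between two views. A rough
# sketch of the directional idea, not the exact DC Loss from the paper.
import torch

def directional_consistency(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """logits_*: (B, C, H, W) predictions for two aligned views of one region."""
    p_a, p_b = logits_a.softmax(dim=1), logits_b.softmax(dim=1)
    conf_a = p_a.max(dim=1, keepdim=True).values          # (B, 1, H, W)
    conf_b = p_b.max(dim=1, keepdim=True).values
    a_leads = (conf_a >= conf_b).float()                  # where view A supervises B
    # stop-gradient on the confident side of each pixel
    loss_ab = -(p_a.detach() * p_b.clamp_min(1e-12).log()).sum(dim=1, keepdim=True)
    loss_ba = -(p_b.detach() * p_a.clamp_min(1e-12).log()).sum(dim=1, keepdim=True)
    return (a_leads * loss_ab + (1.0 - a_leads) * loss_ba).mean()
```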
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
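
At its simplest, group-wise mining treats each image as a graph node and mixes pooled features across the group via affinity-weighted message passing; here is a toy one-round sketch, not the paper's GNN.

```python
# One round of affinity-weighted message passing across a group of images.
# Toy sketch of the group-wise idea; the paper's GNN is richer than this.
import torch
import torch.nn as nn

class GroupMixer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        """node_feats: (N, D), one pooled feature vector per image in the group."""
        affinity = torch.softmax(node_feats @ node_feats.t(), dim=-1)  # (N, N)
        messages = affinity @ node_feats     # aggregate features from related images
        return self.update(torch.cat([node_feats, messages], dim=-1))
```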
- Few-Shot Semantic Segmentation Augmented with Image-Level Weak Annotations [23.02986307143718]
Recent progress in few-shot semantic segmentation tackles the task using only a few pixel-level annotated examples.
Our key idea is to learn a better prototype representation of the class by fusing the knowledge from the image-level labeled data.
We propose a new framework, called PAIA, to learn the class prototype representation in a metric space by integrating image-level annotations.
arXiv Detail & Related papers (2020-07-03T04:58:20Z)
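
The prototype itself is typically built by masked average pooling over support features; the fusion with image-level annotations that PAIA adds is omitted in this minimal sketch.

```python
# Masked average pooling to build a class prototype, plus cosine matching of
# query pixels against it. Standard few-shot machinery; PAIA's image-level
# fusion step is not shown here.
import torch
import torch.nn.functional as F

def class_prototype(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """features: (C, H, W) support features; mask: (H, W) binary foreground mask."""
    mask = mask.float()
    return (features * mask).sum(dim=(1, 2)) / mask.sum().clamp_min(1.0)

def match_to_prototype(features: torch.Tensor, proto: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of every query pixel to the prototype -> (H, W) score map."""
    f = F.normalize(features, dim=0)
    p = F.normalize(proto, dim=0)
    return (f * p[:, None, None]).sum(dim=0)
```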
- Self-Supervised Tuning for Few-Shot Segmentation [82.32143982269892]
Few-shot segmentation aims at assigning a category label to each image pixel with few annotated samples.
Existing meta-learning methods tend to fail at generating category-specific discriminative descriptors when the visual features extracted from support images are marginalized in embedding space.
This paper presents an adaptive tuning framework in which the distribution of latent features across different episodes is dynamically adjusted based on a self-segmentation scheme.
arXiv Detail & Related papers (2020-04-12T03:53:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.