Points2Polygons: Context-Based Segmentation from Weak Labels Using
Adversarial Networks
- URL: http://arxiv.org/abs/2106.02804v1
- Date: Sat, 5 Jun 2021 05:17:45 GMT
- Title: Points2Polygons: Context-Based Segmentation from Weak Labels Using
Adversarial Networks
- Authors: Kuai Yu, Hakeem Frank, Daniel Wilson
- Abstract summary: In applied image segmentation tasks, the ability to provide numerous and precise labels for training is paramount to the accuracy of the model at inference time.
This overhead is often neglected, and recently proposed segmentation architectures rely heavily on the availability and fidelity of ground truth labels to achieve state-of-the-art accuracies.
We introduce Points2Polygons (P2P), a model that uses contextual metric learning techniques to address this problem directly.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In applied image segmentation tasks, the ability to provide numerous and
precise labels for training is paramount to the accuracy of the model at
inference time. However, this overhead is often neglected, and recently
proposed segmentation architectures rely heavily on the availability and
fidelity of ground truth labels to achieve state-of-the-art accuracies. Failure
to acknowledge the difficulty in creating adequate ground truths can lead to an
over-reliance on pre-trained models or a lack of adoption in real-world
applications. We introduce Points2Polygons (P2P), a model that uses contextual
metric learning techniques to address this problem directly.
Points2Polygons performs well against existing fully-supervised segmentation
baselines with limited training data, despite using lightweight segmentation
models (U-Net with a ResNet18 backbone) and having access to only weak labels
in the form of object centroids and no pre-training. We demonstrate this on
several different small but non-trivial datasets. We show that metric learning
using contextual data provides key insights for self-supervised tasks in
general, and allows segmentation models to generalize easily across
traditionally label-intensive domains in computer vision.
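The weak labels P2P trains from are object centroids rather than full polygon masks. As a minimal illustration of what such point supervision looks like (not the paper's actual pipeline), the sketch below expands centroid labels into crude disk-shaped pseudo-masks; the disk radius is an arbitrary assumption for demonstration only.

```python
import numpy as np

def centroids_to_pseudo_masks(centroids, shape, radius=5):
    """Expand point (centroid) labels into disk-shaped pseudo-masks.

    Illustrates the *form* of the weak labels only -- object centroids
    instead of polygons; the fixed radius is an assumption, not part
    of the P2P method.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=np.uint8)
    for cy, cx in centroids:
        # Mark every pixel within `radius` of the centroid.
        mask |= ((ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2).astype(np.uint8)
    return mask

# Two hypothetical object centroids on a 32x32 image.
seed = centroids_to_pseudo_masks([(8, 8), (20, 24)], (32, 32), radius=3)
```

A real system would refine such seeds with learned context rather than use them as-is.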
Related papers
- UNIT: Unsupervised Online Instance Segmentation through Time [69.2787246878521]
We tackle the problem of class-agnostic unsupervised online instance segmentation and tracking.
We propose a new training recipe that enables the online tracking of objects.
Our network is trained on pseudo-labels, eliminating the need for manual annotations.
arXiv Detail & Related papers (2024-09-12T09:47:45Z)
- Physically Feasible Semantic Segmentation [58.17907376475596]
State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion.
Our method, Physically Feasible Semantic (PhyFea), extracts explicit physical constraints that govern spatial class relations.
PhyFea yields significant performance improvements in mIoU over each state-of-the-art network we use.
arXiv Detail & Related papers (2024-08-26T22:39:08Z)
- ContextSeg: Sketch Semantic Segmentation by Querying the Context with Attention [7.783971241874388]
This paper presents ContextSeg - a simple yet highly effective approach to tackling this problem with two stages.
In the first stage, to better encode the shape and positional information of strokes, we propose to predict an extra dense distance field in an autoencoder network.
In the second stage, we treat an entire stroke as a single entity and label a group of strokes within the same semantic part using an auto-regressive Transformer with the default attention mechanism.
arXiv Detail & Related papers (2023-11-28T10:53:55Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is even highly competitive compared to the fully supervised counterpart with 100% labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- A Pixel-Level Meta-Learner for Weakly Supervised Few-Shot Semantic Segmentation [40.27705176115985]
Few-shot semantic segmentation addresses the learning task in which only few images with ground truth pixel-level labels are available for the novel classes of interest.
We propose a novel meta-learning framework, which predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels.
Our proposed learning model can be viewed as a pixel-level meta-learner.
arXiv Detail & Related papers (2021-11-02T08:28:11Z)
- Stateless actor-critic for instance segmentation with high-level priors [3.752550648610726]
Instance segmentation is an important computer vision problem that remains challenging despite recent advances in deep learning-based methods.
We formulate instance segmentation as graph partitioning: the actor-critic predicts edge weights driven by rewards that measure how well the segmented instances conform to high-level priors on object shape, position, or size.
Experiments on toy and real datasets demonstrate that we can achieve excellent performance without any direct supervision based only on a rich set of priors.
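The partitioning step this entry describes can be sketched in a few lines: given per-edge weights (which the paper's actor network would predict, but which are supplied by hand here), merge every edge above a cutoff with union-find. This is only an illustration of partitioning from edge weights, not the paper's training procedure, and the threshold is an assumption.

```python
def partition_from_edge_weights(n_nodes, edges, threshold=0.5):
    """Cluster nodes by merging every edge whose weight exceeds
    `threshold`, using plain union-find with path halving."""
    parent = list(range(n_nodes))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for u, v, w in edges:
        if w > threshold:
            parent[find(u)] = find(v)  # merge the two components

    # Return one representative id per node; equal ids = same instance.
    return [find(i) for i in range(n_nodes)]

# Nodes 0 and 1 strongly connected, node 2 only weakly attached.
labels = partition_from_edge_weights(3, [(0, 1, 0.9), (1, 2, 0.1)])
```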
arXiv Detail & Related papers (2021-07-06T13:20:14Z)
- Streaming Self-Training via Domain-Agnostic Unlabeled Images [62.57647373581592]
We present streaming self-training (SST) that aims to democratize the process of learning visual recognition models.
Key to SST are two crucial observations: (1) domain-agnostic unlabeled images enable us to learn better models with a few labeled examples without any additional knowledge or supervision; and (2) learning is a continuous process and can be done by constructing a schedule of learning updates.
arXiv Detail & Related papers (2021-04-07T17:58:39Z)
- Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images [51.893494939675314]
We propose a novel weakly supervised segmentation framework based on partial points annotation.
We show that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T15:41:29Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
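The teacher-to-pseudo-label step described above can be sketched as follows: convert the teacher's soft predictions into hard labels, keeping only confident pixels and marking the rest as ignore. The confidence cutoff is an assumed value for illustration, not one taken from the paper.

```python
import numpy as np

def pseudo_label(teacher_probs, confidence=0.9):
    """Turn teacher soft predictions into hard pseudo-labels.

    Pixels whose max class probability falls below `confidence` are
    set to -1, the conventional ignore index for segmentation losses.
    """
    conf = teacher_probs.max(axis=-1)       # per-pixel confidence
    labels = teacher_probs.argmax(axis=-1)  # per-pixel hard label
    labels[conf < confidence] = -1          # ignored by the loss
    return labels

# Two pixels, two classes: one confident, one ambiguous.
probs = np.array([[0.95, 0.05], [0.55, 0.45]])
pl = pseudo_label(probs)
```

The student would then be trained jointly on human-annotated labels and these pseudo-labels.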
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
- Reinforced active learning for image segmentation [34.096237671643145]
We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL).
An agent learns a policy to select a subset of small informative image regions -- opposed to entire images -- to be labeled from a pool of unlabeled data.
Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems.
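The selection objective ("label informative regions, not whole images") can be illustrated with a fixed uncertainty heuristic. Note the paper *learns* this selection with a DQN policy; the entropy ranking below is a stand-in used only to make the idea concrete.

```python
import numpy as np

def top_k_regions(region_probs, k=2):
    """Rank candidate regions by mean per-pixel predictive entropy
    and return the indices of the k most uncertain ones.

    A hand-crafted proxy for the learned DQN selection policy, used
    purely for illustration.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -(region_probs * np.log(region_probs + eps)).sum(axis=-1)
    scores = entropy.mean(axis=-1)  # mean entropy over a region's pixels
    return np.argsort(scores)[::-1][:k]

# 3 candidate regions x 4 pixels x 2 classes.
probs = np.array([
    [[0.5, 0.5]] * 4,  # maximally uncertain
    [[0.9, 0.1]] * 4,  # confident
    [[0.7, 0.3]] * 4,  # middling
])
picked = top_k_regions(probs, k=2)
```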
arXiv Detail & Related papers (2020-02-16T14:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.