Pseudo Pixel-level Labeling for Images with Evolving Content
- URL: http://arxiv.org/abs/2105.09975v1
- Date: Thu, 20 May 2021 18:14:19 GMT
- Title: Pseudo Pixel-level Labeling for Images with Evolving Content
- Authors: Sara Mousavi, Zhenning Yang, Kelley Cross, Dawnie Steadman, Audris
Mockus
- Abstract summary: We propose a pseudo-pixel-level label generation technique to reduce the amount of effort for manual annotation of images.
We train two semantic segmentation models with VGG and ResNet backbones on images labeled using our pseudo labeling method and those of a state-of-the-art method.
The results indicate that using our pseudo-labels instead of those generated using the state-of-the-art method in the training process improves the mean-IoU and the frequency-weighted-IoU of the VGG and ResNet-based semantic segmentation models by 3.36%, 2.58%, 10.39%, and 12.91% respectively.
- Score: 5.573543601558405
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Annotating images for semantic segmentation requires intense manual labor and
is a time-consuming and expensive task especially for domains with a scarcity
of experts, such as Forensic Anthropology. We leverage the evolving nature of
images depicting the decay process in human decomposition data to design a
simple yet effective pseudo-pixel-level label generation technique to reduce
the amount of effort for manual annotation of such images. We first identify
sequences of images with a minimum variation that are most suitable to share
the same or similar annotation using an unsupervised approach. Given one
user-annotated image in each sequence, we propagate the annotation to the
remaining images in the sequence by merging it with annotations produced by a
state-of-the-art CAM-based pseudo label generation technique. To evaluate the
quality of our pseudo-pixel-level labels, we train two semantic segmentation
models with VGG and ResNet backbones on images labeled using our pseudo
labeling method and those of a state-of-the-art method. The results indicate
that using our pseudo-labels instead of those generated using the
state-of-the-art method in the training process improves the mean-IoU and the
frequency-weighted-IoU of the VGG and ResNet-based semantic segmentation models
by 3.36%, 2.58%, 10.39%, and 12.91% respectively.
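The propagation-and-merge step described in the abstract can be sketched minimally. The paper does not specify its merge rule, so the confidence threshold, the ignore-label convention, and the function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def merge_pseudo_labels(propagated, cam_pseudo, cam_conf,
                        conf_thresh=0.5, ignore_label=255):
    """Merge a user annotation propagated through an image sequence
    with CAM-based pseudo labels (hypothetical merge rule).

    propagated: (H, W) int mask propagated from the user-annotated image
    cam_pseudo: (H, W) int mask from a CAM-based pseudo-label generator
    cam_conf:   (H, W) float CAM confidence in [0, 1]
    """
    merged = propagated.copy()
    # Where the propagated annotation is missing, fall back to confident
    # CAM predictions; everything else keeps the ignore label.
    missing = propagated == ignore_label
    confident = cam_conf >= conf_thresh
    merged[missing & confident] = cam_pseudo[missing & confident]
    return merged
```

In this sketch the user annotation always wins where it exists; CAM output only fills gaps it is confident about, which mirrors the abstract's idea of merging the two label sources.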
Related papers
- Distilling Self-Supervised Vision Transformers for Weakly-Supervised
Few-Shot Classification & Segmentation [58.03255076119459]
We address the task of weakly-supervised few-shot image classification and segmentation by leveraging a Vision Transformer (ViT).
Our proposed method takes token representations from the self-supervised ViT and leverages their correlations, via self-attention, to produce classification and segmentation predictions.
Experiments on Pascal-5i and COCO-20i demonstrate significant performance gains in a variety of supervision settings.
arXiv Detail & Related papers (2023-07-07T06:16:43Z)
- Learning to Annotate Part Segmentation with Gradient Matching [58.100715754135685]
This paper focuses on tackling semi-supervised part segmentation tasks by generating high-quality images with a pre-trained GAN.
In particular, we formulate the annotator learning as a learning-to-learn problem.
We show that our method can learn annotators from a broad range of labelled images including real images, generated images, and even analytically rendered images.
arXiv Detail & Related papers (2022-11-06T01:29:22Z)
- Dual-Perspective Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels [70.36722026729859]
We propose a dual-perspective semantic-aware representation blending (DSRB) that blends multi-granularity category-specific semantic representation across different images.
The proposed DSRB consistently outperforms current state-of-the-art algorithms on all proportion label settings.
arXiv Detail & Related papers (2022-05-26T00:33:44Z)
- Reference-guided Pseudo-Label Generation for Medical Semantic Segmentation [25.76014072179711]
We propose a novel approach to generate supervision for semi-supervised semantic segmentation.
We use a small number of labeled images as reference material and match pixels in an unlabeled image to the semantics of the best fitting pixel in a reference set.
We achieve the same performance as a standard fully supervised model on X-ray anatomy segmentation, albeit with 95% fewer labeled images.
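The pixel-matching idea above can be sketched as a nearest-neighbor lookup in feature space. The choice of cosine similarity and the function signature are assumptions for illustration; the paper's actual matching procedure may differ:

```python
import numpy as np

def match_pixels_to_reference(query_feats, ref_feats, ref_labels):
    """Assign each query pixel the label of its best-fitting reference pixel.

    query_feats: (Nq, D) per-pixel features of the unlabeled image
    ref_feats:   (Nr, D) per-pixel features from the labeled reference set
    ref_labels:  (Nr,)   labels of the reference pixels
    """
    # Cosine similarity between every query and reference pixel
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = q @ r.T                 # (Nq, Nr) similarity matrix
    nearest = sim.argmax(axis=1)  # index of the best-fitting reference pixel
    return ref_labels[nearest]
```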
arXiv Detail & Related papers (2021-12-01T12:21:24Z)
- Self-supervised Product Quantization for Deep Unsupervised Image Retrieval [21.99902461562925]
Supervised deep learning-based hashing and vector quantization are enabling fast and large-scale image retrieval systems.
We propose the first deep unsupervised image retrieval method dubbed Self-supervised Product Quantization (SPQ) network, which is label-free and trained in a self-supervised manner.
Our method analyzes the image contents to extract descriptive features, allowing us to understand image representations for accurate retrieval.
arXiv Detail & Related papers (2021-09-06T05:02:34Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at feature-level by imposing contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
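A feature-level contrastive objective of the kind mentioned above is commonly instantiated as an InfoNCE-style loss. This NumPy sketch assumes matching rows of two augmented views form the positive pairs; the paper's exact loss may differ:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two augmented views.

    z1, z2: (N, D) feature embeddings of the same N samples under two
    augmentations. Matching rows are positives; all other rows are negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the diagonal entries as the correct class
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

The loss shrinks toward zero when each embedding is far more similar to its own counterpart than to any other sample, which is the behavior a contrastive objective rewards.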
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
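A toy sketch of group-wise message passing with one node per image, as described above. The similarity-weighted edges and the fixed 0.5 mixing weight are illustrative assumptions, not the paper's GNN architecture:

```python
import numpy as np

def group_message_passing(node_feats, steps=1):
    """One illustrative round of message passing over a group of images.

    node_feats: (N, D) features, one node per image in the group.
    Edges are weighted by softmax-normalized feature similarity, so each
    node aggregates semantics shared across the rest of the group.
    """
    for _ in range(steps):
        sim = node_feats @ node_feats.T
        np.fill_diagonal(sim, -np.inf)  # exclude self-loops
        # Row-wise softmax over neighbor similarities (numerically stable)
        w = np.exp(sim - sim.max(axis=1, keepdims=True))
        w = w / w.sum(axis=1, keepdims=True)
        # Mix each node's own features with its aggregated neighborhood
        node_feats = 0.5 * node_feats + 0.5 * (w @ node_feats)
    return node_feats
```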
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Automatic Image Labelling at Pixel Level [21.59653873040243]
We propose an interesting learning approach to generate pixel-level image labellings automatically.
A Guided Filter Network (GFN) is first developed to learn the segmentation knowledge from a source domain.
GFN then transfers such segmentation knowledge to generate coarse object masks in the target domain.
arXiv Detail & Related papers (2020-07-15T00:34:11Z)
- RGB-based Semantic Segmentation Using Self-Supervised Depth Pre-Training [77.62171090230986]
We propose an easily scalable and self-supervised technique that can be used to pre-train any semantic RGB segmentation method.
In particular, our pre-training approach makes use of automatically generated labels that can be obtained using depth sensors.
We show how our proposed self-supervised pre-training with HN-labels can be used to replace ImageNet pre-training.
arXiv Detail & Related papers (2020-02-06T11:16:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.