Automatic Image Labelling at Pixel Level
- URL: http://arxiv.org/abs/2007.07415v2
- Date: Mon, 20 Jul 2020 03:17:32 GMT
- Title: Automatic Image Labelling at Pixel Level
- Authors: Xiang Zhang, Wei Zhang, Jinye Peng, Jianping Fan
- Abstract summary: We propose a learning approach to generate pixel-level image labellings automatically.
A Guided Filter Network (GFN) is first developed to learn segmentation knowledge from a source domain.
The GFN then transfers this segmentation knowledge to generate coarse object masks in the target domain.
- Score: 21.59653873040243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of deep networks for semantic image segmentation largely
depends on the availability of large-scale training images which are labelled
at the pixel level. Typically, such pixel-level image labellings are obtained
manually by a labour-intensive process. To alleviate the burden of manual image
labelling, we propose a learning approach that generates pixel-level image
labellings automatically. A Guided Filter Network (GFN) is first developed to
learn segmentation knowledge from a source domain; the GFN then transfers this
knowledge to generate coarse object masks in the target domain. These coarse
masks are treated as pseudo labels and are integrated to iteratively
optimize/refine the GFN in the target domain. Our experiments on six image sets
demonstrate that the proposed approach generates fine-grained object masks
(i.e., pixel-level object labellings) whose quality is comparable to that of
manually-labelled ones. The proposed approach also achieves better performance
on semantic image segmentation than most existing weakly-supervised approaches.
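As a rough illustration of the training procedure described in the abstract, the sketch below first trains on the labelled source domain and then repeatedly turns the network's own target-domain predictions into pseudo labels for further refinement. The `SegNetStub` model, the data loaders, and the confidence threshold are placeholders assumed for the example; the paper's actual GFN architecture and refinement rules are not reproduced here.

```python
# Minimal sketch of an iterative pseudo-labelling loop in the spirit of the
# abstract above. SegNetStub is a hypothetical stand-in for the GFN, and the
# loaders are assumed to yield (image, label) pairs (source) or images (target).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNetStub(nn.Module):
    """Placeholder dense-prediction network; any segmentation model fits here."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )
    def forward(self, x):
        return self.backbone(x)  # (B, C, H, W) logits

def refine_on_target(model, source_loader, target_loader, rounds=3, conf_thresh=0.8):
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    ce = nn.CrossEntropyLoss(ignore_index=255)  # 255 marks unreliable pixels

    # 1) Learn segmentation knowledge from the labelled source domain
    #    (a single pass is shown; in practice this runs for many epochs).
    for images, labels in source_loader:
        opt.zero_grad()
        ce(model(images), labels).backward()
        opt.step()

    # 2) Iteratively generate coarse masks on the target domain and reuse
    #    them as pseudo labels to refine the model.
    for _ in range(rounds):
        pseudo_batches = []
        with torch.no_grad():
            for images in target_loader:
                probs = F.softmax(model(images), dim=1)
                conf, pseudo = probs.max(dim=1)
                pseudo[conf < conf_thresh] = 255  # drop low-confidence pixels
                pseudo_batches.append((images, pseudo))
        for images, pseudo in pseudo_batches:
            opt.zero_grad()
            ce(model(images), pseudo).backward()
            opt.step()
    return model
```

Ignoring low-confidence pixels via `ignore_index` is a common self-training heuristic used here for illustration; it is not necessarily the refinement rule used in the paper.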
Related papers
- Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels [53.8817160001038]
We propose a novel method, PixelCLIP, to adapt the CLIP image encoder for pixel-level understanding.
To address the challenges of leveraging masks without semantic labels, we devise an online clustering algorithm.
PixelCLIP shows significant performance improvements over CLIP and competitive results compared to caption-supervised methods.
arXiv Detail & Related papers (2024-09-30T01:13:03Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm for training semantic segmentation models from query-point annotations.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation [76.40565872257709]
We develop a unified framework which simultaneously models cross-frame dense correspondence for locally discriminative feature learning.
It is able to directly learn to perform mask-guided sequential segmentation from unlabeled videos.
Our algorithm sets the state of the art on two standard benchmarks (DAVIS17 and YouTube-VOS).
arXiv Detail & Related papers (2023-03-17T16:23:36Z)
- ReFit: A Framework for Refinement of Weakly Supervised Semantic Segmentation using Object Border Fitting for Medical Images [4.945138408504987]
Weakly Supervised Semantic Segmentation (WSSS), relying only on image-level supervision, is a promising way to reduce the need for dense pixel-level annotations.
We propose our novel ReFit framework, which deploys state-of-the-art class activation maps combined with various post-processing techniques (a minimal sketch of the basic CAM-to-mask step appears after this list).
By applying our method to WSSS predictions, we achieved up to 10% improvement over the current state-of-the-art WSSS methods for medical imaging.
arXiv Detail & Related papers (2023-03-14T12:46:52Z)
- CoupAlign: Coupling Word-Pixel with Sentence-Mask Alignments for Referring Image Segmentation [104.5033800500497]
Referring image segmentation aims at localizing all pixels of the visual objects described by a natural language sentence.
Previous works learn to straightforwardly align the sentence embedding and pixel-level embedding for highlighting the referred objects.
We propose CoupAlign, a simple yet effective multi-level visual-semantic alignment method.
arXiv Detail & Related papers (2022-12-04T08:53:42Z)
- From Explanations to Segmentation: Using Explainable AI for Image Segmentation [1.8581514902689347]
We build upon the advances of the Explainable AI (XAI) community and extract a pixel-wise binary segmentation.
We show that we achieve similar results compared to an established U-Net segmentation architecture.
The proposed method can be trained in a weakly supervised fashion, as the training samples need only be labelled at the image level.
arXiv Detail & Related papers (2022-02-01T10:26:10Z)
- Aerial Scene Parsing: From Tile-level Scene Classification to Pixel-wise Semantic Labeling [48.30060717413166]
Given an aerial image, aerial scene parsing (ASP) aims to interpret the semantic structure of the image content by assigning a semantic label to every pixel of the image.
We present Million-AID, a large-scale scene classification dataset containing one million aerial images.
We also report benchmarking experiments using classical convolutional neural networks (CNNs) to achieve pixel-wise semantic labeling.
arXiv Detail & Related papers (2022-01-06T07:40:47Z)
- GANSeg: Learning to Segment by Unsupervised Hierarchical Image Generation [16.900404701997502]
We propose a GAN-based approach that generates images conditioned on latent masks.
We show that such mask-conditioned image generation can be learned faithfully when conditioning the masks in a hierarchical manner.
It also lets us generate image-mask pairs for training a segmentation network, which outperforms the state-of-the-art unsupervised segmentation methods on established benchmarks.
arXiv Detail & Related papers (2021-12-02T07:57:56Z)
- Maximize the Exploration of Congeneric Semantics for Weakly Supervised Semantic Segmentation [27.155133686127474]
We construct a graph neural network (P-GNN) based on the self-detected patches from different images that contain the same class labels.
We conduct experiments on the popular PASCAL VOC 2012 benchmarks, and our model yields state-of-the-art performance.
arXiv Detail & Related papers (2021-10-08T08:59:16Z)
- Pseudo Pixel-level Labeling for Images with Evolving Content [5.573543601558405]
We propose a pseudo-pixel-level label generation technique to reduce the amount of effort for manual annotation of images.
We train two semantic segmentation models with VGG and ResNet backbones on images labeled using our pseudo labeling method and those of a state-of-the-art method.
The results indicate that using our pseudo-labels instead of those generated using the state-of-the-art method in the training process improves the mean-IoU and the frequency-weighted-IoU of the VGG and ResNet-based semantic segmentation models by 3.36%, 2.58%, 10
arXiv Detail & Related papers (2021-05-20T18:14:19Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
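The ReFit entry in the list above mentions deriving masks from class activation maps (CAMs) plus post-processing. The hedged sketch below shows only the generic CAM-to-pseudo-mask step: weighting the final feature maps by the classifier weights for one class, upsampling, and thresholding. `TinyClassifier`, the 0.4 threshold, and the random input are hypothetical stand-ins for illustration, not ReFit's actual components or pipeline.

```python
# Generic CAM-to-pseudo-mask step, assuming an image-level classifier with a
# global-average-pooling head; not ReFit's actual post-processing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Stand-in image-level classifier exposing its final feature maps."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        f = self.features(x)                       # (B, 32, H, W)
        logits = self.fc(f.mean(dim=(2, 3)))       # image-level prediction
        return logits, f

def cam_pseudo_mask(model, image, target_class, thresh=0.4):
    """Turn the classifier's evidence for `target_class` into a binary mask."""
    model.eval()
    with torch.no_grad():
        _, feats = model(image)                              # (1, 32, h, w)
        weights = model.fc.weight[target_class]              # (32,)
        cam = torch.einsum("c,bchw->bhw", weights, feats)    # weighted feature sum
        cam = F.relu(cam)
        cam = cam / (cam.max() + 1e-8)                       # normalise to [0, 1]
        cam = F.interpolate(cam[None], size=image.shape[-2:],
                            mode="bilinear", align_corners=False)[0]
    return (cam > thresh).long()                             # coarse pseudo mask

# Usage on a random image, just to show the shapes involved.
mask = cam_pseudo_mask(TinyClassifier(), torch.rand(1, 3, 64, 64), target_class=1)
```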
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.