Segment Anything is A Good Pseudo-label Generator for Weakly Supervised
Semantic Segmentation
- URL: http://arxiv.org/abs/2305.01275v1
- Date: Tue, 2 May 2023 09:22:38 GMT
- Title: Segment Anything is A Good Pseudo-label Generator for Weakly Supervised
Semantic Segmentation
- Authors: Peng-Tao Jiang, Yuqi Yang
- Abstract summary: This report explores the potential of 'prompt to masks' from the powerful class-agnostic large segmentation model, segment-anything.
Different weak labels are used as prompts to the segment-anything model, generating precise class masks.
Experiments demonstrate that segment-anything can serve as a good pseudo-label generator.
- Score: 8.59909129321984
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Weakly supervised semantic segmentation with weak labels is a long-lived
ill-posed problem. Mainstream methods mainly focus on improving the quality of
pseudo labels. In this report, we attempt to explore the potential of 'prompt
to masks' from the powerful class-agnostic large segmentation model,
segment-anything. Specifically, different weak labels are used as prompts to
the segment-anything model, generating precise class masks. The class masks are
utilized to generate pseudo labels to train the segmentation networks. We have
conducted extensive experiments on the PASCAL VOC 2012 dataset. Experiments
demonstrate that segment-anything can serve as a good pseudo-label generator.
The code will be made publicly available.
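Although the authors' code is not yet released, the 'prompt to masks' step described above is easy to sketch with the public segment_anything package. The snippet below is a minimal, illustrative sketch rather than the paper's implementation: it assumes bounding boxes as the weak labels, feeds them to SAM's SamPredictor as box prompts, and paints the highest-scoring mask per box into a pseudo-label map. The checkpoint file name, the VOC-style class ids, and the background handling are assumptions.
```python
# Hedged sketch: weak box labels -> SAM box prompts -> class masks -> pseudo-label map.
# Requires the official `segment_anything` package and a downloaded ViT-H checkpoint.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # assumed local checkpoint path
predictor = SamPredictor(sam)

def pseudo_label_from_boxes(image, boxes, class_ids, ignore_index=255):
    """image: HxWx3 uint8 RGB array; boxes: list of [x1, y1, x2, y2];
    class_ids: semantic class id (e.g. PASCAL VOC index) for each box."""
    predictor.set_image(image)
    label = np.full(image.shape[:2], ignore_index, dtype=np.uint8)
    for box, cls in zip(boxes, class_ids):
        masks, scores, _ = predictor.predict(box=np.asarray(box), multimask_output=True)
        best = masks[int(np.argmax(scores))]  # keep SAM's highest-scoring mask proposal
        label[best] = cls                     # paint the class id into the pseudo label
    # One simple choice: treat pixels not covered by any prompt as background (class 0);
    # they could equally be left as ignore_index during training.
    label[label == ignore_index] = 0
    return label
```
The resulting pseudo-label maps can then be used as training targets for a standard segmentation network, which is the second stage the abstract describes.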
Related papers
- Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets [51.74296438621836]
We introduce Scribbles for All, a label and training data generation algorithm for semantic segmentation trained on scribble labels.
The main limitation of scribbles as a source of weak supervision is the lack of challenging datasets for scribble segmentation.
Scribbles for All provides scribble labels for several popular segmentation datasets and provides an algorithm to automatically generate scribble labels for any dataset with dense annotations.
arXiv Detail & Related papers (2024-08-22T15:29:08Z)
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks via leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z)
- SegRefiner: Towards Model-Agnostic Segmentation Refinement with Discrete Diffusion Process [102.18226145874007]
We propose a model-agnostic solution called SegRefiner to enhance the quality of object masks produced by different segmentation models.
SegRefiner takes coarse masks as inputs and refines them using a discrete diffusion process.
It consistently improves both the segmentation metrics and boundary metrics across different types of coarse masks.
arXiv Detail & Related papers (2023-12-19T18:53:47Z)
- Segment Anything Model (SAM) Enhanced Pseudo Labels for Weakly Supervised Semantic Segmentation [30.812323329239614]
Weakly supervised semantic segmentation (WSSS) aims to bypass the need for laborious pixel-level annotation by using only image-level annotation.
Most existing methods rely on Class Activation Maps (CAM) to derive pixel-level pseudo-labels.
We introduce a simple yet effective method harnessing the Segment Anything Model (SAM), a class-agnostic foundation model capable of producing fine-grained instance masks of objects, parts, and subparts.
arXiv Detail & Related papers (2023-05-09T23:24:09Z)
- Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples [34.97652735163338]
We propose a novel approach for creating semantic segmentation masks for every object.
Our method takes as input the image-level labels of the class categories present in the image.
The output of this stage provides pixel-level pseudo-labels, instead of the manual pixel-level labels required by supervised methods.
arXiv Detail & Related papers (2021-12-06T17:32:38Z)
- Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on par with it on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersection of the pseudo-labels generated from different augmentations of the same image (a minimal sketch of this step appears after this list).
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Learning Class-Agnostic Pseudo Mask Generation for Box-Supervised Semantic Segmentation [156.9155100983315]
We seek a more accurate learning-based class-agnostic pseudo mask generator tailored to box-supervised semantic segmentation.
Our method can further close the performance gap between box-supervised and fully-supervised models.
arXiv Detail & Related papers (2021-03-09T14:54:54Z)
- PCAMs: Weakly Supervised Semantic Segmentation Using Point Supervision [12.284208932393073]
This paper presents a novel procedure for producing semantic segmentation from images given some point-level annotations.
We propose training a CNN that is normally fully supervised using our pseudo labels in place of ground-truth labels.
Our method achieves state-of-the-art results for point-supervised semantic segmentation on the PASCAL VOC 2012 dataset, even outperforming state-of-the-art methods for stronger bounding box and squiggle supervision.
arXiv Detail & Related papers (2020-07-10T21:25:27Z)
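As an aside on the consistency regularizer described for "A Closer Look at Self-training for Zero-Label Semantic Segmentation" above, the core filtering step can be sketched in a few lines: pseudo-labels predicted from two augmentations of the same image are kept only where they agree, and disagreeing pixels are set to an ignore index. The function name, the ignore value of 255, and the horizontal-flip usage example are illustrative assumptions rather than that paper's exact code.
```python
# Hedged sketch of pseudo-label filtering by agreement across augmentations:
# keep a pixel's label only if both augmented views predict the same class.
import numpy as np

IGNORE_INDEX = 255  # assumed ignore value, as commonly used in semantic segmentation

def intersect_pseudo_labels(label_a, label_b):
    """label_a, label_b: HxW integer class maps predicted from two augmented views
    of the same image, already mapped back to the original image geometry."""
    agree = label_a == label_b
    return np.where(agree, label_a, IGNORE_INDEX).astype(label_a.dtype)

# Usage sketch: predictions from an image and its horizontal flip;
# the flipped prediction is flipped back before intersecting.
# pseudo = intersect_pseudo_labels(pred_original, np.fliplr(pred_flipped))
```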
This list is automatically generated from the titles and abstracts of the papers on this site.