Unsupervised Universal Image Segmentation
- URL: http://arxiv.org/abs/2312.17243v1
- Date: Thu, 28 Dec 2023 18:59:04 GMT
- Title: Unsupervised Universal Image Segmentation
- Authors: Dantong Niu, Xudong Wang, Xinyang Han, Long Lian, Roei Herzig, Trevor
Darrell
- Abstract summary: We propose an Unsupervised Universal Segmentation model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks by leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
- Score: 59.0383635597103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several unsupervised image segmentation approaches have been proposed which
eliminate the need for dense manually-annotated segmentation masks; current
models separately handle either semantic segmentation (e.g., STEGO) or
class-agnostic instance segmentation (e.g., CutLER), but not both (i.e.,
panoptic segmentation). We propose an Unsupervised Universal Segmentation model
(U2Seg) adept at performing various image segmentation tasks -- instance,
semantic and panoptic -- using a novel unified framework. U2Seg generates
pseudo semantic labels for these segmentation tasks by leveraging
self-supervised models followed by clustering; each cluster represents a
different semantic and/or instance membership of pixels. We then self-train the
model on these pseudo semantic labels, yielding substantial performance gains
over specialized methods tailored to each task: a +2.6 AP$^{\text{box}}$ boost
vs. CutLER in unsupervised instance segmentation on COCO and a +7.0 PixelAcc
increase (vs. STEGO) in unsupervised semantic segmentation on COCOStuff.
Moreover, our method sets up a new baseline for unsupervised panoptic
segmentation, which has not been previously explored. U2Seg is also a strong
pretrained model for few-shot segmentation, surpassing CutLER by +5.0
AP$^{\text{mask}}$ when trained in a low-data regime, e.g., with only 1% of COCO
labels. We hope our simple yet effective method can inspire more research on
unsupervised universal image segmentation.
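As a rough illustration of the pipeline the abstract describes, the sketch below clusters self-supervised per-pixel features so that each cluster id acts as a pseudo semantic label. It is a minimal approximation, not the authors' implementation, and assumes precomputed (H, W, D) feature maps from a self-supervised backbone such as DINO:

    # Minimal sketch: cluster self-supervised features; each cluster id
    # then serves as a pseudo semantic label (U2Seg-style, simplified).
    import numpy as np
    from sklearn.cluster import KMeans

    def pseudo_semantic_labels(feature_maps, n_clusters=27):
        """feature_maps: list of (H, W, D) arrays from a self-supervised model."""
        flat = [fm.reshape(-1, fm.shape[-1]) for fm in feature_maps]
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(np.concatenate(flat))
        labels, start = [], 0
        for fm in feature_maps:
            n = fm.shape[0] * fm.shape[1]
            labels.append(km.labels_[start:start + n].reshape(fm.shape[:2]))
            start += n
        return labels

The resulting per-pixel cluster maps would then serve as targets for the self-training stage the abstract mentions.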
Related papers
- Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals [15.258631373740686]
Unsupervised semantic segmentation aims to automatically partition images into semantically meaningful regions by identifying global semantic categories within an image corpus without any form of annotation.
We present PriMaPs - Principal Mask Proposals - decomposing images into semantically meaningful masks based on their feature representation.
This allows us to realize unsupervised semantic segmentation by fitting class prototypes to PriMaPs with an expectation-maximization algorithm, PriMaPs-EM.
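Fitting class prototypes to mask proposals can be pictured as a hard-assignment EM loop (essentially k-means over pooled mask features); the sketch below illustrates that simplification and is not the PriMaPs-EM implementation itself:

    # E-step: assign each mask feature to its most similar prototype;
    # M-step: re-estimate each prototype as the mean of its members.
    import numpy as np

    def fit_prototypes(mask_feats, n_classes, n_iters=20, seed=0):
        """mask_feats: (N, D) float array, one pooled feature per mask proposal."""
        rng = np.random.default_rng(seed)
        protos = mask_feats[rng.choice(len(mask_feats), n_classes, replace=False)]
        for _ in range(n_iters):
            assign = (mask_feats @ protos.T).argmax(axis=1)   # E-step
            for k in range(n_classes):                        # M-step
                members = mask_feats[assign == k]
                if len(members):
                    protos[k] = members.mean(axis=0)
        return protos, assign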
arXiv Detail & Related papers (2024-04-25T17:58:09Z) - SOHES: Self-supervised Open-world Hierarchical Entity Segmentation [82.45303116125021]
This work presents Self-supervised Open-world Hierarchical Entities (SOHES), a novel approach that eliminates the need for human annotations.
We produce abundant high-quality pseudo-labels through visual feature clustering, and rectify the noise in the pseudo-labels via a teacher-student mutual-learning procedure.
Using raw images as the sole training data, our method achieves unprecedented performance in self-supervised open-world segmentation.
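A common way to realize teacher-student mutual learning is an exponential-moving-average (EMA) teacher; the snippet below shows that generic mechanism only and is not SOHES-specific:

    # Generic EMA teacher update used in many mutual-learning setups:
    # the student learns from teacher pseudo-labels, while the teacher
    # slowly tracks the student's weights.
    import torch

    @torch.no_grad()
    def ema_update(teacher, student, momentum=0.999):
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)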
arXiv Detail & Related papers (2023-11-30T15:33:42Z) - A Lightweight Clustering Framework for Unsupervised Semantic Segmentation [28.907274978550493]
Unsupervised semantic segmentation aims to categorize each pixel in an image into a corresponding class without the use of annotated data.
We propose a lightweight clustering framework for unsupervised semantic segmentation.
Our framework achieves state-of-the-art results on PASCAL VOC and MS COCO datasets.
arXiv Detail & Related papers (2023-11-30T15:33:42Z) - Synthetic Instance Segmentation from Semantic Image Segmentation Masks [15.477053085267404]
We propose a novel paradigm called Synthetic Instance Segmentation (SISeg).
SISeg obtains instance segmentation results by leveraging image masks generated by existing semantic segmentation models.
In other words, the proposed model requires no extra human effort and no additional computational cost.
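One simple way to picture deriving instances from a semantic mask is per-class connected components; SISeg's actual pipeline is learned, so the snippet below is only an intuition aid for the semantic-to-instance idea:

    # Illustration only: split each semantic class region into
    # connected components and treat each component as one instance.
    import numpy as np
    from scipy import ndimage

    def instances_from_semantic(sem_mask):
        """sem_mask: (H, W) int array of class ids; 0 = background."""
        instances = []  # list of (class_id, binary_mask) pairs
        for cls in np.unique(sem_mask):
            if cls == 0:
                continue
            components, n = ndimage.label(sem_mask == cls)
            for i in range(1, n + 1):
                instances.append((int(cls), components == i))
        return instances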
arXiv Detail & Related papers (2023-08-02T05:13:02Z) - Unsupervised Hierarchical Semantic Segmentation with Multiview Cosegmentation and Clustering Transformers [47.45830503277631]
Grouping naturally has levels of granularity, creating ambiguity in unsupervised segmentation.
We deliver the first data-driven unsupervised hierarchical semantic segmentation method, called Hierarchical Segment Grouping (HSG).
arXiv Detail & Related papers (2022-04-25T04:40:46Z) - SOLO: A Simple Framework for Instance Segmentation [84.00519148562606]
"instance categories" assigns categories to each pixel within an instance according to the instance's location.
"SOLO" is a simple, direct, and fast framework for instance segmentation with strong performance.
Our approach achieves state-of-the-art results for instance segmentation in terms of both speed and accuracy.
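SOLO's "instance categories" tie each instance to the S x S grid cell containing its center. The helper below illustrates that cell lookup; the grid size and the mass-center definition are simplifications of the paper's formulation:

    # Map an instance mask to the grid cell holding its mass center.
    import numpy as np

    def grid_cell_for_instance(mask, grid_size=12):
        """mask: (H, W) binary mask; returns the (row, col) grid cell."""
        ys, xs = np.nonzero(mask)
        row = int(ys.mean() / mask.shape[0] * grid_size)
        col = int(xs.mean() / mask.shape[1] * grid_size)
        return min(row, grid_size - 1), min(col, grid_size - 1)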
arXiv Detail & Related papers (2021-06-30T09:56:54Z) - Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on par with it on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z) - A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
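The intersection-based filtering described above can be written in a few lines; this sketch assumes both predictions are already warped back to a common image frame, and the ignore index is a convention rather than the paper's exact setup:

    # Keep a pixel's pseudo-label only where two augmented views agree;
    # pixels where the views disagree are marked as "ignore".
    import numpy as np

    IGNORE = 255  # conventional ignore index for segmentation losses

    def intersect_pseudo_labels(pred_a, pred_b):
        """pred_a, pred_b: (H, W) label maps from two augmentations."""
        return np.where(pred_a == pred_b, pred_a, IGNORE)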
arXiv Detail & Related papers (2021-04-21T14:34:33Z)