SSSegmenation: An Open Source Supervised Semantic Segmentation Toolbox Based on PyTorch
- URL: http://arxiv.org/abs/2305.17091v1
- Date: Fri, 26 May 2023 17:02:42 GMT
- Title: SSSegmenation: An Open Source Supervised Semantic Segmentation Toolbox Based on PyTorch
- Authors: Zhenchao Jin
- Abstract summary: SSSegmenation is an open source supervised semantic image segmentation toolbox based on PyTorch.
Its design is motivated by MMSegmentation, but it is easier to use because it has fewer dependencies, and it achieves superior segmentation performance under a comparable training and testing setup.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents SSSegmenation, an open source supervised semantic image segmentation toolbox based on PyTorch. The design of this toolbox is motivated by MMSegmentation, but it is easier to use because it has fewer dependencies, and it achieves superior segmentation performance under a comparable training and testing setup. Moreover, the toolbox provides plenty of trained weights for popular and contemporary semantic segmentation methods, including Deeplab, PSPNet, OCRNet, MaskFormer, etc. We expect that this toolbox can contribute to the future development of semantic segmentation. Code and model zoo are available at https://github.com/SegmentationBLWX/sssegmentation/.
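As a concrete illustration of the supervised setup the toolbox implements, below is a minimal PyTorch sketch of a per-pixel cross-entropy training step. This is not SSSegmenation's actual API (the toolbox is config-driven; see its repository); the tiny model, class count, and ignore index are assumptions for the example.

```python
# Minimal sketch of supervised semantic segmentation training (NOT the
# SSSegmenation API): per-pixel cross-entropy between class logits and
# ground-truth label maps. Model, sizes, and data are dummies.
import torch
import torch.nn as nn

num_classes, ignore_index = 21, 255  # e.g. PASCAL VOC; 255 marks unlabeled pixels

# Stand-in network: anything mapping (N, 3, H, W) -> (N, num_classes, H, W).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, num_classes, 1),
)
criterion = nn.CrossEntropyLoss(ignore_index=ignore_index)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(2, 3, 64, 64)                   # dummy image batch
labels = torch.randint(0, num_classes, (2, 64, 64))  # per-pixel class ids

logits = model(images)                               # (N, num_classes, H, W)
loss = criterion(logits, labels)                     # averaged over labeled pixels
optimizer.zero_grad()
loss.backward()
optimizer.step()
```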
Related papers
- Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation [90.35249276717038]
We propose WeCLIP, a CLIP-based single-stage pipeline, for weakly supervised semantic segmentation.
Specifically, the frozen CLIP model is applied as the backbone for semantic feature extraction.
A new decoder is designed to interpret the extracted semantic features for the final prediction.
arXiv Detail & Related papers (2024-06-17T03:49:47Z) - Unsupervised Universal Image Segmentation [59.0383635597103]
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks by leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z) - SAMBA: A Trainable Segmentation Web-App with Smart Labelling [0.0]
- SAMBA: A Trainable Segmentation Web-App with Smart Labelling [0.0]
SAMBA is a trainable segmentation tool that uses Meta's Segment Anything Model (SAM) for fast, high-quality label suggestions.
The segmentation backend runs in the cloud, so the user does not need powerful hardware.
arXiv Detail & Related papers (2023-12-07T10:31:05Z) - SAF-IS: a Spatial Annotation Free Framework for Instance Segmentation of
- SAF-IS: A Spatial Annotation Free Framework for Instance Segmentation of Surgical Tools [10.295921059528636]
We develop a framework for instance segmentation that does not rely on spatial annotations for training.
Our solution only requires binary tool masks, obtainable using recent unsupervised approaches, and binary tool presence labels.
We validate our framework on the EndoVis 2017 and 2018 segmentation datasets.
arXiv Detail & Related papers (2023-09-04T17:13:06Z) - HGFormer: Hierarchical Grouping Transformer for Domain Generalized
Semantic Segmentation [113.6560373226501]
This work studies semantic segmentation under the domain generalization setting.
We propose a novel hierarchical grouping transformer (HGFormer) to explicitly group pixels to form part-level masks and then whole-level masks.
Experiments show that HGFormer yields more robust semantic segmentation results than per-pixel classification methods and flat grouping transformers.
arXiv Detail & Related papers (2023-05-22T13:33:41Z) - SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary
- SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation [26.079055078561986]
We propose a CLIP-based model named SegCLIP for open-vocabulary segmentation.
The main idea is to gather patches into semantic regions with learnable centers through training on text-image pairs.
Experimental results show that our model achieves comparable or superior segmentation accuracy.
arXiv Detail & Related papers (2022-11-27T12:38:52Z) - Task-Adaptive Feature Transformer with Semantic Enrichment for Few-Shot
- Task-Adaptive Feature Transformer with Semantic Enrichment for Few-Shot Segmentation [21.276981570672064]
Few-shot learning allows machines to classify novel classes using only a few labeled samples.
We propose a learnable module that can be placed on top of existing segmentation networks for performing few-shot segmentation.
Experiments on the PASCAL-$5^i$ and COCO-$20^i$ datasets confirm that the added modules successfully extend the capability of existing segmentation networks.
arXiv Detail & Related papers (2022-02-14T06:16:26Z) - Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on par with it on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z) - Learning Class-Agnostic Pseudo Mask Generation for Box-Supervised
- Learning Class-Agnostic Pseudo Mask Generation for Box-Supervised Semantic Segmentation [156.9155100983315]
We seek a more accurate learning-based class-agnostic pseudo mask generator tailored to box-supervised semantic segmentation.
Our method can further close the performance gap between box-supervised and fully-supervised models.
arXiv Detail & Related papers (2021-03-09T14:54:54Z) - Semantically Meaningful Class Prototype Learning for One-Shot Image
Semantic Segmentation [58.96902899546075]
One-shot semantic image segmentation aims to segment the object regions for the novel class with only one annotated image.
Recent works adopt the episodic training strategy to mimic the expected situation at testing time.
We propose to leverage multi-class label information during episodic training, which encourages the network to generate more semantically meaningful features for each category.
arXiv Detail & Related papers (2021-02-22T12:07:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.