Human-Centered Unsupervised Segmentation Fusion
- URL: http://arxiv.org/abs/2007.11361v1
- Date: Wed, 22 Jul 2020 12:18:31 GMT
- Title: Human-Centered Unsupervised Segmentation Fusion
- Authors: Gregor Koporec and Janez Perš
- Abstract summary: We introduce a new segmentation fusion model that is based on K-Modes clustering.
Results obtained from publicly available datasets with human ground truth segmentations clearly show that our model outperforms the state-of-the-art on human segmentations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation is generally an ill-posed problem: an image admits
multiple valid segmentations, which makes it hard to define ground truth data
for evaluating algorithms. The problem can be naively sidestepped by using only
one annotator per image, but a single annotation does not represent how the
majority of people cognitively perceive the image. Nowadays it is easy to
obtain multiple segmentations through crowdsourcing, so the remaining problem
is how to derive one ground truth segmentation per image. Numerous algorithmic
solutions already exist, but most are supervised or do not account for the
confidence of each human segmentation. In this paper, we introduce a new
segmentation fusion model based on K-Modes clustering. Results obtained from
publicly available datasets with human ground truth segmentations clearly show
that our model outperforms the state of the art on human segmentations.
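The abstract names K-Modes clustering as the basis of the fusion model but does not give its details (e.g. how per-annotator confidence enters). As a minimal, generic sketch of the underlying K-Modes idea only: each sample is a flattened categorical segmentation (one row per annotator), cluster centers are feature-wise modes rather than means, and distance is Hamming distance. The function name and farthest-point initialization are our own choices, not the paper's.

```python
import numpy as np

def k_modes(X, k, n_iter=10, seed=0):
    """Minimal K-Modes: categorical analogue of K-Means.

    X : (n_samples, n_features) integer array of categorical codes,
        e.g. flattened per-annotator label maps.
    Returns (modes, labels).
    """
    rng = np.random.default_rng(seed)
    # Farthest-point initialization: start from a random sample, then
    # repeatedly add the sample farthest (in Hamming distance) from the
    # modes chosen so far.
    modes = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d = np.min([(X != m).sum(axis=1) for m in modes], axis=0)
        modes.append(X[d.argmax()])
    modes = np.array(modes)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each sample to the mode with the smallest Hamming distance.
        dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = dist.argmin(axis=1)
        # Update each mode feature-wise to the most frequent value.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                for f in range(X.shape[1]):
                    vals, counts = np.unique(members[:, f], return_counts=True)
                    modes[j, f] = vals[counts.argmax()]
    return modes, labels
```

In a fusion setting one could, for instance, cluster the annotators' segmentations and read off the mode of the dominant cluster as the fused result; the actual model in the paper may differ substantially.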
Related papers
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks by leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z)
- Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages existing pretrained vision-language (VL) models to train semantic segmentation models.
ZeroSeg sidesteps the need for human labels by distilling the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image.
Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data.
arXiv Detail & Related papers (2023-06-01T08:47:06Z)
- Self-Supervised Instance Segmentation by Grasping [84.2469669256257]
We learn a grasp segmentation model to segment the grasped object from before and after grasp images.
Using the segmented objects, we can "cut" objects from their original scenes and "paste" them into new scenes to generate instance supervision.
We show that our grasp segmentation model provides a 5x error reduction when segmenting grasped objects compared with traditional image subtraction approaches.
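The "cut and paste" step described above can be illustrated with a small sketch: copy the masked object pixels into a new scene and carry the mask along as the synthesized instance label. The function name and the simple grayscale setup are ours, not the paper's; the original pipeline operates on real grasp images.

```python
import numpy as np

def cut_and_paste(obj_img, obj_mask, scene, y, x):
    """Paste a segmented object crop into a scene at (y, x).

    obj_img  : (h, w) object crop.
    obj_mask : (h, w) boolean mask of the object within the crop.
    scene    : (H, W) target image (left unmodified).
    Returns (composite image, instance mask for the pasted object).
    """
    out = scene.copy()
    h, w = obj_mask.shape
    # Boolean-mask assignment on the view writes through to `out`.
    region = out[y:y + h, x:x + w]
    region[obj_mask] = obj_img[obj_mask]
    # The pasted mask becomes free instance supervision for the new scene.
    new_mask = np.zeros(scene.shape[:2], dtype=bool)
    new_mask[y:y + h, x:x + w] = obj_mask
    return out, new_mask
```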
arXiv Detail & Related papers (2023-05-10T16:51:36Z)
- FixMatchSeg: Fixing FixMatch for Semi-Supervised Semantic Segmentation [0.24366811507669117]
Supervised deep learning methods for semantic medical image segmentation have become increasingly popular in recent years.
In resource-constrained settings, obtaining a large number of annotated images is very difficult because annotation mostly requires experts.
In this work, we adapt a state-of-the-art semi-supervised classification method, FixMatch, to the semantic segmentation task, introducing FixMatchSeg.
arXiv Detail & Related papers (2022-07-31T09:14:52Z)
- Duo-SegNet: Adversarial Dual-Views for Semi-Supervised Medical Image Segmentation [14.535295064959746]
We propose a semi-supervised image segmentation technique based on the concept of multi-view learning.
Our proposed method outperforms state-of-the-art medical image segmentation algorithms consistently and comfortably.
arXiv Detail & Related papers (2021-08-25T10:16:12Z)
- Personalized Image Semantic Segmentation [58.980245748434]
We generate more accurate segmentation results on unlabeled personalized images by investigating the data's personalized traits.
We propose a baseline method that incorporates the inter-image context when segmenting certain images.
The code and the PIS dataset will be made publicly available.
arXiv Detail & Related papers (2021-07-24T04:03:11Z)
- Exposing Semantic Segmentation Failures via Maximum Discrepancy Competition [102.75463782627791]
We take steps toward answering the question by exposing failures of existing semantic segmentation methods in the open visual world.
Inspired by previous research on model falsification, we start from an arbitrarily large image set, and automatically sample a small image set by MAximizing the Discrepancy (MAD) between two segmentation methods.
The selected images have the greatest potential in falsifying either (or both) of the two methods.
A segmentation method whose failures are more difficult to expose in the MAD competition is considered better.
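The MAD sampling step described above can be sketched in a few lines: score each image by how strongly two segmenters disagree on it, then keep the highest-scoring images. This is only an illustration; the discrepancy measure here is plain per-pixel disagreement, whereas the original method can use any segmentation distance, and the function name is ours.

```python
import numpy as np

def mad_select(images, seg_a, seg_b, n_select):
    """Select the images on which two segmenters disagree most.

    seg_a, seg_b : callables mapping an image to a label map of the same shape.
    Returns (selected images, their discrepancy scores), highest first.
    """
    # Discrepancy = fraction of pixels where the two label maps differ.
    scores = [float(np.mean(seg_a(im) != seg_b(im))) for im in images]
    order = np.argsort(scores)[::-1][:n_select]
    return [images[i] for i in order], [scores[i] for i in order]
```

With two toy threshold segmenters, the images whose values fall between the two thresholds surface first, which is exactly the falsifying behavior the competition relies on.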
arXiv Detail & Related papers (2021-02-27T16:06:25Z)
- Information-Theoretic Segmentation by Inpainting Error Maximization [30.520622129165456]
We group image pixels into foreground and background, with the goal of minimizing predictability of one set from the other.
Our method does not involve training deep networks, is computationally cheap, class-agnostic, and even applicable in isolation to a single unlabeled image.
arXiv Detail & Related papers (2020-12-14T06:42:27Z)
- DenoiSeg: Joint Denoising and Segmentation [75.91760529986958]
We propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations.
We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations.
arXiv Detail & Related papers (2020-05-06T17:42:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.