Human-Centered Unsupervised Segmentation Fusion
- URL: http://arxiv.org/abs/2007.11361v1
- Date: Wed, 22 Jul 2020 12:18:31 GMT
- Title: Human-Centered Unsupervised Segmentation Fusion
- Authors: Gregor Koporec and Janez Perš
- Abstract summary: We introduce a new segmentation fusion model that is based on K-Modes clustering.
Results obtained from publicly available datasets with human ground truth segmentations clearly show that our model outperforms the state-of-the-art on human segmentations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation is generally an ill-posed problem: it admits multiple
valid solutions, which makes it hard to define ground truth data for evaluating
algorithms. The problem can be naively sidestepped by using only one annotator
per image, but such an acquisition does not represent how the majority of
people cognitively perceive an image. Nowadays it is not difficult to obtain
multiple segmentations through crowdsourcing, so the only remaining problem is
how to derive one ground truth segmentation per image. Numerous algorithmic
solutions already exist, but most are supervised or do not account for
per-annotator confidence. In this paper, we introduce a new
segmentation fusion model that is based on K-Modes clustering. Results obtained
from publicly available datasets with human ground truth segmentations clearly
show that our model outperforms the state-of-the-art on human segmentations.
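The core fusion idea can be illustrated with a minimal sketch. For a single K-Modes cluster, the centroid at each pixel is simply the mode (most frequent label) across annotators; the toy function below fuses aligned label maps that way. This is an illustrative simplification, not the paper's full model: it assumes segment labels are already in correspondence across annotators and ignores the per-annotator confidence weighting the paper considers.

```python
import numpy as np

def fuse_segmentations(segs):
    """Fuse annotator label maps by the per-pixel mode.

    segs: (n_annotators, H, W) integer array. Labels are assumed to be
    aligned across annotators -- an assumption that real fusion methods,
    including the paper's K-Modes model, must handle explicitly.
    """
    segs = np.asarray(segs)
    n, h, w = segs.shape
    flat = segs.reshape(n, -1)          # one column per pixel
    fused = np.empty(flat.shape[1], dtype=segs.dtype)
    for i in range(flat.shape[1]):
        labels, counts = np.unique(flat[:, i], return_counts=True)
        fused[i] = labels[np.argmax(counts)]  # most frequent label wins
    return fused.reshape(h, w)

# Three annotators label a 2x2 image; the fused map takes the majority
# label at each pixel.
segs = [[[0, 1], [1, 1]],
        [[0, 1], [0, 1]],
        [[0, 0], [1, 1]]]
print(fuse_segmentations(segs).tolist())  # [[0, 1], [1, 1]]
```

Ties are broken here by the smallest label (via `np.unique` ordering); a confidence-weighted variant would replace the raw counts with per-annotator weights.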
Related papers
- UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation [64.01742988773745]
An increasing privacy concern exists regarding training large-scale image segmentation models on unauthorized private data.
We exploit the concept of unlearnable examples to make images unusable to model training by generating and adding unlearnable noise into the original images.
We empirically verify the effectiveness of UnSeg across 6 mainstream image segmentation tasks, 10 widely used datasets, and 7 different network architectures.
arXiv Detail & Related papers (2024-10-13T16:34:46Z)
- Image Segmentation in Foundation Model Era: A Survey [99.19456390358211]
Current research in image segmentation lacks a detailed analysis of distinct characteristics, challenges, and solutions associated with these advancements.
This survey seeks to fill this gap by providing a thorough review of cutting-edge research centered around FM-driven image segmentation.
An exhaustive overview of over 300 segmentation approaches is provided to encapsulate the breadth of current research efforts.
arXiv Detail & Related papers (2024-08-23T10:07:59Z)
- Unsupervised Universal Image Segmentation [59.0383635597103]
We propose an Unsupervised Universal model (U2Seg) adept at performing various image segmentation tasks.
U2Seg generates pseudo semantic labels for these segmentation tasks by leveraging self-supervised models.
We then self-train the model on these pseudo semantic labels, yielding substantial performance gains.
arXiv Detail & Related papers (2023-12-28T18:59:04Z)
- Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages the existing pretrained vision-language model (VL) to train semantic segmentation models.
ZeroSeg overcomes this by distilling the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image.
Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data.
arXiv Detail & Related papers (2023-06-01T08:47:06Z)
- Duo-SegNet: Adversarial Dual-Views for Semi-Supervised Medical Image Segmentation [14.535295064959746]
We propose a semi-supervised image segmentation technique based on the concept of multi-view learning.
Our proposed method outperforms state-of-the-art medical image segmentation algorithms consistently and comfortably.
arXiv Detail & Related papers (2021-08-25T10:16:12Z)
- Personalized Image Semantic Segmentation [58.980245748434]
We generate more accurate segmentation results on unlabeled personalized images by investigating the data's personalized traits.
We propose a baseline method that incorporates the inter-image context when segmenting certain images.
The code and the PIS dataset will be made publicly available.
arXiv Detail & Related papers (2021-07-24T04:03:11Z)
- Exposing Semantic Segmentation Failures via Maximum Discrepancy Competition [102.75463782627791]
We take steps toward answering the question by exposing failures of existing semantic segmentation methods in the open visual world.
Inspired by previous research on model falsification, we start from an arbitrarily large image set, and automatically sample a small image set by MAximizing the Discrepancy (MAD) between two segmentation methods.
The selected images have the greatest potential in falsifying either (or both) of the two methods.
A segmentation method whose failures are more difficult to expose in the MAD competition is considered better.
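The MAD sampling step described above can be sketched as follows: score each image by how much the two segmenters disagree on it, then keep the top-k. This is a schematic reconstruction from the abstract, using pixel-wise label disagreement as a stand-in discrepancy measure; the paper's actual metric and sampling procedure may differ.

```python
import numpy as np

def mad_select(segs_a, segs_b, k):
    """Pick the k images on which two segmentation methods disagree most.

    segs_a, segs_b: lists of same-shape integer label maps, one pair per
    image. Discrepancy is the fraction of pixels labeled differently
    (a stand-in for the discrepancy measure used in the paper).
    """
    scores = [float(np.mean(a != b)) for a, b in zip(segs_a, segs_b)]
    # Indices of the k largest discrepancies, most discrepant first.
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:k]]

# Method A labels everything 0; method B disagrees on some images.
a = [np.zeros((2, 2), dtype=int)] * 3
b = [np.zeros((2, 2), dtype=int),           # identical: score 0.0
     np.ones((2, 2), dtype=int),            # fully different: 1.0
     np.array([[0, 1], [0, 0]])]            # one pixel differs: 0.25
print(mad_select(a, b, 2))  # [1, 2]
```

The selected images are exactly those with "the greatest potential in falsifying either (or both) of the two methods".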
arXiv Detail & Related papers (2021-02-27T16:06:25Z)
- Information-Theoretic Segmentation by Inpainting Error Maximization [30.520622129165456]
We group image pixels into foreground and background, with the goal of minimizing predictability of one set from the other.
Our method does not involve training deep networks, is computationally cheap, class-agnostic, and even applicable in isolation to a single unlabeled image.
arXiv Detail & Related papers (2020-12-14T06:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.