Causal Unsupervised Semantic Segmentation
- URL: http://arxiv.org/abs/2310.07379v1
- Date: Wed, 11 Oct 2023 10:54:44 GMT
- Title: Causal Unsupervised Semantic Segmentation
- Authors: Junho Kim, Byung-Kwan Lee, Yong Man Ro
- Abstract summary: Unsupervised semantic segmentation aims to achieve high-quality semantic grouping without human-labeled annotations.
We propose a novel framework, CAusal Unsupervised Semantic sEgmentation (CAUSE), which leverages insights from causal inference.
- Score: 60.178274138753174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised semantic segmentation aims to achieve high-quality semantic
grouping without human-labeled annotations. With the advent of self-supervised
pre-training, various frameworks utilize the pre-trained features to train
prediction heads for unsupervised dense prediction. However, a significant
challenge in this unsupervised setup is determining the appropriate level of
clustering required for segmenting concepts. To address it, we propose a novel
framework, CAusal Unsupervised Semantic sEgmentation (CAUSE), which leverages
insights from causal inference. Specifically, we bridge an intervention-oriented
approach (i.e., frontdoor adjustment) to define suitable two-step tasks for
unsupervised prediction. The first step involves constructing a concept
clusterbook as a mediator, which represents possible concept prototypes at
different levels of granularity in a discretized form. Then, the mediator
establishes an explicit link to the subsequent concept-wise self-supervised
learning for pixel-level grouping. Through extensive experiments and analyses
on various datasets, we corroborate the effectiveness of CAUSE and achieve
state-of-the-art performance in unsupervised semantic segmentation.
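As a point of reference, the frontdoor adjustment the abstract invokes has a standard form; with the concept clusterbook playing the role of the mediator M between an image representation X and the segmentation outcome Y, it reads:

```latex
P\bigl(Y \mid \mathrm{do}(X = x)\bigr)
  = \sum_{m} P(M = m \mid X = x)
    \sum_{x'} P\bigl(Y \mid X = x', M = m\bigr)\, P(X = x')
```

The first sum corresponds to step one (discretizing features into concept prototypes), the second to step two (concept-wise grouping conditioned on those prototypes). This reading is our gloss on the abstract, not the paper's exact notation.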
Related papers
- A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z)
- Unsupervised Hierarchical Semantic Segmentation with Multiview Cosegmentation and Clustering Transformers [47.45830503277631]
Grouping naturally has levels of granularity, creating ambiguity in unsupervised segmentation.
We deliver the first data-driven unsupervised hierarchical semantic segmentation method called Hierarchical Segment Grouping (HSG)
arXiv Detail & Related papers (2022-04-25T04:40:46Z) - Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Unsupervised Cross-Lingual Transfer of Structured Predictors without Source Data [37.1075911292287]
We show that the means of aggregating over the input models is critical: multiplying marginal probabilities of substructures to obtain high-probability structures for distant supervision is substantially better than taking the union over the input models.
Testing on 18 languages, we demonstrate that the method works in a cross-lingual setting, considering both dependency parsing and part-of-speech structured prediction problems.
Our analyses show that the proposed method produces less noisy labels for the distant supervision.
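The aggregation contrast described above can be sketched in a few lines. The function below (an illustrative toy, not the authors' implementation) combines per-token label marginals from several source models by multiplying them, i.e. summing log-probabilities, and takes the per-token argmax as the distant-supervision label:

```python
from math import log

def aggregate_marginals(marginals):
    """Combine per-token label marginals from several source models.

    marginals: list (one entry per model) of per-token label
    distributions. Multiplying marginals across models (sum of logs)
    keeps only labels that every model finds plausible; the per-token
    argmax then serves as a distant-supervision label.
    """
    num_tokens = len(marginals[0])
    num_labels = len(marginals[0][0])
    labels = []
    for t in range(num_tokens):
        scores = [sum(log(m[t][y] + 1e-12) for m in marginals)
                  for y in range(num_labels)]
        labels.append(max(range(num_labels), key=scores.__getitem__))
    return labels

# Two toy "source models" over 3 tokens and 2 labels.
m1 = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]
m2 = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
print(aggregate_marginals([m1, m2]))  # → [0, 0, 1]
```

Note how the second token flips to label 0: one model weakly prefers label 1, but the product rewards joint agreement, whereas a union of per-model argmaxes would keep both candidates.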
arXiv Detail & Related papers (2021-10-08T02:46:34Z)
- Semi-Supervised Segmentation of Concrete Aggregate Using Consensus Regularisation and Prior Guidance [2.1749194587826026]
We propose a novel semi-supervised framework for semantic segmentation, introducing additional losses based on prior knowledge.
Experiments performed on our "concrete aggregate dataset" demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-04-22T13:01:28Z)
- Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals [78.12377360145078]
We introduce a novel two-step framework that adopts a predetermined prior in a contrastive optimization objective to learn pixel embeddings.
This marks a large deviation from existing works that relied on proxy tasks or end-to-end clustering.
In particular, when fine-tuning the learned representations using just 1% of labeled examples on PASCAL, we outperform supervised ImageNet pre-training by 7.1% mIoU.
arXiv Detail & Related papers (2021-02-11T18:54:47Z)
- Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) aims to learn classification models that make predictions for unlabeled data on a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z)
- Unsupervised Part Discovery by Unsupervised Disentanglement [10.664434993386525]
Part segmentations provide information about part localizations on the level of individual pixels.
Large annotation costs limit the scalability of supervised algorithms to other object categories.
Our work demonstrates the feasibility to discover semantic part segmentations without supervision.
arXiv Detail & Related papers (2020-09-09T12:34:37Z)
- Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
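A minimal sketch of the rectification idea, using normalized predictive entropy as the uncertainty signal (the paper itself derives uncertainty differently; this entropy variant is only illustrative):

```python
from math import log

def rectified_pseudo_label_weight(probs):
    """Down-weight a pseudo label when the prediction is uncertain.

    probs: predicted class distribution for one target-domain pixel.
    Returns (pseudo_label, weight). The weight shrinks toward 0 as the
    normalized entropy of the prediction grows, so uncertain pseudo
    labels contribute less to the training loss instead of being
    trusted as ground truth.
    """
    k = len(probs)
    entropy = -sum(p * log(p + 1e-12) for p in probs)
    normalized = entropy / log(k)          # in [0, 1]
    pseudo_label = max(range(k), key=probs.__getitem__)
    weight = max(0.0, 1.0 - normalized)
    return pseudo_label, weight

# A confident prediction keeps most of its weight ...
label, w = rectified_pseudo_label_weight([0.95, 0.03, 0.02])
# ... while a near-uniform one is heavily discounted.
label2, w2 = rectified_pseudo_label_weight([0.4, 0.35, 0.25])
```

The per-pixel cross-entropy against the pseudo label would then be multiplied by this weight, which is the general shape of uncertainty-rectified pseudo-label training.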
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.