CAFS: Class Adaptive Framework for Semi-Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/2303.11606v1
- Date: Tue, 21 Mar 2023 05:56:53 GMT
- Title: CAFS: Class Adaptive Framework for Semi-Supervised Semantic Segmentation
- Authors: Jingi Ju, Hyeoncheol Noh, Yooseung Wang, Minseok Seo, Dong-Geol Choi
- Abstract summary: Semi-supervised semantic segmentation learns a model for classifying pixels into specific classes using a few labeled samples and numerous unlabeled images.
We propose a class-adaptive semi-supervision framework for semi-supervised semantic segmentation (CAFS).
CAFS constructs a validation set from the labeled dataset to leverage the calibration performance of each class.
- Score: 5.484296906525601
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised semantic segmentation learns a model that classifies pixels
into specific classes using a few labeled samples and numerous unlabeled
images. The recent leading approach is consistency regularization via
self-training, which pseudo-labels high-confidence pixels of unlabeled
images. However, using only high-confidence pixels for self-training may
discard much of the information in the unlabeled datasets because modern deep
networks are poorly calibrated. In this paper, we propose a class-adaptive
semi-supervision framework for semi-supervised semantic segmentation (CAFS)
to cope with the information loss that occurs in existing
high-confidence-based pseudo-labeling methods. Unlike existing semi-supervised
semantic segmentation frameworks, CAFS constructs a validation set from the
labeled dataset to measure the calibration performance of each class. On this
basis, we propose calibration-aware class-wise adaptive thresholding and
class-wise adaptive oversampling, both driven by the analysis results from the
validation set. Our proposed CAFS achieves state-of-the-art performance on the
full data partition of the base PASCAL VOC 2012 dataset and on the 1/4 data
partition of the Cityscapes dataset, achieving 83.0% and 80.4%, respectively.
The code is available at https://github.com/cjf8899/CAFS.
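To make the core idea concrete, here is a minimal PyTorch sketch of class-wise adaptive thresholding for pseudo-labels. The threshold rule and the per-class quality statistic are illustrative assumptions (the paper derives its thresholds from calibration analysis on a held-out labeled validation set); the function names are hypothetical and this is not the authors' implementation.

```python
# Minimal sketch of class-wise adaptive pseudo-label thresholding.
# The schedule below is illustrative, not the authors' exact rule: classes
# with poor validation statistics get a lower confidence threshold, so fewer
# of their pixels are discarded during self-training.
import torch

def classwise_thresholds(base_tau, class_quality):
    """class_quality: tensor [C] in [0, 1], e.g. a per-class validation score.
    Returns a per-class confidence threshold in [0.5, base_tau]."""
    return 0.5 + (base_tau - 0.5) * class_quality

def pseudo_label(logits, thresholds):
    """logits: [B, C, H, W] teacher predictions on unlabeled images.
    Returns hard labels [B, H, W] and a mask of pixels whose confidence
    passes their class-specific threshold."""
    probs = torch.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)             # [B, H, W]
    tau = thresholds.to(logits.device)[labels]  # per-pixel threshold
    mask = conf >= tau
    return labels, mask

# Usage: poorly calibrated classes keep more (lower-confidence) pixels.
C = 21
quality = torch.rand(C)          # hypothetical per-class validation statistics
tau_c = classwise_thresholds(0.95, quality)
logits = torch.randn(2, C, 64, 64)
labels, mask = pseudo_label(logits, tau_c)
```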
Related papers
- A Lightweight Clustering Framework for Unsupervised Semantic
Segmentation [28.907274978550493]
Unsupervised semantic segmentation aims to categorize each pixel in an image into a corresponding class without the use of annotated data.
We propose a lightweight clustering framework for unsupervised semantic segmentation.
Our framework achieves state-of-the-art results on PASCAL VOC and MS COCO datasets.
arXiv Detail & Related papers (2023-11-30T15:33:42Z) - JointMatch: A Unified Approach for Diverse and Collaborative
Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning methods.
arXiv Detail & Related papers (2023-10-23T05:43:35Z) - Estimating label quality and errors in semantic segmentation data via
any model [19.84626033109009]
We study methods to score label quality, such that the images with the lowest scores are least likely to be correctly labeled.
This helps prioritize what data to review in order to ensure a high-quality training/evaluation dataset.
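As one concrete way such a score could be computed from any trained segmentation model, the sketch below averages the model's predicted probability of the annotated class over the labeled pixels of an image; it is an illustrative estimator in this spirit, not necessarily the paper's exact scoring rule.

```python
# Illustrative label-quality score for segmentation: the average predicted
# probability of the annotated class over labeled pixels. Images with the
# lowest scores are flagged for review first.
import numpy as np

def image_label_quality(probs, mask, ignore_index=255):
    """probs: [C, H, W] softmax output of any trained model.
    mask:  [H, W] integer annotation to be audited.
    Returns a scalar in [0, 1]; lower means more likely mislabeled."""
    valid = mask != ignore_index
    if not valid.any():
        return 1.0
    h, w = np.nonzero(valid)
    return float(probs[mask[h, w], h, w].mean())
```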
arXiv Detail & Related papers (2023-07-11T07:29:09Z) - High-fidelity Pseudo-labels for Boosting Weakly-Supervised Segmentation [17.804090651425955]
Image-level weakly-supervised segmentation (WSSS) reduces the usually vast data annotation cost by relying on surrogate segmentation masks during training.
Our work is based on two techniques for improving CAMs: importance sampling, which is a substitute for GAP, and a feature similarity loss.
We reformulate both techniques based on binomial posteriors of multiple independent binary problems.
This has two benefits: their performance is improved and they become more general, resulting in an add-on method that can boost virtually any WSSS method.
arXiv Detail & Related papers (2023-04-05T17:43:57Z) - Contrastive Model Adaptation for Cross-Condition Robustness in Semantic
Segmentation [58.17907376475596]
We investigate normal-to-adverse condition model adaptation for semantic segmentation.
Our method -- CMA -- leverages image pairs of the same scene captured under normal and adverse conditions to learn condition-invariant features via contrastive learning.
We achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks.
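A generic contrastive objective over such pairs can be sketched as follows: embeddings of corresponding normal/adverse crops are treated as positives and all other crops in the batch as negatives. This is a standard InfoNCE formulation for illustration, not CMA's exact patch-level matching scheme.

```python
# Generic InfoNCE sketch for learning condition-invariant features from paired
# normal/adverse images of the same scene.
import torch
import torch.nn.functional as F

def paired_info_nce(z_normal, z_adverse, temperature=0.1):
    """z_normal, z_adverse: [B, D] embeddings of corresponding crops."""
    z_n = F.normalize(z_normal, dim=1)
    z_a = F.normalize(z_adverse, dim=1)
    logits = z_n @ z_a.t() / temperature   # [B, B], diagonal entries = positives
    targets = torch.arange(z_n.size(0), device=z_n.device)
    return F.cross_entropy(logits, targets)
```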
arXiv Detail & Related papers (2023-03-09T11:48:29Z) - Dense FixMatch: a simple semi-supervised learning method for pixel-wise
prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples.
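The sketch below illustrates a dense, FixMatch-style unlabeled loss: pseudo-labels from a weakly augmented view supervise the strongly augmented view after being mapped through the same spatial transform (reduced here to an optional horizontal flip). This is a simplified illustration, not the paper's full matching operation.

```python
# Sketch of a dense FixMatch-style unlabeled loss with pixel-aligned
# pseudo-labels. Only a horizontal flip is handled, for brevity.
import torch
import torch.nn.functional as F

def dense_fixmatch_loss(model, x_weak, x_strong, flipped, tau=0.95):
    """x_weak, x_strong: [B, 3, H, W]; flipped: [B] bool, whether the strong
    view was horizontally flipped relative to the weak view."""
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)                  # [B, H, W]
    # match pseudo-labels to the strong view's geometry
    pseudo = torch.where(flipped[:, None, None], pseudo.flip(-1), pseudo)
    conf = torch.where(flipped[:, None, None], conf.flip(-1), conf)
    logits_s = model(x_strong)
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")  # [B, H, W]
    return (loss * (conf >= tau)).mean()
```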
arXiv Detail & Related papers (2022-10-18T15:02:51Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
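The view-generation step can be sketched in a few lines: a random subset of features is replaced with values drawn from other rows of the batch (an approximation of sampling from each feature's empirical marginal). The corruption rate and sampling scheme here are illustrative.

```python
# Sketch of SCARF-style view generation by random feature corruption.
import torch

def scarf_corrupt(x, corruption_rate=0.6):
    """x: [B, D] batch of tabular features. Returns a corrupted view."""
    b, d = x.shape
    mask = torch.rand(b, d, device=x.device) < corruption_rate
    # for each entry, sample a replacement value from a random row of the batch
    rand_rows = torch.randint(0, b, (b, d), device=x.device)
    replacements = x[rand_rows, torch.arange(d, device=x.device)]
    return torch.where(mask, replacements, x)
```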
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive
Learning from a Class-wise Memory Bank [5.967279020820772]
We propose a novel representation learning module based on contrastive learning.
This module encourages the segmentation network to yield similar pixel-level feature representations for same-class samples.
In an end-to-end training, the features from both labeled and unlabeled data are optimized to be similar to same-class samples from the memory bank.
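A minimal sketch of such a class-wise memory bank and a pull-toward-same-class loss is given below; the bank update rule and the cosine-based loss are simplified stand-ins, not the paper's exact formulation.

```python
# Minimal sketch of a class-wise memory bank for pixel-level feature learning:
# pixel embeddings are pulled toward stored same-class features.
import torch
import torch.nn.functional as F

class ClassMemoryBank:
    def __init__(self, num_classes, bank_size, dim):
        self.bank = torch.zeros(num_classes, bank_size, dim)
        self.ptr = torch.zeros(num_classes, dtype=torch.long)
        self.size = bank_size

    @torch.no_grad()
    def push(self, feats, labels):
        """feats: [N, D] pixel features; labels: [N] their classes."""
        for c in labels.unique():
            f = F.normalize(feats[labels == c], dim=1)
            n = min(len(f), self.size)
            idx = (self.ptr[c] + torch.arange(n)) % self.size
            self.bank[c, idx] = f[:n]
            self.ptr[c] = (self.ptr[c] + n) % self.size

def pull_loss(feats, labels, bank):
    """Encourage each pixel feature to match same-class bank entries."""
    feats = F.normalize(feats, dim=1)
    proto = F.normalize(bank.bank[labels].mean(dim=1), dim=1)  # [N, D]
    return (1 - (feats * proto).sum(dim=1)).mean()
```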
arXiv Detail & Related papers (2021-04-27T18:19:33Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can help address domain shifts.
To explore the full potential of these labeled target samples (landmarks), we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
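A hedged sketch of the prototype computation and an alignment loss follows; the cross-entropy over cosine similarities is an illustrative stand-in for the paper's PA objective, and all names are hypothetical.

```python
# Sketch of prototypical alignment: one prototype per class is computed from
# the few labeled target samples ("landmarks"), and features of perturbed
# labeled images are pulled toward their class prototype.
import torch
import torch.nn.functional as F

def class_prototypes(landmark_feats, landmark_labels, num_classes):
    """landmark_feats: [N, D]; landmark_labels: [N]. Returns [C, D] prototypes."""
    protos = torch.zeros(num_classes, landmark_feats.size(1))
    for c in range(num_classes):
        sel = landmark_labels == c
        if sel.any():
            protos[c] = landmark_feats[sel].mean(dim=0)
    return F.normalize(protos, dim=1)

def alignment_loss(feats, labels, prototypes, temperature=0.1):
    """Cross-entropy over cosine similarities to the class prototypes."""
    logits = F.normalize(feats, dim=1) @ prototypes.t() / temperature
    return F.cross_entropy(logits, labels)
```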
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive
Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)