Instance-specific and Model-adaptive Supervision for Semi-supervised
Semantic Segmentation
- URL: http://arxiv.org/abs/2211.11335v1
- Date: Mon, 21 Nov 2022 10:37:28 GMT
- Title: Instance-specific and Model-adaptive Supervision for Semi-supervised
Semantic Segmentation
- Authors: Zhen Zhao and Sifan Long and Jimin Pi and Jingdong Wang and Luping
Zhou
- Abstract summary: We propose an instance-specific and model-adaptive supervision for semi-supervised semantic segmentation, named iMAS.
iMAS learns from unlabeled instances progressively by weighing their corresponding consistency losses based on the evaluated hardness.
- Score: 49.82432158155329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, semi-supervised semantic segmentation has achieved promising
performance with a small fraction of labeled data. However, most existing
studies treat all unlabeled data equally and barely consider the differences
and training difficulties among unlabeled instances. Differentiating unlabeled
instances can promote instance-specific supervision to adapt to the model's
evolution dynamically. In this paper, we emphasize the crucial role of instance
differences and propose an instance-specific and model-adaptive supervision for
semi-supervised semantic segmentation, named iMAS. Relying on the model's
performance, iMAS employs a class-weighted symmetric intersection-over-union to
evaluate quantitative hardness of each unlabeled instance and supervises the
training on unlabeled data in a model-adaptive manner. Specifically, iMAS
learns from unlabeled instances progressively by weighing their corresponding
consistency losses based on the evaluated hardness. Besides, iMAS dynamically
adjusts the augmentation for each instance such that the distortion degree of
augmented instances is adapted to the model's generalization capability across
the training course. Not integrating additional losses and training procedures,
iMAS can obtain remarkable performance gains against current state-of-the-art
approaches on segmentation benchmarks under different semi-supervised partition
protocols.
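The hardness evaluation and loss weighting described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the inputs are assumed to be hard label maps rather than probability maps, classes absent from both predictions are simply skipped, and the `1 - hardness` weighting is one simple choice standing in for the paper's model-adaptive weighting scheme.

```python
import numpy as np

def class_weighted_symmetric_iou(pred_a, pred_b, num_classes, class_weights):
    """Class-weighted IoU agreement between two hard label maps (int arrays)."""
    ious, weights = [], []
    for c in range(num_classes):
        a, b = pred_a == c, pred_b == c
        union = np.logical_or(a, b).sum()
        if union == 0:
            continue  # class absent from both predictions; skip it
        inter = np.logical_and(a, b).sum()
        ious.append(inter / union)
        weights.append(class_weights[c])
    if not ious:
        return 1.0  # empty predictions agree trivially
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), ious))

def instance_hardness(pred_a, pred_b, num_classes, class_weights):
    # Lower agreement between the two predictions -> harder instance.
    return 1.0 - class_weighted_symmetric_iou(
        pred_a, pred_b, num_classes, class_weights
    )

def hardness_weighted_loss(per_instance_losses, hardnesses):
    # One simple model-adaptive choice: down-weight the hardest
    # instances so training emphasizes examples the model handles well.
    w = 1.0 - np.asarray(hardnesses, dtype=float)
    return float(np.mean(w * np.asarray(per_instance_losses, dtype=float)))
```

In this sketch, two predictions for the same unlabeled image (e.g. from differently augmented views) that disagree heavily yield a high hardness score, which in turn shrinks that instance's contribution to the unsupervised consistency loss.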
Related papers
- Instance-wise Uncertainty for Class Imbalance in Semantic Segmentation [4.147659576493158]
State-of-the-art methods increasingly rely on deep learning models, which are known to incorrectly estimate uncertainty and be overconfident in their predictions.
This is particularly problematic in semantic segmentation due to inherent class imbalance.
A novel training methodology specifically designed for semantic segmentation is presented.
arXiv Detail & Related papers (2024-07-17T14:38:32Z)
- cDP-MIL: Robust Multiple Instance Learning via Cascaded Dirichlet Process [23.266122629592807]
Multiple instance learning (MIL) has been extensively applied to whole slide histopathology image (WSI) analysis.
The existing aggregation strategy in MIL, which primarily relies on the first-order distance between instances, fails to accurately approximate the true feature distribution of each instance.
We propose a new Bayesian nonparametric framework for multiple instance learning, which adopts a cascade of Dirichlet processes (cDP) to incorporate the instance-to-bag characteristic of the WSIs.
arXiv Detail & Related papers (2024-07-16T07:28:39Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Instance-aware Model Ensemble With Distillation For Unsupervised Domain Adaptation [28.79286984013436]
We propose a novel framework, namely Instance-aware Model Ensemble With Distillation (IMED).
IMED fuses multiple UDA component models adaptively according to different instances and distills these components into a small model.
We show that the IMED-based model is superior to state-of-the-art methods at comparable computation cost.
arXiv Detail & Related papers (2022-11-15T12:53:23Z)
- Controller-Guided Partial Label Consistency Regularization with Unlabeled Data [49.24911720809604]
We propose a controller-guided consistency regularization at both the label-level and representation-level.
We dynamically adjust the confidence thresholds so that the number of samples of each class participating in consistency regularization remains roughly equal, alleviating the class-imbalance problem.
arXiv Detail & Related papers (2022-10-20T12:15:13Z)
- PAC Generalization via Invariant Representations [41.02828564338047]
We consider the notion of $\epsilon$-approximate invariance in a finite sample setting.
Inspired by PAC learning, we obtain finite-sample out-of-distribution generalization guarantees.
Our results show bounds that do not scale in ambient dimension when intervention sites are restricted to lie in a constant size subset of in-degree bounded nodes.
arXiv Detail & Related papers (2022-05-30T15:50:14Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose a semi-supervised learning (SSL) approach that trains models on unlabeled examples selected via a dynamic threshold.
Our proposed approach, Dash, enjoys its adaptivity in terms of unlabeled data selection.
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
- Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.