Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning
- URL: http://arxiv.org/abs/2110.05474v1
- Date: Mon, 11 Oct 2021 17:59:55 GMT
- Title: Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning
- Authors: Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, Liwei Wang
- Abstract summary: We propose a novel framework for semi-supervised semantic segmentation, named adaptive equalization learning (AEL)
AEL balances the training of well- and poorly-performing categories, using a confidence bank to track category-wise performance.
AEL outperforms the state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks.
- Score: 20.66927648806676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to limited and even imbalanced data, semi-supervised semantic
segmentation tends to perform poorly on certain categories, e.g., the tail
categories of the Cityscapes dataset, which exhibits a long-tailed label
distribution. Existing approaches almost all neglect this problem and treat
categories equally. Popular approaches such as consistency regularization
or pseudo-labeling may even harm the learning of under-performing categories,
in that the predictions or pseudo-labels for these categories can be too
inaccurate to guide learning on the unlabeled data. In this paper, we look
into this problem, and propose a novel framework for semi-supervised semantic
segmentation, named adaptive equalization learning (AEL). AEL adaptively
balances the training of well- and poorly-performing categories, with a
confidence bank that dynamically tracks category-wise performance during
training. The confidence bank serves as an indicator to tilt training towards
under-performing categories, instantiated in three strategies: 1) adaptive
Copy-Paste and CutMix data augmentation, which gives under-performing
categories a higher chance of being copied or cut; 2) an adaptive data sampling
approach that encourages pixels from under-performing categories to be sampled;
3) a simple yet effective re-weighting method to alleviate the training noise
introduced by pseudo-labeling. Experimentally, AEL outperforms the state-of-the-art
methods by a large margin on the Cityscapes and Pascal VOC benchmarks under
various data partition protocols. Code is available at
https://github.com/hzhupku/SemiSeg-AEL
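All three strategies rely on one shared ingredient: a running, category-wise confidence estimate. The sketch below is a minimal illustration of such a confidence bank (the class name, EMA update, and weighting scheme are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

class ConfidenceBank:
    """Illustrative sketch of a category-wise confidence bank: an
    exponential moving average of per-category prediction confidence,
    updated each step and used to bias training toward weak categories."""

    def __init__(self, num_classes, momentum=0.99):
        self.conf = np.ones(num_classes, dtype=np.float64)  # start optimistic
        self.momentum = momentum

    def update(self, class_ids, confidences):
        # EMA update for the categories observed in this batch.
        for c, v in zip(class_ids, confidences):
            self.conf[c] = self.momentum * self.conf[c] + (1 - self.momentum) * v

    def sampling_weights(self, temperature=1.0):
        # Lower confidence -> higher weight, so under-performing
        # categories are copied, cut, or sampled more often.
        w = (1.0 - self.conf) ** temperature
        total = w.sum()
        return w / total if total > 0 else np.full_like(w, 1.0 / len(w))
```

The weights can drive any of the three strategies: e.g., drawing the category to Copy-Paste from a categorical distribution over `sampling_weights()`.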
Related papers
- ECAP: Extensive Cut-and-Paste Augmentation for Unsupervised Domain
Adaptive Semantic Segmentation [4.082799056366928]
We propose an extensive cut-and-paste strategy (ECAP) to leverage reliable pseudo-labels through data augmentation.
ECAP maintains a memory bank of pseudo-labeled target samples throughout training and cut-and-pastes the most confident ones onto the current training batch.
We implement ECAP on top of the recent method MIC and boost its performance on two synthetic-to-real domain adaptation benchmarks.
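As a rough illustration of the mechanism (the class names, fixed-capacity heap, and whole-class paste are simplifying assumptions for this sketch, not ECAP or MIC internals):

```python
import heapq

import numpy as np

class PseudoLabelBank:
    """Sketch of an ECAP-style memory bank: keep only the K most
    confident pseudo-labeled samples seen so far (min-heap ordered by
    confidence, so the least confident entry is evicted first)."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self._heap = []      # entries: (confidence, insertion_id, img, mask)
        self._next_id = 0

    def push(self, confidence, img, mask):
        entry = (confidence, self._next_id, img, mask)
        self._next_id += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif confidence > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)

    def most_confident(self):
        conf, _, img, mask = max(self._heap)
        return conf, img, mask

def paste_class(src_img, src_mask, dst_img, dst_mask, class_id):
    """Cut all pixels of `class_id` from the source sample and paste
    them (image and label alike) onto the destination sample."""
    region = src_mask == class_id
    out_img, out_mask = dst_img.copy(), dst_mask.copy()
    out_img[region] = src_img[region]
    out_mask[region] = class_id
    return out_img, out_mask
```

The unique `insertion_id` breaks confidence ties so the heap never has to compare image arrays directly.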
arXiv Detail & Related papers (2024-03-06T17:06:07Z)
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- Bridging the Gap: Learning Pace Synchronization for Open-World Semi-Supervised Learning [44.91863420044712]
In open-world semi-supervised learning, a machine learning model is tasked with uncovering novel categories from unlabeled data.
We introduce 1) the adaptive synchronizing marginal loss which imposes class-specific negative margins to alleviate the model bias towards seen classes, and 2) the pseudo-label contrastive clustering which exploits pseudo-labels predicted by the model to group unlabeled data from the same category together.
Our method balances the learning pace between seen and novel classes, achieving a remarkable 3% average accuracy increase on the ImageNet dataset.
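A class-specific margin can be grafted onto ordinary softmax cross-entropy. The sketch below is a generic illustration of the idea (function name and sign convention are assumptions, not the paper's exact loss): subtracting a larger margin from a class's target logit makes that class harder to fit, counteracting bias toward it.

```python
import numpy as np

def margin_softmax_loss(logits, target, margins):
    """Cross-entropy with a per-class margin subtracted from the
    target-class logit before the softmax. A larger margin for a class
    raises its loss, pushing the model away from over-predicting it."""
    z = np.asarray(logits, dtype=np.float64).copy()
    z[target] -= margins[target]
    z -= z.max()                              # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]
```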
arXiv Detail & Related papers (2023-09-21T09:44:39Z)
- Balanced Classification: A Unified Framework for Long-Tailed Object Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- CAFS: Class Adaptive Framework for Semi-Supervised Semantic Segmentation [5.484296906525601]
Semi-supervised semantic segmentation learns a model for classifying pixels into specific classes using a few labeled samples and numerous unlabeled images.
We propose a class-adaptive semi-supervision framework for semi-supervised semantic segmentation (CAFS).
CAFS constructs a validation set from the labeled dataset to leverage the calibration performance of each class.
arXiv Detail & Related papers (2023-03-21T05:56:53Z)
- PercentMatch: Percentile-based Dynamic Thresholding for Multi-Label Semi-Supervised Classification [64.39761523935613]
We propose a percentile-based threshold adjusting scheme to dynamically alter the score thresholds of positive and negative pseudo-labels for each class during the training.
We achieve strong performance on Pascal VOC2007 and MS-COCO datasets when compared to recent SSL methods.
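A minimal sketch of the percentile idea (function names and the single-threshold-per-class simplification are assumptions for illustration; the method also handles negative pseudo-labels):

```python
import numpy as np

def percentile_thresholds(scores, q=75.0):
    """Pick each class's pseudo-label threshold as the q-th percentile
    of that class's predicted scores, so the threshold tracks the score
    distribution instead of staying fixed. `scores`: (samples, classes)."""
    return np.percentile(scores, q, axis=0)

def positive_pseudo_labels(scores, thresholds):
    # A class is pseudo-labeled positive for a sample only when its
    # score clears the class-specific threshold.
    return scores >= thresholds
```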
arXiv Detail & Related papers (2022-08-30T01:27:48Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose Partial Label Masking (PLM), a method that masks a portion of the labels during training based on each class's ratio of positive to negative labels.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Adaptive Class Suppression Loss for Long-Tail Object Detection [49.7273558444966]
We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
arXiv Detail & Related papers (2021-04-02T05:12:31Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
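A compact sketch of the OLS idea (class and method names are illustrative, not the paper's): accumulate the model's average predicted distribution for each target class over an epoch, then use the normalized average as that class's soft label in the next epoch.

```python
import numpy as np

class OnlineLabelSmoother:
    """Sketch of online label smoothing: per target class, average the
    model's predicted distributions, then normalize the averages into
    next-epoch soft labels."""

    def __init__(self, num_classes):
        self.sums = np.zeros((num_classes, num_classes))
        self.counts = np.zeros(num_classes)

    def accumulate(self, probs, target):
        # probs: softmax output for one sample; target: its true class.
        self.sums[target] += probs
        self.counts[target] += 1

    def soft_labels(self):
        avg = self.sums / np.maximum(self.counts, 1)[:, None]
        # Classes never seen this epoch fall back to a uniform label.
        avg[self.counts == 0] = 1.0 / avg.shape[1]
        return avg / avg.sum(axis=1, keepdims=True)
```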
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.