Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly
Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/2108.01344v1
- Date: Tue, 3 Aug 2021 07:48:33 GMT
- Authors: Xiangrong Zhang, Zelin Peng, Peng Zhu, Tianyang Zhang, Chen Li, Huiyu
Zhou, Licheng Jiao
- Abstract summary: In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation has been continuously investigated over the
last ten years, and the majority of the established techniques are based on
supervised models. In recent years, image-level weakly supervised semantic
segmentation (WSSS), covering both single- and multi-stage pipelines, has
attracted considerable attention due to its data-labeling efficiency. In this
paper, we propose to embed
affinity learning of multi-stage approaches in a single-stage model. To be
specific, we introduce an adaptive affinity loss to thoroughly learn the local
pairwise affinity. As such, a deep neural network is used to deliver
comprehensive semantic information in the training phase, whilst improving the
performance of the final prediction module. Furthermore, since pseudo labels
inevitably contain errors, we propose a novel label reassignment loss to
mitigate over-fitting. Extensive experiments are conducted on the
PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach
that outperforms other standard single-stage methods and achieves comparable
performance against several multi-stage methods.
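The local pairwise affinity idea above can be illustrated with a minimal sketch: pull together embeddings of pixels sharing a pseudo label and push apart embeddings with different pseudo labels, up to a margin. This is an illustrative contrastive-style formulation, not the paper's exact adaptive affinity loss; the function name and the hinge form are assumptions for demonstration.

```python
import numpy as np

def pairwise_affinity_loss(embeddings, pseudo_labels, margin=1.0):
    """Illustrative local pairwise affinity loss (not the paper's exact form).

    embeddings: (N, D) pixel features; pseudo_labels: (N,) integer labels.
    Same-label pairs are pulled together (squared distance), different-label
    pairs are pushed apart up to `margin` (squared hinge), averaged over
    all N*N pairs.
    """
    diff = embeddings[:, None, :] - embeddings[None, :, :]   # (N, N, D)
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)              # (N, N)
    same = pseudo_labels[:, None] == pseudo_labels[None, :]  # (N, N) bool
    pull = np.where(same, dist ** 2, 0.0)
    push = np.where(~same, np.maximum(margin - dist, 0.0) ** 2, 0.0)
    n = len(embeddings)
    return (pull + push).sum() / (n * n)
```

With well-separated clusters per label the loss is near zero; collapsed embeddings with conflicting labels incur the margin penalty.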
Related papers
- OTMatch: Improving Semi-Supervised Learning with Optimal Transport [2.4355694259330467]
We present a new approach called OTMatch, which leverages semantic relationships among classes by employing an optimal transport loss function to match distributions.
The empirical results show improvements over the baseline, demonstrating the effectiveness of our approach in harnessing semantic relationships to enhance learning performance in a semi-supervised setting.
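The distribution-matching step an OT loss relies on can be sketched with a few Sinkhorn iterations for entropic optimal transport. This is a generic textbook sketch, not OTMatch's actual loss; the cost matrix, regularization strength, and iteration count are illustrative assumptions.

```python
import numpy as np

def sinkhorn(cost, a, b, reg=0.1, n_iters=200):
    """Minimal Sinkhorn iteration for entropic optimal transport.

    cost: (m, n) matrix encoding class relationships; a: (m,) and b: (n,)
    are the two distributions to match. Returns the transport plan whose
    marginals approximate a and b.
    """
    K = np.exp(-cost / reg)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # scale columns to match b
        u = a / (K @ v)              # scale rows to match a
    return u[:, None] * K * v[None, :]
```

An OT-based objective would penalize predictions according to the resulting plan, so that mass moved between semantically close classes costs less than mass moved between unrelated ones.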
arXiv Detail & Related papers (2023-10-26T15:01:54Z) - Self-aware and Cross-sample Prototypical Learning for Semi-supervised
Medical Image Segmentation [10.18427897663732]
Consistency learning plays a crucial role in semi-supervised medical image segmentation.
It enables the effective utilization of limited annotated data while leveraging the abundance of unannotated data.
We propose a self-aware and cross-sample prototypical learning method (SCP-Net) to enhance the diversity of prediction in consistency learning.
arXiv Detail & Related papers (2023-05-25T16:22:04Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this limitation is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
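The "largest inconsistency" objective described above can be sketched as the maximum divergence between the prediction on the original sample and the predictions on its augmented variants. This is a hedged sketch in the spirit of MaxMatch, using KL divergence as an illustrative inconsistency measure; the exact divergence and training details in the paper may differ.

```python
import numpy as np

def worst_case_consistency(p_orig, p_augs, eps=1e-12):
    """Worst-case consistency over augmentations (illustrative).

    p_orig: (C,) predicted class distribution for the original sample.
    p_augs: (K, C) distributions for K augmented variants.
    Returns max_k KL(p_orig || p_aug_k), which the SSL objective
    would then minimize over unlabeled data.
    """
    kl = (p_orig * (np.log(p_orig + eps) - np.log(p_augs + eps))).sum(axis=1)
    return kl.max()
```

Minimizing the maximum (rather than the average) divergence targets the single most inconsistent augmentation, which is what yields the worst-case guarantee in the bound.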
arXiv Detail & Related papers (2022-09-26T12:04:49Z) - Demystifying Unsupervised Semantic Correspondence Estimation [13.060538447838303]
We explore semantic correspondence estimation through the lens of unsupervised learning.
We thoroughly evaluate several recently proposed unsupervised methods across multiple challenging datasets.
We introduce a new unsupervised correspondence approach which utilizes the strength of pre-trained features while encouraging better matches during training.
arXiv Detail & Related papers (2022-07-11T17:59:51Z) - Interpolation-based Contrastive Learning for Few-Label Semi-Supervised
Learning [43.51182049644767]
Semi-supervised learning (SSL) has long been proven to be an effective technique for constructing powerful models with limited labels.
Regularization-based methods which force the perturbed samples to have similar predictions with the original ones have attracted much attention.
We propose a novel contrastive loss to guide the embedding of the learned network to change linearly between samples.
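The "change linearly between samples" constraint can be sketched as an interpolation-consistency penalty: the embedding of a mixed input should lie on the line segment between the two endpoint embeddings. This is an illustrative mean-squared formulation, not the paper's exact contrastive loss; `encoder` stands for any feature extractor.

```python
import numpy as np

def interpolation_consistency_loss(encoder, x1, x2, lam):
    """Penalize deviation of the mixed embedding from the linear
    interpolation of the endpoint embeddings (illustrative sketch).

    encoder: callable mapping an input array to a feature array.
    lam: mixing coefficient in [0, 1].
    """
    z_mix = encoder(lam * x1 + (1 - lam) * x2)               # embed the mixup
    z_target = lam * encoder(x1) + (1 - lam) * encoder(x2)   # interpolated target
    return ((z_mix - z_target) ** 2).mean()
```

A perfectly linear encoder incurs zero loss; any curvature in the embedding between the two samples is penalized.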
arXiv Detail & Related papers (2022-02-24T06:00:05Z) - A Simple Baseline for Semi-supervised Semantic Segmentation with Strong
Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z) - Revisiting LSTM Networks for Semi-Supervised Text Classification via
Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
arXiv Detail & Related papers (2020-09-08T21:55:22Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches, is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
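The contrast with the usual transductive update can be sketched as a confidence-weighted prototype refinement: instead of averaging only the most confident queries, every query contributes according to a weight derived from its confidence score. The softmax weighting and the equal prototype/query averaging below are illustrative assumptions, not the paper's meta-learned weighting.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def weighted_prototype(prototype, queries, confidences):
    """Confidence-weighted prototype update (illustrative sketch).

    prototype: (D,) current class prototype; queries: (Q, D) unlabeled
    query embeddings; confidences: (Q,) unnormalized confidence scores.
    Queries are combined via softmax weights, then averaged with the
    original prototype.
    """
    w = softmax(confidences)                    # (Q,) weights summing to 1
    query_mean = (w[:, None] * queries).sum(0)  # (D,) weighted query mean
    return 0.5 * (prototype + query_mean)
```

With one highly confident query, the update is dominated by that query's embedding, recovering the "most confident" heuristic as a limiting case.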
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.