Adversarial Dual-Student with Differentiable Spatial Warping for
Semi-Supervised Semantic Segmentation
- URL: http://arxiv.org/abs/2203.02792v1
- Date: Sat, 5 Mar 2022 17:36:17 GMT
- Title: Adversarial Dual-Student with Differentiable Spatial Warping for
Semi-Supervised Semantic Segmentation
- Authors: Cong Cao, Tianwei Lin, Dongliang He, Fu Li, Huanjing Yue, Jingyu Yang,
Errui Ding
- Abstract summary: We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves the performance and state-of-the-art results are achieved on both datasets.
- Score: 70.2166826794421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common challenge posed to robust semantic segmentation is the expensive
data annotation cost. Existing semi-supervised solutions show great potential
toward solving this problem. Their key idea is constructing consistency
regularization with unsupervised data augmentation from unlabeled data for
model training. The perturbations for unlabeled data enable the consistency
training loss, which benefits semi-supervised semantic segmentation. However,
these perturbations destroy image context and introduce unnatural boundaries,
which is harmful to semantic segmentation. Moreover, the widely adopted
semi-supervised learning framework, i.e., Mean-Teacher, suffers from a
performance limitation because the student model eventually converges to the
teacher model. In
this paper, we first propose a context-friendly differentiable geometric
warping to conduct unsupervised data augmentation; second, we propose a novel
adversarial dual-student framework that improves on Mean-Teacher in two
aspects: (1) the two student models are learned independently except for a
stabilization constraint that encourages them to exploit model diversity;
(2) an adversarial training scheme is applied to both students, with
discriminators used to identify reliable pseudo-labels on unlabeled data for
self-training. Effectiveness is validated via extensive
experiments on PASCAL VOC2012 and Cityscapes. Our solution significantly
improves performance and achieves state-of-the-art results on both datasets.
Remarkably, compared with full supervision, our solution reaches a comparable
mIoU of 73.4% using only 12.5% of the annotated data on PASCAL VOC2012.
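To make the warping idea concrete, here is a minimal pure-Python sketch of differentiable bilinear resampling, the standard mechanism that lets a geometric warp pass gradients through its sampling coordinates. The `bilinear_sample` and `warp` helpers are illustrative assumptions, not the paper's actual warping module or its grid parameterization.

```python
def bilinear_sample(img, y, x):
    """Sample img (a list of rows of floats) at fractional location (y, x).

    The result is a weighted mix of the four surrounding pixels, so it varies
    smoothly with y and x -- the property a differentiable geometric warping
    relies on.
    """
    h, w = len(img), len(img[0])
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
    bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
    return top * (1 - wy) + bot * wy

def warp(img, flow):
    """Warp an image by a dense per-pixel offset field flow[i][j] = (dy, dx)."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            dy, dx = flow[i][j]
            # Clamp sampling coordinates to the image bounds.
            y = min(max(i + dy, 0.0), h - 1.0)
            x = min(max(j + dx, 0.0), w - 1.0)
            row.append(bilinear_sample(img, y, x))
        out.append(row)
    return out
```

Because the output is a continuous, piecewise-linear function of the sampling coordinates, gradients with respect to the warp parameters exist almost everywhere, which is what allows such an augmentation to be composed with consistency training end-to-end.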
Related papers
- Effective and Robust Adversarial Training against Data and Label Corruptions [35.53386268796071]
Corruptions due to data perturbations and label noise are prevalent in the datasets from unreliable sources.
We develop an Effective and Robust Adversarial Training framework to simultaneously handle two types of corruption.
arXiv Detail & Related papers (2024-05-07T10:53:20Z) - Robust Training of Federated Models with Extremely Label Deficiency [84.00832527512148]
Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.
We propose a novel twin-model paradigm, called Twin-sight, designed to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data.
Our comprehensive experiments on four benchmark datasets provide substantial evidence that Twin-sight can significantly outperform state-of-the-art methods across various experimental settings.
arXiv Detail & Related papers (2024-02-22T10:19:34Z) - Cross-head mutual Mean-Teaching for semi-supervised medical image
segmentation [6.738522094694818]
Semi-supervised medical image segmentation (SSMIS) has witnessed substantial advancements by leveraging limited labeled data and abundant unlabeled data.
Existing state-of-the-art (SOTA) methods encounter challenges in accurately predicting labels for the unlabeled data.
We propose a novel Cross-head mutual mean-teaching Network (CMMT-Net) incorporating strong-weak data augmentation.
arXiv Detail & Related papers (2023-10-08T09:13:04Z) - Scaling Up Semi-supervised Learning with Unconstrained Unlabelled Data [27.75143621836449]
We propose UnMixMatch, a semi-supervised learning framework which can learn effective representations from unconstrained data.
We perform extensive experiments on 4 commonly used datasets and demonstrate superior performance over existing semi-supervised methods with a performance boost of 4.79%.
arXiv Detail & Related papers (2023-06-02T01:07:14Z) - Deep Semi-supervised Learning with Double-Contrast of Features and
Semantics [2.2230089845369094]
This paper proposes an end-to-end deep semi-supervised learning framework with a double contrast of semantics and features.
We leverage information theory to explain the rationality of double contrast of semantics and features.
arXiv Detail & Related papers (2022-11-28T09:08:19Z) - Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for
Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns from both unlabeled target data and labeled source data through two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
arXiv Detail & Related papers (2020-07-13T10:00:44Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised
Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
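The alternating scheme described for ATSO can be sketched in a few lines. Here `fine_tune` and `predict` are hypothetical stand-ins for a real training step and inference pass, so this is only a schematic of the subset-alternation loop, not ATSO's implementation.

```python
def atso_round(model, subset_a, subset_b, fine_tune, predict):
    """One asynchronous round: train on subset A, refresh subset B's labels."""
    model = fine_tune(model, subset_a)           # fine-tune on one subset
    for sample in subset_b:                      # relabel the other subset
        sample["pseudo_label"] = predict(model, sample["image"])
    return model

def atso(model, subset_a, subset_b, fine_tune, predict, rounds=4):
    """Alternate the roles of the two unlabeled subsets each round."""
    for _ in range(rounds):
        model = atso_round(model, subset_a, subset_b, fine_tune, predict)
        subset_a, subset_b = subset_b, subset_a  # swap roles
    return model
```

The point of the asynchrony is that a subset's pseudo labels are never refreshed by a model that was just fine-tuned on that same subset.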
arXiv Detail & Related papers (2020-06-24T04:05:12Z) - Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
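The self-training recipe summarized in the last entry (teacher trained on labeled data, pseudo labels generated for unlabeled data, student trained on the union) can be sketched as follows. `train` and `predict` are hypothetical placeholders for a real segmentation trainer and predictor, not the paper's pipeline.

```python
def self_train(labeled, unlabeled, train, predict):
    """Minimal self-training loop: teacher -> pseudo labels -> joint student."""
    teacher = train(labeled)
    # Generate pseudo labels on the unlabeled pool with the teacher.
    pseudo = [(x, predict(teacher, x)) for x in unlabeled]
    # The student digests human annotations and pseudo labels jointly.
    student = train(labeled + pseudo)
    return student
```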
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.