Self-paced and self-consistent co-training for semi-supervised image
segmentation
- URL: http://arxiv.org/abs/2011.00325v4
- Date: Wed, 10 Feb 2021 16:46:23 GMT
- Title: Self-paced and self-consistent co-training for semi-supervised image
segmentation
- Authors: Ping Wang, Jizong Peng, Marco Pedersoli, Yuanfeng Zhou, Caiming Zhang,
Christian Desrosiers
- Abstract summary: Deep co-training has been proposed as an effective approach for image segmentation when annotated data is scarce.
In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method.
- Score: 23.100800154116627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep co-training has recently been proposed as an effective approach for
image segmentation when annotated data is scarce. In this paper, we improve
existing approaches for semi-supervised segmentation with a self-paced and
self-consistent co-training method. To help distill information from
unlabeled images, we first design a self-paced learning strategy for
co-training that lets jointly-trained neural networks focus on
easier-to-segment regions first, and then gradually consider harder ones. This
is achieved via an end-to-end differentiable loss in the form of a generalized
Jensen-Shannon divergence (JSD). Moreover, to encourage predictions from
different networks to be both consistent and confident, we enhance this
generalized JSD loss with an uncertainty regularizer based on entropy. The
robustness of individual models is further improved using a self-ensembling
loss that enforces their predictions to be consistent across different training
iterations. We demonstrate the potential of our method on three challenging
image segmentation problems with different image modalities, using a small
fraction of labeled data. Results show clear advantages in terms of performance
compared to the standard co-training baselines and recently proposed
state-of-the-art approaches for semi-supervised segmentation.
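
To make the loss design more concrete, below is a minimal PyTorch sketch of a generalized Jensen-Shannon divergence consistency loss with an entropy-based confidence regularizer, in the spirit of the abstract. The function names, the `lam` weight, and the optional per-pixel self-paced weights are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal sketch (assumed PyTorch) of a generalized Jensen-Shannon divergence
# (JSD) co-training loss with an entropy-based confidence term. Names, the
# per-pixel weighting, and the `lam` hyper-parameter are illustrative only.
import torch
import torch.nn.functional as F


def entropy(p, eps=1e-8):
    """Per-pixel Shannon entropy of a softmax map p of shape (B, C, H, W)."""
    return -(p * (p + eps).log()).sum(dim=1)


def generalized_jsd_loss(logits_list, pixel_weights=None, lam=0.1):
    """Consistency loss between the predictions of several co-trained networks.

    logits_list  : list of (B, C, H, W) logits, one tensor per network.
    pixel_weights: optional (B, H, W) self-paced weights (1 = easy, 0 = ignored).
    lam          : weight of the entropy (confidence) regularizer.
    """
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    mean_p = torch.stack(probs, dim=0).mean(dim=0)

    # Generalized JSD: entropy of the mean prediction minus mean of entropies.
    jsd = entropy(mean_p) - torch.stack([entropy(p) for p in probs], dim=0).mean(dim=0)

    # Entropy regularizer pushing the averaged prediction to be confident.
    reg = entropy(mean_p)

    per_pixel = jsd + lam * reg
    if pixel_weights is not None:
        per_pixel = per_pixel * pixel_weights  # focus on "easy" regions first
    return per_pixel.mean()
```

In a self-paced schedule, `pixel_weights` could start by retaining only low-divergence (easy) pixels and be gradually relaxed over training. The self-ensembling term mentioned in the abstract, which compares each network's current prediction to a temporal average of its own past predictions, is omitted from this sketch.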
Related papers
- Unsupervised Representation Learning by Balanced Self Attention Matching [2.3020018305241337]
We present a self-supervised method for embedding image features called BAM.
We obtain rich representations and avoid feature collapse by minimizing a loss that matches these distributions to their globally balanced and entropy regularized version.
We show competitive performance with leading methods on both semi-supervised and transfer-learning benchmarks.
arXiv Detail & Related papers (2024-08-04T12:52:44Z) - Self2Seg: Single-Image Self-Supervised Joint Segmentation and Denoising [5.980200432310941]
Self2Seg is a self-supervised method for the joint segmentation and denoising of a single image.
In contrast to data-driven methods, Self2Seg segments an image into meaningful regions without any training database.
arXiv Detail & Related papers (2023-09-19T10:47:32Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Dense Unsupervised Learning for Video Segmentation [49.46930315961636]
We present a novel approach to unsupervised learning for video object segmentation (VOS)
Unlike previous work, our formulation allows learning dense feature representations directly in a fully convolutional regime.
Our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute power.
arXiv Detail & Related papers (2021-11-11T15:15:11Z) - Mixed-supervised segmentation: Confidence maximization helps knowledge
distillation [24.892332859630518]
In this work, we propose a dual-branch architecture for deep neural networks.
The upper branch (teacher) receives strong annotations, while the bottom one (student) is driven by limited supervision and guided by the upper branch.
We show that the synergy between the entropy and KL divergence yields substantial improvements in performance.
arXiv Detail & Related papers (2021-09-21T20:06:13Z) - Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly
Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z) - Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z) - Self-supervised Augmentation Consistency for Adapting Semantic
Segmentation [56.91850268635183]
We propose an approach to domain adaptation for semantic segmentation that is both practical and highly accurate.
We employ standard data augmentation techniques (photometric noise, flipping, and scaling) and ensure consistency of the semantic predictions.
We achieve significant improvements of the state-of-the-art segmentation accuracy after adaptation, consistent both across different choices of the backbone architecture and adaptation scenarios.
arXiv Detail & Related papers (2021-04-30T21:32:40Z) - A Simple Baseline for Semi-supervised Semantic Segmentation with Strong
Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z) - Two-phase Pseudo Label Densification for Self-training based Domain
Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding window voting to propagate the confident predictions, utilizing intrinsic spatial correlations in the images.
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss.
arXiv Detail & Related papers (2020-12-09T02:35:25Z) - Single-Stage Semantic Segmentation from Image Labels [25.041129173350104]
This work defines three desirable properties of a weakly supervised method.
We then develop a segmentation-based network model and a self-supervised training scheme to train for semantic masks.
Despite its simplicity, our method achieves results that are competitive with significantly more complex pipelines.
arXiv Detail & Related papers (2020-05-16T21:10:10Z)