Semi-Supervised Semantic Segmentation with Cross-Consistency Training
- URL: http://arxiv.org/abs/2003.09005v3
- Date: Tue, 9 Jun 2020 14:11:06 GMT
- Title: Semi-Supervised Semantic Segmentation with Cross-Consistency Training
- Authors: Yassine Ouali, Céline Hudelot, Myriam Tami
- Abstract summary: We present a novel cross-consistency based semi-supervised approach for semantic segmentation.
Our method achieves state-of-the-art results on several datasets.
- Score: 8.894935073145252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a novel cross-consistency based semi-supervised
approach for semantic segmentation. Consistency training has proven to be a
powerful semi-supervised learning framework for leveraging unlabeled data under
the cluster assumption, in which the decision boundary should lie in
low-density regions. In this work, we first observe that for semantic
segmentation, the low-density regions are more apparent within the hidden
representations than within the inputs. We thus propose cross-consistency
training, where an invariance of the predictions is enforced over different
perturbations applied to the outputs of the encoder. Concretely, a shared
encoder and a main decoder are trained in a supervised manner using the
available labeled examples. To leverage the unlabeled examples, we enforce
consistency between the main decoder's predictions and those of the auxiliary
decoders, which take as inputs different perturbed versions of the encoder's
output, thereby improving the encoder's representations. The proposed method
is simple and can easily be extended to use additional training signals, such
as image-level labels or pixel-level labels across different domains. We
perform an ablation study to tease apart the effectiveness of each component,
and conduct extensive experiments to demonstrate that our method achieves
state-of-the-art results on several datasets.
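The cross-consistency objective described in the abstract can be sketched as follows. This is a minimal illustration on toy 1-D features: the decoder and perturbation callables are hypothetical stand-ins, not the authors' implementation, and the paper's pixel-wise losses over feature maps are reduced here to a simple MSE over vectors.

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cct_loss(z_labeled, y_labeled, z_unlabeled,
             main_decoder, aux_decoders, perturbations,
             weight=1.0):
    """Supervised loss on labeled encoder outputs plus a cross-consistency
    term tying each auxiliary decoder's prediction (on a perturbed copy of
    the unlabeled encoder output) to the main decoder's prediction."""
    # Supervised term: main decoder trained on labeled examples.
    sup = mse(main_decoder(z_labeled), y_labeled)

    # Unsupervised term: the main decoder's prediction serves as the target;
    # each auxiliary decoder sees a differently perturbed encoder output.
    target = main_decoder(z_unlabeled)
    cons = 0.0
    for decoder, perturb in zip(aux_decoders, perturbations):
        cons += mse(decoder(perturb(z_unlabeled)), target)
    cons /= len(aux_decoders)

    return sup + weight * cons
```

With an identity perturbation and an auxiliary decoder identical to the main one, the consistency term vanishes; injecting a perturbation drives it up, which is the signal that trains the shared encoder on unlabeled data.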
Related papers
- Enhancing Hyperspectral Image Prediction with Contrastive Learning in Low-Label Regime [0.810304644344495]
Self-supervised contrastive learning is an effective approach for addressing the challenge of limited labelled data.
We evaluate the method's performance for both the single-label and multi-label classification tasks.
arXiv Detail & Related papers (2024-10-10T10:20:16Z)
- SemHint-MD: Learning from Noisy Semantic Labels for Self-Supervised Monocular Depth Estimation [19.229255297016635]
Self-supervised depth estimation can be trapped in a local minimum due to the gradient-locality issue of the photometric loss.
We present a framework to enhance depth by leveraging semantic segmentation to guide the network to jump out of the local minimum.
arXiv Detail & Related papers (2023-03-31T17:20:27Z)
- Neighbour Consistency Guided Pseudo-Label Refinement for Unsupervised Person Re-Identification [80.98291772215154]
Unsupervised person re-identification (ReID) aims at learning discriminative identity features for person retrieval without any annotations.
Recent advances accomplish this task by leveraging clustering-based pseudo labels.
We propose a Neighbour Consistency guided Pseudo Label Refinement framework.
arXiv Detail & Related papers (2022-11-30T09:39:57Z)
- Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization [20.4183741427867]
Semi-supervised semantic segmentation requires the model to propagate the label information from limited annotated images to unlabeled ones.
A challenge for such a per-pixel prediction task is the large intra-class variation.
We propose a novel approach to regularize the distribution of within-class features to ease label propagation difficulty.
arXiv Detail & Related papers (2022-10-10T01:38:01Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for its limitations is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Semi-Supervised Segmentation of Concrete Aggregate Using Consensus Regularisation and Prior Guidance [2.1749194587826026]
We propose a novel semi-supervised framework for semantic segmentation, introducing additional losses based on prior knowledge.
Experiments performed on our "concrete aggregate dataset" demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-04-22T13:01:28Z)
- Semi-supervised Left Atrium Segmentation with Mutual Consistency Training [60.59108570938163]
We propose a novel Mutual Consistency Network (MC-Net) for semi-supervised left atrium segmentation from 3D MR images.
Our MC-Net consists of one encoder and two slightly different decoders, and the prediction discrepancies of two decoders are transformed as an unsupervised loss.
We evaluate our MC-Net on the public Left Atrium (LA) database and it obtains impressive performance gains by exploiting the unlabeled data effectively.
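MC-Net's unsupervised signal, as summarized above, is the discrepancy between its two decoders' predictions. A minimal sketch of that discrepancy term, on toy 1-D soft predictions rather than 3D MR volumes (the function name and reduction are illustrative, not the paper's exact loss):

```python
def mutual_consistency_loss(pred_decoder_a, pred_decoder_b):
    """Mean squared discrepancy between two decoders' soft predictions,
    used as an unsupervised loss on unlabeled examples."""
    assert len(pred_decoder_a) == len(pred_decoder_b)
    return sum((a - b) ** 2
               for a, b in zip(pred_decoder_a, pred_decoder_b)) / len(pred_decoder_a)
```

Minimizing this term pushes the two slightly different decoders toward agreement, so regions where they disagree act as an uncertainty signal on unlabeled data.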
arXiv Detail & Related papers (2021-03-04T09:34:32Z)
- ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning [4.205692673448206]
We propose a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples.
We evaluate this augmentation technique on two common semi-supervised semantic segmentation benchmarks, showing that it attains state-of-the-art results.
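The mixing step that ClassMix performs on unlabelled samples can be sketched as follows: pixels of one image whose predicted class falls in a chosen class subset are pasted onto a second image, together with the matching pseudo-labels. Toy 1-D "images" of scalars are used here; this is an illustration of the idea under those assumptions, not the paper's implementation.

```python
def classmix(image_a, pseudo_a, image_b, pseudo_b, classes_to_paste):
    """Paste pixels of image_a whose pseudo-label lies in classes_to_paste
    onto image_b, carrying the pseudo-labels along."""
    mixed_img, mixed_lbl = [], []
    for pix_a, lab_a, pix_b, lab_b in zip(image_a, pseudo_a, image_b, pseudo_b):
        if lab_a in classes_to_paste:
            mixed_img.append(pix_a)   # copy pixel and pseudo-label from image_a
            mixed_lbl.append(lab_a)
        else:
            mixed_img.append(pix_b)   # keep image_b's pixel and pseudo-label
            mixed_lbl.append(lab_b)
    return mixed_img, mixed_lbl
```

The mixed image and its mixed pseudo-label map then serve as an augmented training pair for the consistency objective.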
arXiv Detail & Related papers (2020-07-15T18:21:17Z)
- MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network [51.84251358009803]
We present a novel self-supervised learning approach for conditional generative adversarial networks (GANs) under a semi-supervised setting.
We perform augmentation by randomly sampling sensible labels from the label space of the few labelled examples available.
Our method surpasses the baseline with only 20% of the labelled examples used to train the baseline.
arXiv Detail & Related papers (2020-06-11T17:14:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.