Semi-Supervised Confidence-Level-based Contrastive Discrimination for
Class-Imbalanced Semantic Segmentation
- URL: http://arxiv.org/abs/2211.15066v1
- Date: Mon, 28 Nov 2022 04:58:27 GMT
- Title: Semi-Supervised Confidence-Level-based Contrastive Discrimination for
Class-Imbalanced Semantic Segmentation
- Authors: Kangcheng Liu
- Abstract summary: We propose a semi-supervised contrastive learning framework for the task of class-imbalanced semantic segmentation.
Our method provides satisfactory segmentation results with as little as 3.5% labeled data.
- Score: 1.713291434132985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To overcome the data-hungry challenge, we propose a semi-supervised
contrastive learning framework for the task of class-imbalanced semantic
segmentation. First, to make the model operate in a semi-supervised manner, we
propose confidence-level-based contrastive learning, which achieves instance
discrimination in an explicit manner and aligns low-confidence, low-quality
features with their high-confidence counterparts. Moreover, to tackle class
imbalance in crack segmentation and road component extraction, we propose a
data imbalance loss that replaces the traditional cross-entropy loss in
pixel-level semantic segmentation. Finally, we also propose an effective
multi-stage fusion network architecture to improve semantic segmentation
performance. Extensive experiments on real industrial crack segmentation and
road segmentation demonstrate the effectiveness of the proposed framework. Our
method provides satisfactory segmentation results with as little as 3.5%
labeled data.
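The abstract gives no implementation details, so the following is only a minimal sketch, assuming a PyTorch-style setup, of the two loss ideas it describes: an InfoNCE-style term that pulls low-confidence pixel features toward same-class prototypes built from high-confidence pixels, and an inverse-class-frequency weighted cross-entropy as a stand-in for the data imbalance loss. All function names, the confidence threshold, the temperature, and the exact loss forms are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two loss terms described in the abstract.
# Not the authors' code: thresholds, temperature, and loss forms are assumptions.
import torch
import torch.nn.functional as F


def confidence_contrastive_loss(features, logits, conf_thresh=0.8, tau=0.1):
    """Pull low-confidence pixel features toward same-class high-confidence prototypes.

    features: (N, D) pixel embeddings
    logits:   (N, C) per-pixel class logits (pseudo-labels come from argmax)
    """
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)              # per-pixel confidence and pseudo-label
    feats = F.normalize(features, dim=1)

    high, low = conf >= conf_thresh, conf < conf_thresh
    if high.sum() == 0 or low.sum() == 0:
        return feats.new_zeros(())

    num_classes = logits.shape[1]
    # Class prototypes built from high-confidence pixels only.
    protos = torch.stack([
        feats[high & (pseudo == c)].mean(dim=0) if (high & (pseudo == c)).any()
        else feats.new_zeros(feats.shape[1])
        for c in range(num_classes)
    ])
    protos = F.normalize(protos, dim=1)

    # InfoNCE-style alignment: each low-confidence pixel should match its class prototype.
    sim = feats[low] @ protos.t() / tau          # (N_low, C)
    return F.cross_entropy(sim, pseudo[low])


def class_balanced_ce(logits, target, ignore_index=255):
    """Inverse-frequency weighted cross-entropy as a stand-in for the data imbalance loss."""
    valid = target[target != ignore_index]
    counts = torch.bincount(valid, minlength=logits.shape[1]).float().clamp(min=1)
    weight = counts.sum() / (counts * logits.shape[1])   # rarer classes get larger weights
    return F.cross_entropy(logits, target, weight=weight, ignore_index=ignore_index)
```

In a full semi-supervised pipeline, the contrastive term would typically be applied to unlabeled pixels whose pseudo-labels come from the segmentation head, alongside the weighted supervised loss on labeled pixels; the exact scheduling used by the authors is not specified in the abstract.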
Related papers
- Adversarial Dual-Student with Differentiable Spatial Warping for
Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve upon the Mean Teacher.
Our solution significantly improves performance, achieving state-of-the-art results on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z)
- Guided Point Contrastive Learning for Semi-supervised Point Cloud
Semantic Segmentation [90.2445084743881]
We present a method for semi-supervised point cloud semantic segmentation that exploits unlabeled point clouds during training to boost model performance.
Inspired by recent contrastive losses in self-supervised learning, we propose a guided point contrastive loss to enhance feature representation and model generalization.
arXiv Detail & Related papers (2021-10-15T16:38:54Z)
- Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly
Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z)
- Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which requires only box annotations.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z)
- Semi-Supervised Segmentation of Concrete Aggregate Using Consensus
Regularisation and Prior Guidance [2.1749194587826026]
We propose a novel semi-supervised framework for semantic segmentation, introducing additional losses based on prior knowledge.
Experiments performed on our "concrete aggregate dataset" demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-04-22T13:01:28Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong
Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques collectively yields significant improvements in semi-supervised semantic segmentation.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin-preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin-preserving contrastive loss is proposed to boost the discriminability of the embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Self-paced and self-consistent co-training for semi-supervised image
segmentation [23.100800154116627]
Deep co-training has been proposed as an effective approach for image segmentation when annotated data is scarce.
In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method.
arXiv Detail & Related papers (2020-10-31T17:41:03Z)
- Structured Consistency Loss for semi-supervised semantic segmentation [1.4146420810689415]
Consistency losses have played a key role in recent studies on semi-supervised learning.
We propose a structured consistency loss to address the limitations of existing studies (a generic sketch of a plain consistency loss is given after this entry).
We are the first to demonstrate the superiority of state-of-the-art semi-supervised learning in semantic segmentation.
arXiv Detail & Related papers (2020-01-14T07:08:45Z)
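The consistency loss mentioned in the last entry is simple enough to illustrate. The sketch below is a generic per-pixel prediction-consistency term between two differently perturbed views of the same unlabeled image (for example, a student and an EMA teacher); it is not the structured variant proposed in that paper, which additionally models relations between pixels. Names and the KL formulation are assumptions.

```python
# Generic prediction-consistency loss for semi-supervised segmentation.
# A plain per-pixel term; structured variants additionally model pairwise
# relations between pixels (not shown here).
import torch.nn.functional as F


def consistency_loss(student_logits, teacher_logits):
    """Encourage agreement between two predictions of the same unlabeled image.

    student_logits, teacher_logits: (N, C, H, W) from two differently
    perturbed views (or from a student and an EMA teacher).
    """
    teacher_prob = teacher_logits.detach().softmax(dim=1)   # stop gradients through the target
    student_logp = student_logits.log_softmax(dim=1)
    # Per-pixel KL divergence between teacher and student distributions,
    # averaged over all pixels and samples.
    kl = (teacher_prob * (teacher_prob.clamp_min(1e-8).log() - student_logp)).sum(dim=1)
    return kl.mean()
```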