Region-level Contrastive and Consistency Learning for Semi-Supervised
Semantic Segmentation
- URL: http://arxiv.org/abs/2204.13314v1
- Date: Thu, 28 Apr 2022 07:22:47 GMT
- Title: Region-level Contrastive and Consistency Learning for Semi-Supervised
Semantic Segmentation
- Authors: Jianrong Zhang, Tianyi Wu, Chuanghao Ding, Hongwei Zhao and Guodong
Guo
- Abstract summary: We propose a novel region-level contrastive and consistency learning framework (RC^2L) for semi-supervised semantic segmentation.
Specifically, we first propose a Region Mask Contrastive (RMC) loss and a Region Feature Contrastive (RFC) loss to achieve region-level contrast.
Region Class Consistency (RCC) and Semantic Mask Consistency (SMC) losses are further proposed to achieve region-level consistency; together, these losses form the RC^2L framework for semi-supervised semantic segmentation.
- Score: 30.1884540364192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current semi-supervised semantic segmentation methods mainly focus
on designing pixel-level consistency and contrastive regularization. However,
pixel-level regularization is sensitive to noise from incorrectly predicted
pixels, and pixel-level contrastive regularization incurs memory and
computational costs of O(pixel_num^2). To address these issues, we propose a
novel region-level contrastive and consistency learning framework (RC^2L) for
semi-supervised semantic segmentation. Specifically, we first propose a Region
Mask Contrastive (RMC) loss and a Region Feature Contrastive (RFC) loss to
achieve region-level contrast. Furthermore, a Region Class Consistency (RCC)
loss and a Semantic Mask Consistency (SMC) loss are proposed to achieve
region-level consistency. Building on these region-level contrastive and
consistency regularizations, we develop the RC^2L framework and evaluate it on
two challenging benchmarks (PASCAL VOC 2012 and Cityscapes), where it
outperforms the state-of-the-art.
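The abstract does not spell out the exact form of the RMC and RFC losses, but the core idea of contrasting pooled region representations instead of individual pixels can be sketched as follows. Everything in this sketch (mask-weighted average pooling, the InfoNCE-style formulation, the temperature, and the tensor shapes) is an illustrative assumption rather than the authors' implementation; it only shows why the contrastive cost drops from O(pixel_num^2) to O(region_num^2).

```python
# Minimal sketch, not the authors' code: region-level contrast built from
# mask-weighted average pooling over per-pixel features. All shapes, the
# temperature, and the InfoNCE form are illustrative assumptions.
import torch
import torch.nn.functional as F

def region_features(feats, masks, eps=1e-6):
    """Pool per-pixel features into one vector per predicted region.

    feats: [B, C, H, W] pixel embeddings
    masks: [B, K, H, W] soft region masks (e.g., softmax class scores)
    returns: [B, K, C] region features
    """
    m = masks.flatten(2)                              # [B, K, H*W]
    f = feats.flatten(2).transpose(1, 2)              # [B, H*W, C]
    pooled = torch.bmm(m, f)                          # [B, K, C]
    return pooled / (m.sum(dim=2, keepdim=True) + eps)

def region_contrastive_loss(regions_a, regions_b, tau=0.1):
    """InfoNCE over K regions: the same region in two views is a positive pair.

    Cost per image is O(K^2) instead of O((H*W)^2) for pixel-level contrast.
    """
    a = F.normalize(regions_a, dim=-1)                # [B, K, C]
    b = F.normalize(regions_b, dim=-1)                # [B, K, C]
    logits = torch.bmm(a, b.transpose(1, 2)) / tau    # [B, K, K]
    target = torch.arange(logits.shape[1], device=logits.device)
    target = target.repeat(logits.shape[0])           # [B*K]
    return F.cross_entropy(logits.flatten(0, 1), target)

# Usage: features and predicted masks from two augmented views of one image.
feats_a, feats_b = torch.randn(2, 256, 64, 64), torch.randn(2, 256, 64, 64)
masks_a = torch.softmax(torch.randn(2, 21, 64, 64), dim=1)
masks_b = torch.softmax(torch.randn(2, 21, 64, 64), dim=1)
loss = region_contrastive_loss(region_features(feats_a, masks_a),
                               region_features(feats_b, masks_b))
```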
Related papers
- Region-aware Distribution Contrast: A Novel Approach to Multi-Task Partially Supervised Learning [50.88504784466931]
Multi-task dense prediction involves semantic segmentation, depth estimation, and surface normal estimation.
Existing solutions typically rely on learning global image representations for global cross-task image matching.
Our proposal involves modeling region-wise representations using Gaussian distributions.
arXiv Detail & Related papers (2024-03-15T12:41:30Z) - Progressive Feature Self-reinforcement for Weakly Supervised Semantic
Segmentation [55.69128107473125]
We propose a single-stage approach for Weakly Supervised Semantic Segmentation (WSSS) with image-level labels.
We adaptively partition the image content into deterministic regions (e.g., confident foreground and background) and uncertain regions (e.g., object boundaries and misclassified categories) for separate processing.
Building upon this, we introduce a complementary self-enhancement method that constrains the semantic consistency between these confident regions and an augmented image with the same class labels.
arXiv Detail & Related papers (2023-12-14T13:21:52Z) - Associating Spatially-Consistent Grouping with Text-supervised Semantic
Segmentation [117.36746226803993]
We introduce self-supervised spatially-consistent grouping with text-supervised semantic segmentation.
Considering the part-like grouped results, we further adapt a text-supervised model from image-level to region-level recognition.
Our method achieves 59.2% mIoU and 32.4% mIoU on the Pascal VOC and Pascal Context benchmarks, respectively.
arXiv Detail & Related papers (2023-04-03T16:24:39Z) - Dense Siamese Network [86.23741104851383]
We present Dense Siamese Network (DenseSiam), a simple unsupervised learning framework for dense prediction tasks.
It learns visual representations by maximizing the similarity between two views of one image with two types of consistency, i.e., pixel consistency and region consistency.
It surpasses state-of-the-art segmentation methods by 2.1 mIoU with only 28% of the training cost.
arXiv Detail & Related papers (2022-03-21T15:55:23Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of semi-supervised learning (SSL) and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Domain Adaptive Semantic Segmentation with Regional Contrastive
Consistency Regularization [19.279884432843822]
We propose a novel and fully end-to-end trainable approach, called regional contrastive consistency regularization (RCCR) for domain adaptive semantic segmentation.
Our core idea is to pull regional features extracted from the same location of two images closer together, while pushing apart features taken from different locations of the two images (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-10-11T11:45:00Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Contextual-Relation Consistent Domain Adaptation for Semantic
Segmentation [44.19436340246248]
This paper presents an innovative local contextual-relation consistent domain adaptation technique.
It aims to achieve local-level consistencies during the global-level alignment.
Experiments demonstrate its superior segmentation performance as compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-07-05T19:00:46Z) - Structured Consistency Loss for semi-supervised semantic segmentation [1.4146420810689415]
The consistency loss has played a key role in solving problems in recent studies on semi-supervised learning.
We propose a structured consistency loss to address this limitation of extant studies.
We are the first to demonstrate the superiority of state-of-the-art semi-supervised learning in semantic segmentation.
arXiv Detail & Related papers (2020-01-14T07:08:45Z)
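The regional contrastive consistency idea from the RCCR entry above can be illustrated with a small, self-contained sketch: features pooled from the same grid location of two augmented views are treated as positives, and features from other locations as negatives. The grid pooling, temperature, and loss form are assumptions made for illustration; the RCCR paper defines its own regions and objective.

```python
# Illustrative sketch only (grid-pooled regions are an assumption, not the
# RCCR definition): contrast region features by spatial location across views.
import torch
import torch.nn.functional as F

def grid_regions(feats, grid=4):
    """Average-pool a feature map [B, C, H, W] into [B, grid*grid, C] region vectors."""
    pooled = F.adaptive_avg_pool2d(feats, grid)       # [B, C, grid, grid]
    return pooled.flatten(2).transpose(1, 2)          # [B, grid*grid, C]

def location_contrast(feats_a, feats_b, grid=4, tau=0.2):
    """Same grid cell across the two views = positive; every other cell = negative."""
    ra = F.normalize(grid_regions(feats_a, grid), dim=-1)   # [B, R, C]
    rb = F.normalize(grid_regions(feats_b, grid), dim=-1)   # [B, R, C]
    logits = torch.bmm(ra, rb.transpose(1, 2)) / tau        # [B, R, R]
    target = torch.arange(logits.shape[1], device=logits.device).repeat(logits.shape[0])
    return F.cross_entropy(logits.flatten(0, 1), target)

# Usage: decoder features of the same image under two different augmentations.
loss = location_contrast(torch.randn(2, 128, 32, 32), torch.randn(2, 128, 32, 32))
```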