Structured Consistency Loss for semi-supervised semantic segmentation
- URL: http://arxiv.org/abs/2001.04647v2
- Date: Mon, 22 Nov 2021 04:22:49 GMT
- Title: Structured Consistency Loss for semi-supervised semantic segmentation
- Authors: Jongmok Kim, Jooyoung Jang, Hyunwoo Park, SeongAh Jeong
- Abstract summary: The consistency loss has played a key role in solving problems in recent studies on semi-supervised learning.
We propose a structured consistency loss to address this limitation of extant studies.
We are the first to demonstrate the superiority of state-of-the-art semi-supervised learning in semantic segmentation.
- Score: 1.4146420810689415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The consistency loss has played a key role in recent studies on
semi-supervised learning. Yet extant studies with the consistency loss are
limited to classification tasks, and extant studies on semi-supervised
semantic segmentation rely on pixel-wise classification, which does not
reflect the structured nature of the predictions. We propose a structured
consistency loss to address this limitation. The structured consistency loss
promotes consistency in inter-pixel similarity between the teacher and
student networks. Moreover, combining the structured consistency loss with
CutMix dramatically reduces its computational burden and enables efficient
semi-supervised semantic segmentation. The superiority of the proposed
method is verified on Cityscapes: the benchmark results on the validation
and test sets are 81.9 mIoU and 83.84 mIoU, respectively, ranking first on
the pixel-level semantic labeling task of the Cityscapes benchmark suite. To
the best of our knowledge, we are the first to demonstrate the superiority
of state-of-the-art semi-supervised learning in semantic segmentation.
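The abstract describes the structured consistency loss only at a high level, so the snippet below is a minimal, hypothetical sketch of the general idea: build the inter-pixel (pairwise) similarity matrix of the student's predictions and of the teacher's predictions, then penalize the discrepancy between the two. The function name, the use of cosine similarity, the mean-squared-error penalty, and the optional pixel subsampling are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an inter-pixel structured consistency loss.
# Assumes a teacher-student setup where both networks output dense
# feature maps or logits of shape (B, C, H, W).
import torch
import torch.nn.functional as F


def structured_consistency_loss(student_feats, teacher_feats, max_pixels=None):
    """Penalize disagreement between the inter-pixel similarity matrices
    of the student and teacher predictions (illustrative formulation)."""
    # Flatten spatial dimensions: (B, C, H, W) -> (B, N, C), with N = H * W.
    s = student_feats.flatten(2).transpose(1, 2)
    t = teacher_feats.flatten(2).transpose(1, 2)

    # Optionally subsample pixels to cap the N x N cost; the paper instead
    # reduces the burden by pairing the loss with CutMix (per the abstract).
    if max_pixels is not None and s.shape[1] > max_pixels:
        idx = torch.randperm(s.shape[1], device=s.device)[:max_pixels]
        s, t = s[:, idx], t[:, idx]

    # L2-normalize features so the Gram matrices hold cosine similarities.
    s = F.normalize(s, dim=-1)
    t = F.normalize(t, dim=-1)

    # (B, N, N) pairwise inter-pixel similarity matrices.
    sim_s = torch.bmm(s, s.transpose(1, 2))
    sim_t = torch.bmm(t, t.transpose(1, 2))

    # Consistency: the student's pixel-pair structure should match the
    # teacher's; the teacher is treated as a fixed target.
    return F.mse_loss(sim_s, sim_t.detach())
```

Because the similarity matrix is quadratic in the number of pixels, restricting which pixels enter it, for example to the regions mixed by CutMix as the abstract suggests, is what keeps the computation tractable at Cityscapes resolutions.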
Related papers
- SemSim: Revisiting Weak-to-Strong Consistency from a Semantic Similarity Perspective for Semi-supervised Medical Image Segmentation [18.223854197580145]
Semi-supervised learning (SSL) for medical image segmentation is a challenging yet highly practical task.
We propose a novel framework based on FixMatch, named SemSim, powered by two appealing designs from semantic similarity perspective.
We show that SemSim yields consistent improvements over the state-of-the-art methods across three public segmentation benchmarks.
arXiv Detail & Related papers (2024-10-17T12:31:37Z)
- Affinity-Graph-Guided Contractive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation [55.325956390997]
This paper proposes an affinity-graph-guided semi-supervised contrastive learning framework (Semi-AGCL) for medical image segmentation.
The framework first designs an average-patch-entropy-driven inter-patch sampling method, which can provide a robust initial feature space.
With merely 10% of the complete annotation set, our model approaches the accuracy of the fully annotated baseline, manifesting a marginal deviation of only 2.52%.
arXiv Detail & Related papers (2024-10-14T10:44:47Z)
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
- Semi-Supervised Confidence-Level-based Contrastive Discrimination for Class-Imbalanced Semantic Segmentation [1.713291434132985]
We propose a semi-supervised contrastive learning framework for class-imbalanced semantic segmentation.
Our proposed method provides satisfactory segmentation results with as little as 3.5% labeled data.
arXiv Detail & Related papers (2022-11-28T04:58:27Z)
- Region-level Contrastive and Consistency Learning for Semi-Supervised Semantic Segmentation [30.1884540364192]
We propose a novel region-level contrastive and consistency learning framework (RC2L) for semi-supervised semantic segmentation.
Specifically, we first propose a Region Mask Contrastive (RMC) loss and a Region Feature Contrastive (RFC) loss to enforce region-level contrastive properties.
Based on these region-level contrastive and consistency regularization terms, RC2L is developed for semi-supervised semantic segmentation.
arXiv Detail & Related papers (2022-04-28T07:22:47Z)
- Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing [88.76112371510999]
Federated learning is a prime candidate for distributed machine learning at the network edge.
Existing algorithms face issues with slow convergence and/or robustness of performance.
We propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction.
arXiv Detail & Related papers (2022-03-23T21:42:31Z)
- Adversarial Dual-Student with Differentiable Spatial Warping for Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves the performance and state-of-the-art results are achieved on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- A Weakly-Supervised Semantic Segmentation Approach based on the Centroid Loss: Application to Quality Control and Inspection [6.101839518775968]
We propose and assess a new weakly-supervised semantic segmentation approach making use of a novel loss function.
The performance of the approach is evaluated against datasets from two different industry-related case studies.
arXiv Detail & Related papers (2020-10-26T09:08:21Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
arXiv Detail & Related papers (2020-09-08T21:55:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.