The Last Mile to Supervised Performance: Semi-Supervised Domain Adaptation for Semantic Segmentation
- URL: http://arxiv.org/abs/2411.18728v1
- Date: Wed, 27 Nov 2024 20:07:42 GMT
- Title: The Last Mile to Supervised Performance: Semi-Supervised Domain Adaptation for Semantic Segmentation
- Authors: Daniel Morales-Brotons, Grigorios Chrysos, Stratis Tzoumas, Volkan Cevher
- Abstract summary: We study the promising setting of Semi-Supervised Domain Adaptation (SSDA).
We propose a simple SSDA framework that combines consistency regularization, pixel contrastive learning, and self-training to effectively utilize a few target-domain labels.
Our method outperforms prior art on the popular GTA-to-Cityscapes benchmark and shows that as few as 50 target labels can suffice to achieve near-supervised performance.
- Abstract: Supervised deep learning requires massive labeled datasets, but obtaining annotations is not always easy or possible, especially for dense tasks like semantic segmentation. To overcome this issue, numerous works explore Unsupervised Domain Adaptation (UDA), which uses a labeled dataset from another domain (source), or Semi-Supervised Learning (SSL), which trains on a partially labeled set. Despite the success of UDA and SSL, reaching supervised performance at a low annotation cost remains a notoriously elusive goal. To address this, we study the promising setting of Semi-Supervised Domain Adaptation (SSDA). We propose a simple SSDA framework that combines consistency regularization, pixel contrastive learning, and self-training to effectively utilize a few target-domain labels. Our method outperforms prior art on the popular GTA-to-Cityscapes benchmark and shows that as few as 50 target labels can suffice to achieve near-supervised performance. Additional results on Synthia-to-Cityscapes, GTA-to-BDD and Synthia-to-BDD further demonstrate the effectiveness and practical utility of the method. Lastly, we find that existing UDA and SSL methods are not well-suited for the SSDA setting and discuss design patterns to adapt them.
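To make the self-training component of the abstract concrete, here is a minimal, hedged sketch of per-pixel pseudo-labeling with confidence thresholding: a pixel keeps the predicted class only when the model is confident, and is otherwise assigned an ignore index so it is excluded from the loss. The function name, the threshold value, and the use of plain Python lists are illustrative assumptions, not details from the paper.

```python
# Illustrative self-training step: threshold per-pixel confidences to
# produce pseudo-labels; low-confidence pixels get an ignore index.
IGNORE_INDEX = 255  # convention used by Cityscapes-style datasets

def pseudo_labels(probs, threshold=0.9):
    """probs: per-pixel class probabilities as a [H][W][C] nested list."""
    labels = []
    for row in probs:
        label_row = []
        for pixel in row:
            conf = max(pixel)
            cls = pixel.index(conf)
            label_row.append(cls if conf >= threshold else IGNORE_INDEX)
        labels.append(label_row)
    return labels

# Toy 1x2 "image" with 3 classes: first pixel confident, second not.
print(pseudo_labels([[[0.05, 0.92, 0.03], [0.4, 0.35, 0.25]]]))  # [[1, 255]]
```

In a real pipeline the ignore index would be passed to the segmentation loss (e.g. cross-entropy with `ignore_index`) so uncertain pixels contribute no gradient.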
Related papers
- SS-ADA: A Semi-Supervised Active Domain Adaptation Framework for Semantic Segmentation [25.929173344653158]
We propose a novel semi-supervised active domain adaptation (SS-ADA) framework for semantic segmentation.
SS-ADA integrates active learning into semi-supervised semantic segmentation to achieve the accuracy of supervised learning.
We conducted extensive experiments on synthetic-to-real and real-to-real domain adaptation settings.
arXiv Detail & Related papers (2024-06-17T13:40:42Z)
- ECAP: Extensive Cut-and-Paste Augmentation for Unsupervised Domain Adaptive Semantic Segmentation [4.082799056366928]
We propose an extensive cut-and-paste strategy (ECAP) to leverage reliable pseudo-labels through data augmentation.
ECAP maintains a memory bank of pseudo-labeled target samples throughout training and cut-and-pastes the most confident ones onto the current training batch.
We implement ECAP on top of the recent method MIC and boost its performance on two synthetic-to-real domain adaptation benchmarks.
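The memory-bank-plus-pasting idea described above can be sketched roughly as follows: keep the most confident pseudo-labeled target samples in a small bank, then paste their labeled pixels onto the current training image. The data structures, the scalar confidence score, and the "paste every labeled pixel" rule are simplified assumptions for illustration, not ECAP's actual implementation.

```python
# Sketch of a confidence-ranked memory bank and a cut-and-paste step.
import heapq

class MemoryBank:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.heap = []  # min-heap of (confidence, counter, image, label)
        self._counter = 0  # tie-breaker so lists are never compared

    def push(self, confidence, image, label):
        item = (confidence, self._counter, image, label)
        self._counter += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        else:
            heapq.heappushpop(self.heap, item)  # evict least confident

    def most_confident(self):
        _, _, image, label = max(self.heap)
        return image, label

def cut_and_paste(batch_img, batch_lbl, bank, ignore=255):
    src_img, src_lbl = bank.most_confident()
    out_img = [row[:] for row in batch_img]
    out_lbl = [row[:] for row in batch_lbl]
    for i, row in enumerate(src_lbl):
        for j, lbl in enumerate(row):
            if lbl != ignore:  # paste only confidently labeled pixels
                out_img[i][j] = src_img[i][j]
                out_lbl[i][j] = lbl
    return out_img, out_lbl
```

Evicting the least confident sample keeps the bank biased toward reliable pseudo-labels, which is the property the summary above attributes to ECAP.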
arXiv Detail & Related papers (2024-03-06T17:06:07Z)
- SAM4UDASS: When SAM Meets Unsupervised Domain Adaptive Semantic Segmentation in Intelligent Vehicles [27.405213492173186]
We introduce SAM4UDASS, a novel approach that incorporates the Segment Anything Model (SAM) into self-training UDA methods for refining pseudo-labels.
It involves Semantic-Guided Mask Labeling, which assigns semantic labels to unlabeled SAM masks using UDA pseudo-labels.
It brings more than 3% mIoU gains on GTA5-to-Cityscapes, SYNTHIA-to-Cityscapes, and Cityscapes-to-ACDC when using DAFormer and achieves SOTA when using MIC.
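A plausible minimal reading of Semantic-Guided Mask Labeling, as summarized above, is a majority vote: each unlabeled SAM mask receives the most frequent UDA pseudo-label among the pixels it covers. The function name and the exact voting rule are hypothetical; the paper may use a different assignment strategy.

```python
# Assign each SAM mask the majority class of the pseudo-labels it covers.
from collections import Counter

def label_sam_masks(masks, pseudo_labels, ignore=255):
    """masks: list of sets of (row, col) pixel coordinates.
    pseudo_labels: 2-D grid of class ids (ignore = unlabeled)."""
    mask_labels = []
    for mask in masks:
        votes = Counter(
            pseudo_labels[i][j]
            for (i, j) in mask
            if pseudo_labels[i][j] != ignore
        )
        # A mask covering only ignored pixels stays unlabeled.
        mask_labels.append(votes.most_common(1)[0][0] if votes else ignore)
    return mask_labels
```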
arXiv Detail & Related papers (2023-11-22T08:29:45Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our method is highly competitive even with the fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- Bi-level Alignment for Cross-Domain Crowd Counting [113.78303285148041]
Current methods rely on external data for training an auxiliary task or apply an expensive coarse-to-fine estimation.
We develop a new adversarial learning based method, which is simple and efficient to apply.
We evaluate our approach on five real-world crowd counting benchmarks, where we outperform existing approaches by a large margin.
arXiv Detail & Related papers (2022-05-12T02:23:25Z)
- Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z)
- A Simple Baseline for Semi-supervised Semantic Segmentation with Strong Data Augmentation [74.8791451327354]
We propose a simple yet effective semi-supervised learning framework for semantic segmentation.
A set of simple design and training techniques can collectively improve the performance of semi-supervised semantic segmentation significantly.
Our method achieves state-of-the-art results in the semi-supervised settings on the Cityscapes and Pascal VOC datasets.
arXiv Detail & Related papers (2021-04-15T06:01:39Z)
- Semi-supervised Domain Adaptation based on Dual-level Domain Mixing for Semantic Segmentation [34.790169990156684]
We focus on a more practical setting of semi-supervised domain adaptation (SSDA) where both a small set of labeled target data and large amounts of labeled source data are available.
Two data mixing methods are proposed to reduce the domain gap at the region level and the sample level, respectively.
We can obtain two complementary domain-mixed teachers based on dual-level mixed data from holistic and partial views respectively.
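The region-level half of the dual-level mixing described above can be sketched as a CutMix-style operation: a rectangular region from a labeled source image (and its label map) is copied into a target image, so one training sample contains pixels from both domains. The fixed box argument and function signature are illustrative assumptions; the paper's mixing strategy may differ.

```python
# Region-level domain mixing: copy a source rectangle into a target image.
def region_mix(src_img, src_lbl, tgt_img, tgt_lbl, box):
    """box = (top, left, height, width): region taken from the source."""
    top, left, h, w = box
    out_img = [row[:] for row in tgt_img]
    out_lbl = [row[:] for row in tgt_lbl]
    for i in range(top, top + h):
        for j in range(left, left + w):
            out_img[i][j] = src_img[i][j]
            out_lbl[i][j] = src_lbl[i][j]
    return out_img, out_lbl
```

Sample-level mixing would instead combine whole images (e.g. by interleaving source and target samples in a batch), giving the two complementary "holistic" and "partial" views the summary mentions.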
arXiv Detail & Related papers (2021-03-08T12:33:17Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.