Focus on Your Target: A Dual Teacher-Student Framework for
Domain-adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2303.09083v1
- Date: Thu, 16 Mar 2023 05:04:10 GMT
- Title: Focus on Your Target: A Dual Teacher-Student Framework for
Domain-adaptive Semantic Segmentation
- Authors: Xinyue Huo, Lingxi Xie, Wengang Zhou, Houqiang Li, Qi Tian
- Abstract summary: We study unsupervised domain adaptation (UDA) for semantic segmentation.
We find that, by decreasing/increasing the proportion of training samples from the target domain, the 'learning ability' is strengthened/weakened.
We propose a novel dual teacher-student (DTS) framework and equip it with a bidirectional learning strategy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study unsupervised domain adaptation (UDA) for semantic segmentation.
Currently, a popular UDA framework lies in self-training which endows the model
with two-fold abilities: (i) learning reliable semantics from the labeled
images in the source domain, and (ii) adapting to the target domain via
generating pseudo labels on the unlabeled images. We find that, by
decreasing/increasing the proportion of training samples from the target
domain, the 'learning ability' is strengthened/weakened while the 'adapting
ability' goes in the opposite direction, implying a conflict between these two
abilities, especially for a single model. To alleviate the issue, we propose a
novel dual teacher-student (DTS) framework and equip it with a bidirectional
learning strategy. By increasing the proportion of target-domain data, the
second teacher-student model learns to 'Focus on Your Target' while the first
model is not affected. DTS is easily plugged into existing self-training
approaches. In a standard UDA scenario (training on synthetic, labeled data and
real, unlabeled data), DTS shows consistent gains over the baselines and sets
new state-of-the-art results of 76.5\% and 75.1\% mIoUs on
GTAv$\rightarrow$Cityscapes and SYNTHIA$\rightarrow$Cityscapes, respectively.
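The core mechanism described in the abstract (two teacher-student pairs trained on mixed batches with different target-domain proportions, with teachers updated as exponential moving averages of their students) can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation; the names `ema_update` and `sample_batch`, the momentum value, and the mixing ratios are all hypothetical.

```python
import random

def ema_update(teacher, student, momentum=0.999):
    """EMA update of teacher weights from the student.
    Weights are plain lists of floats here; in practice they would be tensors."""
    return [momentum * t + (1.0 - momentum) * s for t, s in zip(teacher, student)]

def sample_batch(source, target, target_ratio, batch_size, rng):
    """Draw a mixed batch; target_ratio controls the share of target-domain samples.
    Per the abstract, a lower ratio strengthens 'learning ability' while a
    higher ratio strengthens 'adapting ability'."""
    n_target = round(batch_size * target_ratio)
    batch = rng.sample(target, n_target) + rng.sample(source, batch_size - n_target)
    rng.shuffle(batch)
    return batch

# Bidirectional setup: the first pair sees a balanced mix, the second
# 'Focuses on Your Target' with a larger target-domain share.
rng = random.Random(0)
source = [("src", i) for i in range(100)]
target = [("tgt", i) for i in range(100)]
batch_first = sample_batch(source, target, 0.5, 8, rng)   # learning-oriented pair
batch_second = sample_batch(source, target, 0.75, 8, rng)  # target-focused pair
```

Each pair would train its student on its own batch mix and refresh its teacher with `ema_update` every step; the pseudo-labels for unlabeled target images come from the teachers.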
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
UDA enables models to learn from unlabeled target domain data while training on labeled source domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-10-17T11:59:39Z)
- Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation [80.1412989006262]
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation.
arXiv Detail & Related papers (2023-05-23T07:09:09Z)
- Contrastive Mean Teacher for Domain Adaptive Object Detectors [20.06919799819326]
Mean-teacher self-training is a powerful paradigm in unsupervised domain adaptation for object detection, but it struggles with low-quality pseudo-labels.
We propose Contrastive Mean Teacher (CMT) -- a unified, general-purpose framework with the two paradigms naturally integrated to maximize beneficial learning signals.
CMT leads to new state-of-the-art target-domain performance: 51.9% mAP on Foggy Cityscapes, outperforming the previous best by 2.1% mAP.
arXiv Detail & Related papers (2023-05-04T17:55:17Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
We focus on generating target-specific pseudo labels while suppressing high entropy regions.
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Teacher-Student Consistency For Multi-Source Domain Adaptation [28.576613317253035]
In Multi-Source Domain Adaptation (MSDA), models are trained on samples from multiple source domains and used for inference on a different, target, domain.
We propose Multi-source Student Teacher (MUST), a novel procedure designed to alleviate these issues.
arXiv Detail & Related papers (2020-10-20T06:17:40Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
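Several of the related papers above (Contrastive Mean Teacher; the contrastive self-training work with a memory-efficient temporal ensemble) generate pseudo-labels from EMA-smoothed predictions rather than a single forward pass. A minimal sketch of that idea, with hypothetical names (`temporal_ensemble`, `pseudo_label`) and an assumed confidence threshold:

```python
def temporal_ensemble(prob_history, new_probs, alpha=0.6):
    """EMA of a sample's class probabilities across training steps.
    Only the running average is stored, so memory stays constant
    regardless of how many past predictions contributed."""
    if prob_history is None:
        return list(new_probs)
    return [alpha * h + (1.0 - alpha) * p for h, p in zip(prob_history, new_probs)]

def pseudo_label(probs, threshold=0.9):
    """Return the argmax class if confident enough, else None (ignored in the loss)."""
    best = max(range(len(probs)), key=lambda c: probs[c])
    return best if probs[best] >= threshold else None

# Two noisy predictions for the same pixel are smoothed before labeling.
hist = temporal_ensemble(None, [0.2, 0.8])
hist = temporal_ensemble(hist, [0.0, 1.0])  # smoothed toward class 1
label = pseudo_label(hist)  # still below threshold, so the pixel is ignored
```

Thresholding the smoothed probabilities, rather than any single prediction, is what keeps the pseudo-labels consistent across iterations.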
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.