Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2210.01578v1
- Date: Tue, 4 Oct 2022 13:03:17 GMT
- Title: Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation
- Authors: Yangsong Zhang, Subhankar Roy, Hongtao Lu, Elisa Ricci, Stéphane Lathuilière
- Abstract summary: We propose a self-training strategy that employs pseudo-labels to induce cooperation among multiple domain-specific classifiers.
We employ feature stylization as an efficient way to generate image views that form an integral part of self-training.
- Score: 26.79776306494929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we address multi-target domain adaptation (MTDA) in semantic
segmentation, which consists of adapting a single model from an annotated
source dataset to multiple unannotated target datasets that differ in their
underlying data distributions. To address MTDA, we propose a self-training
strategy that employs pseudo-labels to induce cooperation among multiple
domain-specific classifiers. We employ feature stylization as an efficient way
to generate image views that form an integral part of self-training.
Additionally, to prevent the network from overfitting to noisy pseudo-labels,
we devise a rectification strategy that leverages the predictions from
different classifiers to estimate the quality of pseudo-labels. Our extensive
experiments on numerous settings, based on four different semantic segmentation
datasets, validate the effectiveness of the proposed self-training strategy and
show that our method outperforms state-of-the-art MTDA approaches. Code
available at: https://github.com/Mael-zys/CoaST
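The cooperative rectification idea from the abstract can be sketched in a few lines. This is a hypothetical simplification, not the paper's exact formulation: the function name, the confidence threshold, and the simple averaging/agreement rule are all assumptions. The core idea it illustrates is that pseudo-labels are kept only where multiple domain-specific classifier heads agree with high confidence, so that disagreement between classifiers serves as an estimate of pseudo-label quality.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def rectified_pseudo_labels(logits_a, logits_b, threshold=0.9):
    """Combine two domain-specific classifier heads into rectified pseudo-labels.

    logits_a, logits_b: (N, C, H, W) raw outputs of two classifier heads.
    Returns hard labels of shape (N, H, W), with unreliable pixels set to -1
    (an ignore index the training loss can skip).
    """
    prob_a = softmax(logits_a, axis=1)
    prob_b = softmax(logits_b, axis=1)
    # Average the heads: each classifier "cooperates" by voting on the label.
    prob = 0.5 * (prob_a + prob_b)
    conf = prob.max(axis=1)
    labels = prob.argmax(axis=1)
    # Agreement between heads acts as a quality estimate for the pseudo-label.
    agree = prob_a.argmax(axis=1) == prob_b.argmax(axis=1)
    keep = agree & (conf > threshold)
    return np.where(keep, labels, -1)
```

In a self-training loop, the returned map would supervise the segmentation loss on unlabeled target images, with -1 pixels excluded from the loss.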
Related papers
- IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose a Domain Informed Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method outperforms the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
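The ECS-driven schedule described above can be sketched as follows. Both functions are assumptions for illustration only (the paper's exact EMA rule and mixing schedule are not given in the summary): a per-class expected confidence score is tracked as a running average, and classes with lower ECS receive a larger share of cross-domain mixed data.

```python
import numpy as np

def update_ecs(ecs, batch_conf, momentum=0.9):
    """Track a per-class expected confidence score (ECS) as an exponential
    moving average of the model's mean prediction confidence for that class."""
    return momentum * ecs + (1.0 - momentum) * batch_conf

def mixing_ratio(ecs, lo=0.1, hi=0.9):
    """Dynamic schedule: classes with low ECS (poorly adapted so far) get a
    larger mixing ratio; well-adapted classes get a smaller one."""
    ecs = np.clip(ecs, 0.0, 1.0)
    return lo + (hi - lo) * (1.0 - ecs)
```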
arXiv Detail & Related papers (2023-03-05T18:16:34Z)
- Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based Weighting for Semantic Segmentation [2.127049691404299]
This paper describes a method of domain adaptive training for semantic segmentation using multiple source datasets.
We propose a soft pseudo-label generation method by integrating predicted object probabilities from multiple source models.
arXiv Detail & Related papers (2023-03-02T05:20:36Z)
- Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that achieves performance equal to that of state-of-the-art supervised methods on 7 benchmark datasets.
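Replacing class labels with description embeddings reduces prediction to a nearest-embedding lookup. A hypothetical sketch (function name and cosine-similarity scoring are assumptions; the paper's actual matching rule may differ):

```python
import numpy as np

def predict_with_label_embeddings(pixel_feats, class_embeddings):
    """Score each pixel feature against vector embeddings of the class
    descriptions instead of a fixed classifier layer, enabling zero-shot
    transfer to datasets whose label sets were never seen in training.

    pixel_feats: (N, D) pixel features; class_embeddings: (C, D).
    Returns predicted class ids of shape (N,).
    """
    a = pixel_feats / np.linalg.norm(pixel_feats, axis=1, keepdims=True)
    b = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    return (a @ b.T).argmax(axis=1)  # cosine similarity, best match per pixel
```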
arXiv Detail & Related papers (2022-02-04T07:19:09Z)
- Adapting Segmentation Networks to New Domains by Disentangling Latent Representations [14.050836886292869]
Domain adaptation approaches have come into play to transfer knowledge acquired on a label-abundant source domain to a related label-scarce target domain.
We propose a novel performance metric to capture the relative efficacy of an adaptation strategy compared to supervised training.
arXiv Detail & Related papers (2021-08-06T09:43:07Z)
- Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed as Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel level embeddings via cross dataset mixing.
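A pixel-to-prototype contrastive loss can be sketched as an InfoNCE-style objective that pulls each pixel embedding toward its class prototype and pushes it away from the others. This is an illustrative assumption, not the paper's exact loss:

```python
import numpy as np

def pixel_to_prototype_loss(embeddings, labels, prototypes, tau=0.1):
    """InfoNCE-style contrastive loss between pixel embeddings and class
    prototypes.

    embeddings: (N, D) L2-normalized pixel features.
    labels: (N,) integer class ids.
    prototypes: (C, D) L2-normalized per-class prototype vectors.
    tau: temperature controlling the sharpness of the distribution.
    """
    logits = embeddings @ prototypes.T / tau          # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each pixel's own class prototype.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

The loss is near zero when every pixel embedding already matches its class prototype, and grows as pixels drift toward the wrong prototype.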
arXiv Detail & Related papers (2021-06-08T06:13:11Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help to alleviate multiple domain-shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Multi-Source Domain Adaptation with Collaborative Learning for Semantic Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z)
- Learn by Guessing: Multi-Step Pseudo-Label Refinement for Person Re-Identification [0.0]
A promising approach relies on the use of unsupervised learning as part of the pipeline.
In this work, we propose a multi-step pseudo-label refinement method to select the best possible clusters.
We surpass the state of the art for UDA Re-ID by 3.4% on the Market1501-DukeMTMC datasets.
arXiv Detail & Related papers (2021-01-04T20:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.