Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain
Adaptive Semantic Segmentation
- URL: http://arxiv.org/abs/2012.08278v1
- Date: Tue, 15 Dec 2020 13:21:54 GMT
- Title: Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain
Adaptive Semantic Segmentation
- Authors: Rui Gong, Yuhua Chen, Danda Pani Paudel, Yawei Li, Ajad Chhatkuli, Wen
Li, Dengxin Dai, Luc Van Gool
- Abstract summary: We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online with the model-agnostic meta-learning (MAML) algorithm, further improving generalization.
- Score: 102.42638795864178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open compound domain adaptation (OCDA) is a domain adaptation
setting in which the target domain is modeled as a compound of multiple
unknown homogeneous domains,
which brings the advantage of improved generalization to unseen domains. In
this work, we propose a principled meta-learning based approach to OCDA for
semantic segmentation, MOCDA, by modeling the unlabeled target domain
continuously. Our approach consists of four key steps. First, we cluster the
target domain into multiple sub-target domains by image style, extracted in an
unsupervised manner. Then, the different sub-target domains are split into
independent branches, for which batch normalization parameters are learned
independently. A meta-learner is thereafter deployed to learn to
fuse sub-target domain-specific predictions, conditioned upon the style code.
Meanwhile, we learn to update the model online with the model-agnostic
meta-learning (MAML) algorithm to further improve generalization. We validate the
benefits of our approach by extensive experiments on synthetic-to-real
knowledge transfer benchmark datasets, where we achieve the state-of-the-art
performance in both compound and open domains.
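To make the four steps above concrete, here is a minimal sketch in PyTorch/scikit-learn. It is only an illustration under stated assumptions, not the authors' MOCDA implementation: style is approximated by channel-wise feature statistics, clustering by k-means, the "split" by per-sub-domain batch normalization, the "fuse" by a small MLP that maps the style code to branch weights, and the "update" by a first-order MAML-style step on an assumed unsupervised loss. All names (`style_code`, `SplitBNHead`, `StyleFusion`, `maml_online_step`) are hypothetical.

```python
# Minimal sketch only -- NOT the authors' MOCDA code. All names are hypothetical.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

# --- Cluster: unsupervised style codes, grouped into K sub-target domains ----
def style_code(feats: torch.Tensor) -> torch.Tensor:
    """Channel-wise mean/std of (B, C, H, W) features as a label-free style proxy."""
    return torch.cat([feats.mean(dim=(2, 3)), feats.std(dim=(2, 3))], dim=1)

def cluster_styles(codes: torch.Tensor, k: int = 4):
    """Assign each target image to one of k sub-target domains by style."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        codes.detach().cpu().numpy())

# --- Split: one batch-norm branch per sub-target domain, shared classifier ---
class SplitBNHead(nn.Module):
    def __init__(self, in_ch: int, num_classes: int, k: int):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(in_ch) for _ in range(k))
        self.classifier = nn.Conv2d(in_ch, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # One segmentation logit map per sub-target-domain branch.
        return [self.classifier(bn(feats)) for bn in self.bns]

# --- Fuse: meta-learner maps the style code to weights over the branches -----
class StyleFusion(nn.Module):
    def __init__(self, style_dim: int, k: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(style_dim, 64), nn.ReLU(),
                                 nn.Linear(64, k))

    def forward(self, branch_logits, codes):
        w = F.softmax(self.mlp(codes), dim=-1)               # (B, K)
        stacked = torch.stack(branch_logits, dim=1)          # (B, K, C, H, W)
        return (w[:, :, None, None, None] * stacked).sum(1)  # fused logits

# --- Update: first-order MAML-style inner step on an unsupervised loss -------
def maml_online_step(model: nn.Module, unsup_loss, batch, inner_lr: float = 1e-3):
    """Return a copy of `model` adapted by one gradient step on `unsup_loss`."""
    adapted = copy.deepcopy(model)
    params = [p for p in adapted.parameters() if p.requires_grad]
    grads = torch.autograd.grad(unsup_loss(adapted, batch), params,
                                allow_unused=True)
    with torch.no_grad():
        for p, g in zip(params, grads):
            if g is not None:
                p -= inner_lr * g
    return adapted

# Example: fuse branch predictions for a toy batch of encoder features.
feats = torch.rand(2, 32, 16, 16)                 # stand-in encoder features
codes = style_code(feats)                         # (2, 64) style codes
head = SplitBNHead(in_ch=32, num_classes=19, k=3)
fusion = StyleFusion(style_dim=64, k=3)
fused = fusion(head(feats), codes)                # (2, 19, 16, 16)
print(fused.shape, cluster_styles(codes, k=2))
```

At a high level, inference on a target batch would extract a style code, fuse the per-branch predictions with the style-conditioned weights, and optionally adapt the model with one online MAML-style step (here with an assumed unsupervised objective such as prediction entropy) before predicting.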
Related papers
- ML-BPM: Multi-teacher Learning with Bidirectional Photometric Mixing for
Open Compound Domain Adaptation in Semantic Segmentation [78.19743899703052]
Open compound domain adaptation (OCDA) considers the target domain as the compound of multiple unknown homogeneous domains.
We introduce a multi-teacher framework with bidirectional photometric mixing to adapt to every target subdomain.
We conduct an adaptive distillation to learn a student model and apply consistency regularization to improve the student generalization.
arXiv Detail & Related papers (2022-07-19T03:30:48Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space (see the sketch after this list).
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need of domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Multi-Source Domain Adaptation with Collaborative Learning for Semantic
Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques to adapt semantic segmentation networks across the source and target domains within deep convolutional neural networks (CNNs) do not consider the inter-class variation within the target domain itself or the estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed to solve domain generalization by learning domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain
Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
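The "Compound Domain Generalization via Meta-Knowledge Encoding" entry above mentions class prototypes, i.e. class centroids in the embedding space, used for relational modeling. The sketch below is a generic illustration of that standard construction, not the COMEN implementation; the function names are hypothetical.

```python
# Generic illustration of class prototypes (centroids) -- not COMEN's code.
import torch
import torch.nn.functional as F

def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Mean embedding (centroid) per class; embeddings (N, D), labels (N,)."""
    protos = torch.zeros(num_classes, embeddings.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return protos

def relation_to_prototypes(embeddings: torch.Tensor,
                           protos: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of each embedding to each class centroid: (N, num_classes)."""
    return F.normalize(embeddings, dim=1) @ F.normalize(protos, dim=1).t()

# Example: 6 embeddings in a 4-D space, 3 classes.
emb = torch.randn(6, 4)
lab = torch.tensor([0, 0, 1, 1, 2, 2])
print(relation_to_prototypes(emb, class_prototypes(emb, lab, num_classes=3)))
```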