TADA: Taxonomy Adaptive Domain Adaptation
- URL: http://arxiv.org/abs/2109.04813v1
- Date: Fri, 10 Sep 2021 11:58:56 GMT
- Title: TADA: Taxonomy Adaptive Domain Adaptation
- Authors: Rui Gong, Martin Danelljan, Dengxin Dai, Wenguan Wang, Danda Pani
Paudel, Ajad Chhatkuli, Fisher Yu, Luc Van Gool
- Abstract summary: Traditional domain adaptation addresses the task of adapting a model to a novel target domain under limited supervision.
We introduce the more general taxonomy adaptive domain adaptation problem, allowing for inconsistent taxonomies between the two domains.
On the label-level, we employ a bilateral mixed sampling strategy to augment the target domain, and a relabelling method to unify and align the label spaces.
- Score: 143.68890984935726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional domain adaptation addresses the task of adapting a model to a
novel target domain under limited or no additional supervision. While tackling
the input domain gap, the standard domain adaptation settings assume no domain
change in the output space. In semantic prediction tasks, different datasets
are often labeled according to different semantic taxonomies. In many
real-world settings, the target domain task requires a different taxonomy than
the one imposed by the source domain. We therefore introduce the more general
taxonomy adaptive domain adaptation (TADA) problem, allowing for inconsistent
taxonomies between the two domains. We further propose an approach that jointly
addresses the image-level and label-level domain adaptation. On the
label-level, we employ a bilateral mixed sampling strategy to augment the
target domain, and a relabelling method to unify and align the label spaces. We
address the image-level domain gap by proposing an uncertainty-rectified
contrastive learning method, leading to more domain-invariant and class
discriminative features. We extensively evaluate the effectiveness of our
framework under different TADA settings: open taxonomy, coarse-to-fine
taxonomy, and partially-overlapping taxonomy. Our framework outperforms
previous state-of-the-art by a large margin, while capable of adapting to new
target domain taxonomies.
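The abstract describes the label-level components only at a high level. As an illustration, below is a minimal sketch of one plausible reading of the bilateral mixed sampling idea, assuming DACS/ClassMix-style cut-and-paste mixing applied in both directions between the source and target domains; the function names, the use of target pseudo-labels, and all other details are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a class-mix style cut-and-paste in both directions
# (source -> target and target -> source), as one plausible reading of the
# "bilateral mixed sampling" idea. Not the authors' implementation.
import torch

def class_mix(img_a, lbl_a, img_b, lbl_b, num_mixed_classes=2, ignore_index=255):
    """Paste the pixels of a few randomly chosen classes from (img_a, lbl_a)
    onto (img_b, lbl_b). Images are CxHxW, labels are HxW tensors of class ids."""
    classes = torch.unique(lbl_a)
    classes = classes[classes != ignore_index]
    chosen = classes[torch.randperm(len(classes))[:num_mixed_classes]]
    mask = torch.isin(lbl_a, chosen)                          # HxW bool mask
    mixed_img = torch.where(mask.unsqueeze(0), img_a, img_b)  # CxHxW
    mixed_lbl = torch.where(mask, lbl_a, lbl_b)
    return mixed_img, mixed_lbl

def bilateral_mixed_batch(src_img, src_lbl, tgt_img, tgt_pseudo_lbl):
    """Build two augmented samples: source classes pasted into the target image,
    and pseudo-labelled target classes pasted into the source image."""
    s2t = class_mix(src_img, src_lbl, tgt_img, tgt_pseudo_lbl)
    t2s = class_mix(tgt_img, tgt_pseudo_lbl, src_img, src_lbl)
    return s2t, t2s
```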
Related papers
- DynAlign: Unsupervised Dynamic Taxonomy Alignment for Cross-Domain Segmentation [15.303659468173334]
We introduce DynAlign, a framework that integrates UDA with foundation models to bridge the image-level and label-level domain gaps.
Our approach leverages prior semantic knowledge to align source categories with target categories that can be novel, more fine-grained, or named differently.
DynAlign generates accurate predictions in a new target label space without requiring any manual annotations.
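As an aside, one generic way to align source categories with target categories via prior semantic knowledge is to compare class-name text embeddings. The sketch below assumes an arbitrary text encoder passed in as `embed_text` (a hypothetical interface, e.g. a VLM text tower); it is an illustration of the general idea, not DynAlign's actual alignment procedure.

```python
# Hedged sketch: mapping label spaces by comparing class-name text embeddings.
# `embed_text` is an assumed interface (list[str] -> (N, D) tensor); the cited
# paper's alignment procedure is not reproduced here.
import torch
import torch.nn.functional as F

def map_target_to_source(source_classes, target_classes, embed_text):
    """Return {target_class: most similar source_class} by cosine similarity
    of class-name embeddings."""
    src_emb = F.normalize(embed_text(source_classes), dim=1)
    tgt_emb = F.normalize(embed_text(target_classes), dim=1)
    sims = tgt_emb @ src_emb.t()                    # (num_target, num_source)
    nearest = sims.argmax(dim=1)
    return {t: source_classes[i] for t, i in zip(target_classes, nearest.tolist())}
```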
arXiv Detail & Related papers (2025-01-27T18:57:19Z)
- Cross-Domain Semantic Segmentation on Inconsistent Taxonomy using VLMs [1.4182672294839365]
This paper introduces a novel approach, Cross-Domain Semantic Segmentation on Inconsistent Taxonomy using Vision Language Models (CSI).
It effectively performs domain-adaptive semantic segmentation even in situations of source-target class mismatches.
arXiv Detail & Related papers (2024-08-05T06:32:20Z)
- Taxonomy-Structured Domain Adaptation [21.432546714330023]
We tackle a generalization to taxonomy-structured domains, which formalize domains with nested, hierarchical similarity structures.
We build on the classic adversarial framework and introduce a novel taxonomist, which competes with the adversarial discriminator to preserve the taxonomy information.
arXiv Detail & Related papers (2023-06-13T16:04:14Z)
- ToAlign: Task-oriented Alignment for Unsupervised Domain Adaptation [84.90801699807426]
We study what features should be aligned across domains and propose to make the domain alignment proactively serve classification.
We explicitly decompose a feature in the source domain into a task-related/discriminative feature that should be aligned, and a task-irrelevant feature that should be avoided/ignored.
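For illustration, one way such a decomposition can be instantiated is to weight feature channels by the classifier weights of the ground-truth class. The sketch below is a hedged approximation of that idea, not necessarily the exact ToAlign formulation.

```python
# Hedged sketch: split a pooled feature into a task-related part (to be aligned)
# and a task-irrelevant remainder (to be ignored), using the classifier weights
# of the ground-truth class as channel importance. Illustrative only.
import torch
import torch.nn.functional as F

def task_oriented_split(feat, classifier_weight, labels):
    """feat: (B, C) pooled features; classifier_weight: (num_classes, C);
    labels: (B,) ground-truth class ids for source samples."""
    w = classifier_weight[labels]                   # (B, C) class-specific weights
    importance = F.relu(w)                          # keep positively contributing channels
    importance = importance / (importance.sum(dim=1, keepdim=True) + 1e-6)
    feat_task = importance * feat                   # task-related part -> align this
    feat_rest = feat - feat_task                    # task-irrelevant remainder -> ignore
    return feat_task, feat_rest
```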
arXiv Detail & Related papers (2021-06-21T02:17:48Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
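To make the centroid idea concrete, here is a hedged sketch of computing category-wise feature centroids per domain and pulling same-class centroids together across domains. The simple squared-distance pull and all names are illustrative; the cited paper's actual contrastive formulation and memory-efficient temporal ensembling are not reproduced here.

```python
# Hedged sketch: category-wise centroids per domain, aligned for shared classes.
import torch
import torch.nn.functional as F

def class_centroids(features, labels, num_classes):
    """features: (N, C) embeddings; labels: (N,) class ids (pseudo-labels on target)."""
    centroids = features.new_zeros(num_classes, features.size(1))
    counts = features.new_zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(dim=0)
            counts[c] = mask.sum()
    return F.normalize(centroids, dim=1), counts

def centroid_alignment_loss(src_feats, src_lbls, tgt_feats, tgt_pseudo, num_classes):
    src_c, src_n = class_centroids(src_feats, src_lbls, num_classes)
    tgt_c, tgt_n = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    shared = (src_n > 0) & (tgt_n > 0)              # classes present in both domains
    if not shared.any():
        return src_feats.new_zeros(())
    return ((src_c[shared] - tgt_c[shared]) ** 2).sum(dim=1).mean()
```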
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Domain Adaptive Semantic Segmentation Using Weak Labels [115.16029641181669]
We propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain.
We develop a weak-label classification module to force the network to attend to certain categories.
In experiments, we show considerable improvements over the existing state of the art in UDA and present a new benchmark in the WDA setting.
arXiv Detail & Related papers (2020-07-30T01:33:57Z)
- Adversarial Network with Multiple Classifiers for Open Set Domain Adaptation [9.251407403582501]
This paper focuses on the open set domain adaptation setting where the target domain has both a private ('unknown classes') label space and a shared ('known classes') label space.
Prevalent distribution-matching domain adaptation methods are inadequate in such a setting.
We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers.
arXiv Detail & Related papers (2020-07-01T11:23:07Z)
- Unsupervised Domain Adaptation with Progressive Domain Augmentation [34.887690018011675]
We propose a novel unsupervised domain adaptation method based on progressive domain augmentation.
The proposed method generates virtual intermediate domains that progressively augment the source domain and bridge the source-target domain divergence.
We conduct experiments on multiple domain adaptation tasks, and the results show that the proposed method achieves state-of-the-art performance.
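For intuition only, one simple way to realize virtual intermediate domains is to interpolate source and target images with a mixing ratio that is annealed toward the target over training. The sketch below is a generic mixup-style interpolation under that assumption, not the specific augmentation mechanism of the cited paper.

```python
# Hedged sketch: virtual intermediate domains via image interpolation with a
# mixing ratio annealed toward the target. Illustrative assumption only.
import torch

def intermediate_domain_batch(src_imgs, tgt_imgs, step, total_steps):
    """src_imgs, tgt_imgs: (B, C, H, W); step/total_steps controls progression."""
    lam = min(step / max(total_steps, 1), 1.0)        # 0 -> pure source, 1 -> pure target
    # jitter around the schedule so each batch samples a nearby virtual domain
    lam = torch.clamp(torch.tensor(lam) + 0.1 * torch.randn(()), 0.0, 1.0)
    return (1.0 - lam) * src_imgs + lam * tgt_imgs    # intermediate-domain images
```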
arXiv Detail & Related papers (2020-04-03T18:45:39Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.