Generalized Domain Adaptation
- URL: http://arxiv.org/abs/2106.01656v1
- Date: Thu, 3 Jun 2021 07:55:18 GMT
- Title: Generalized Domain Adaptation
- Authors: Yu Mitsuzumi, Go Irie, Daiki Ikami and Takashi Shibata
- Abstract summary: We give a general representation of UDA problems, named Generalized Domain Adaptation (GDA).
GDA covers the major variants as special cases, which allows us to organize them in a comprehensive framework.
We propose a novel approach to the new setting based on self-supervised class-destructive learning.
- Score: 16.36451405054308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many variants of unsupervised domain adaptation (UDA) problems have been
proposed and solved individually. A side effect of this is that a method that works
for one variant is often ineffective for, or not even applicable to, another,
which has hindered practical application. In this paper, we give a general
representation of UDA problems, named Generalized Domain Adaptation (GDA). GDA
covers the major variants as special cases, which allows us to organize them in
a comprehensive framework. Moreover, this generalization leads to a new
challenging setting where existing methods fail, such as when domain labels are
unknown, and class labels are only partially given to each domain. We propose a
novel approach to the new setting. The key to our approach is self-supervised
class-destructive learning, which enables the learning of class-invariant
representations and domain-adversarial classifiers without using any domain
labels. Extensive experiments using three benchmark datasets demonstrate that
our method outperforms the state-of-the-art UDA methods in the new setting and
that it is competitive in existing UDA variations as well.
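To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of how a class-destructive self-supervised signal could be paired with a gradient reversal layer to train an adversarial objective without domain labels. This is not the authors' implementation: the pixel-shuffling `destroy_class_info` transform, the toy network sizes, and the intact-vs-destroyed pretext task are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of class-destructive self-supervision
# combined with a gradient reversal layer. The transform, the toy
# architecture, and the pretext task are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_out):
        return -grad_out

def destroy_class_info(x):
    """Hypothetical class-destructive transform: permute pixels so class
    semantics are destroyed while low-level image statistics survive."""
    b, c, h, w = x.shape
    idx = torch.randperm(h * w, device=x.device)
    return x.view(b, c, -1)[:, :, idx].view(b, c, h, w)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
class_head = nn.Linear(128, 10)   # supervised head for the partial class labels
pretext_head = nn.Linear(128, 2)  # intact vs. destroyed discriminator
params = [*encoder.parameters(), *class_head.parameters(), *pretext_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(16, 3, 32, 32)    # toy batch of images
y = torch.randint(0, 10, (16,))   # toy class labels (only partially available in GDA)

feats = encoder(x)
feats_destroyed = encoder(destroy_class_info(x))

cls_loss = F.cross_entropy(class_head(feats), y)
# Adversarial pretext: the head tries to tell intact from destroyed features,
# while the reversed gradient pushes the encoder to suppress the class-free
# (domain-like) cues that make that discrimination easy -- no domain labels used.
logits = pretext_head(GradReverse.apply(torch.cat([feats, feats_destroyed])))
targets = torch.cat([torch.zeros(16, dtype=torch.long),
                     torch.ones(16, dtype=torch.long)])
pretext_loss = F.cross_entropy(logits, targets)

opt.zero_grad()
(cls_loss + pretext_loss).backward()
opt.step()
```

The sketch only shows the mechanics of pairing a destruction-based pretext task with gradient reversal; the paper's actual destruction operation and losses are defined in the full text.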
Related papers
- Enhancing Domain Adaptation through Prompt Gradient Alignment [16.618313165111793]
We develop a line of works based on prompt learning to learn both domain-invariant and domain-specific features.
We cast UDA as a multiple-objective optimization problem in which each objective is represented by a domain loss.
Our method consistently surpasses other prompt-based baselines by a large margin on different UDA benchmarks.
arXiv Detail & Related papers (2024-06-13T17:40:15Z) - Open-Set Domain Adaptation for Semantic Segmentation [6.3951361316638815]
We introduce Open-Set Domain Adaptation for Semantic Segmentation (OSDA-SS) for the first time, where the target domain includes unknown classes.
To address this setting, we propose Boundary and Unknown Shape-Aware open-set domain adaptation, coined BUS.
Our BUS can accurately discern the boundaries between known and unknown classes in a contrastive manner using a novel dilation-erosion-based contrastive loss.
arXiv Detail & Related papers (2024-05-30T09:55:19Z) - MLNet: Mutual Learning Network with Neighborhood Invariance for
Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may overlook intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z) - UFDA: Universal Federated Domain Adaptation with Practical Assumptions [33.06684706053823]
This paper studies a more practical scenario named Universal Federated Domain Adaptation (UFDA).
It only requires the black-box model and the label set information of each source domain.
We propose a corresponding methodology called Hot-Learning with Contrastive Label Disambiguation (HCLD).
arXiv Detail & Related papers (2023-11-27T06:38:07Z) - Make the U in UDA Matter: Invariant Consistency Learning for
Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains.
We dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z) - Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z) - Open Set Domain Adaptation By Novel Class Discovery [118.25447367755737]
In Open Set Domain Adaptation (OSDA), large amounts of target samples are drawn from the implicit categories that never appear in the source domain.
We propose the Self-supervised Class-Discovering Adapter, which attempts to achieve OSDA by gradually discovering those implicit classes.
arXiv Detail & Related papers (2022-03-07T12:16:46Z) - UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z) - A New Bidirectional Unsupervised Domain Adaptation Segmentation
Framework [27.13101555533594]
Unsupervised domain adaptation (UDA) techniques have been proposed to bridge the gap between different domains.
In this paper, we propose a bidirectional UDA framework based on disentangled representation learning for equally competent two-way UDA performances.
arXiv Detail & Related papers (2021-08-18T05:25:11Z) - Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA).
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance compared to both DA and class-incremental (CI) methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z)