Unified Optimal Transport Framework for Universal Domain Adaptation
- URL: http://arxiv.org/abs/2210.17067v1
- Date: Mon, 31 Oct 2022 05:07:09 GMT
- Title: Unified Optimal Transport Framework for Universal Domain Adaptation
- Authors: Wanxing Chang, Ye Shi, Hoang Duong Tuan, Jingya Wang
- Abstract summary: Universal Domain Adaptation (UniDA) aims to transfer knowledge from a source domain to a target domain without any constraints on label sets.
Most existing methods require manually specified or hand-tuned threshold values to detect common samples.
We propose to use Optimal Transport (OT) to handle these issues under a unified framework, namely UniOT.
- Score: 27.860165056943796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal Domain Adaptation (UniDA) aims to transfer knowledge from a source
domain to a target domain without any constraints on label sets. Since both
domains may hold private classes, identifying target common samples for domain
alignment is an essential issue in UniDA. Most existing methods require
manually specified or hand-tuned threshold values to detect common samples and
are thus hard to extend to more realistic UniDA because of the diverse ratios
of common classes. Moreover, they cannot recognize different categories among
target-private samples as these private samples are treated as a whole. In this
paper, we propose to use Optimal Transport (OT) to handle these issues under a
unified framework, namely UniOT. First, an OT-based partial alignment with
adaptive filling is designed to detect common classes without any predefined
threshold values for realistic UniDA. It can automatically discover the
intrinsic difference between common and private classes based on the
statistical information of the assignment matrix obtained from OT. Second, we
propose an OT-based target representation learning that encourages both global
discrimination and local consistency of samples to avoid the over-reliance on
the source. Notably, UniOT is the first method with the capability to
automatically discover and recognize private categories in the target domain
for UniDA. Accordingly, we introduce a new metric H^3-score to evaluate the
performance in terms of both accuracy of common samples and clustering
performance of private ones. Extensive experiments clearly demonstrate the
advantages of UniOT over a wide range of state-of-the-art methods in UniDA.
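As a rough illustration of the kind of OT-based assignment the abstract describes, the sketch below computes an entropic-OT plan between target features and source class prototypes via Sinkhorn iterations, then reads off per-sample assignments. The prototype/feature setup, marginals, and all names here are hypothetical; this is a minimal sketch of entropic OT, not the paper's actual partial-alignment or adaptive-filling algorithm.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    """Entropic-regularized OT via Sinkhorn iterations.
    cost: (n, m) cost matrix; a: (n,) row marginal; b: (m,) column marginal."""
    K = np.exp(-cost / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)               # scale columns to match b
        u = a / (K @ v)                 # scale rows to match a
    return u[:, None] * K * v[None, :]  # transport plan, shape (n, m)

# Toy setup: 6 target features near 3 source class prototypes.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 8))
targets = prototypes[[0, 0, 1, 1, 2, 2]] + 0.1 * rng.normal(size=(6, 8))

# Cost = 1 - cosine similarity between targets and prototypes.
t_norm = targets / np.linalg.norm(targets, axis=1, keepdims=True)
p_norm = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
cost = 1.0 - t_norm @ p_norm.T

plan = sinkhorn(cost, np.full(6, 1 / 6), np.full(3, 1 / 3))
assign = plan.argmax(axis=1)  # hard assignment from the soft plan
```

In the paper's setting, statistics of such an assignment matrix (row/column masses) are what distinguish common from private classes, rather than a fixed confidence threshold.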
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z) - Universal Semi-Supervised Domain Adaptation by Mitigating Common-Class Bias [16.4249819402209]
We introduce Universal Semi-Supervised Domain Adaptation (UniSSDA)
UniSSDA is at the intersection of Universal Domain Adaptation (UniDA) and Semi-Supervised Domain Adaptation (SSDA)
We propose a new prior-guided pseudo-label refinement strategy to reduce the reinforcement of common-class bias due to pseudo-labeling.
arXiv Detail & Related papers (2024-03-17T14:43:47Z) - Upcycling Models under Domain and Category Shift [95.22147885947732]
We introduce an innovative global and local clustering learning technique (GLC)
We design a novel, adaptive one-vs-all global clustering algorithm to distinguish between the different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z) - Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z) - UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
Universal Model ADaptation (UMAD) framework handles both UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z) - OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target.
arXiv Detail & Related papers (2021-04-07T18:36:31Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distribution simultaneously with matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
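The OVANet entry above states the intuition that the minimum inter-class distance in the source domain makes a good known-vs-unknown threshold for the target. The toy sketch below illustrates that stated intuition with centroid distances; the helper name and 1-D feature setup are hypothetical, and this is not OVANet's actual learned one-vs-all threshold procedure.

```python
import numpy as np

def min_interclass_distance(features, labels):
    """Smallest Euclidean distance between source class centroids.
    Per the OVANet summary above, this can serve as a known/unknown threshold."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    dists[np.diag_indices_from(dists)] = np.inf  # ignore self-distances
    return dists.min()

# Toy source set: three 1-D classes centred at 0, 3, and 10.
feats = np.array([[0.0], [0.0], [3.0], [3.0], [10.0], [10.0]])
labels = np.array([0, 0, 1, 1, 2, 2])
thr = min_interclass_distance(feats, labels)  # 3.0 here
# A target sample farther than `thr` from every centroid would be flagged unknown.
```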
This list is automatically generated from the titles and abstracts of the papers in this site.