Geometry-Aware Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2112.11041v1
- Date: Tue, 21 Dec 2021 08:45:42 GMT
- Title: Geometry-Aware Unsupervised Domain Adaptation
- Authors: You-Wei Luo, Chuan-Xian Ren and Zi-Ying Chen
- Abstract summary: Unsupervised Domain Adaptation (UDA) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain in the presence of dataset shift.
Most existing methods cannot jointly handle domain alignment and class discrimination, which may distort the intrinsic data structure needed for downstream tasks.
We propose a novel geometry-aware model to learn the transferability and discriminability simultaneously via nuclear norm optimization.
- Score: 12.298214579392129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) aims to transfer the knowledge from the
labeled source domain to the unlabeled target domain in the presence of dataset
shift. Most existing methods cannot jointly handle domain alignment and class
discrimination, which may distort the intrinsic data structure needed for
downstream tasks (e.g., classification). To this end, we propose a novel
geometry-aware model to learn the transferability and discriminability
simultaneously via nuclear norm optimization. We introduce the domain coherence
and class orthogonality for UDA from the perspective of subspace geometry. The
domain coherence will ensure the model has a larger capacity for learning
separable representations, and class orthogonality will minimize the
correlation between clusters to alleviate misalignment. Thus, the two
objectives are consistent and can reinforce each other. In addition, we provide a theoretical
insight into the norm-based learning literature in UDA, which ensures the
interpretability of our model. We show that domain-level norms should be
larger to enhance transferability, while cluster-level norms should be smaller
to enhance discriminability. Extensive experimental results on standard UDA
datasets demonstrate the effectiveness of our theory and model.
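The abstract's central trade-off — a larger nuclear norm over the whole domain for transferability, smaller nuclear norms per cluster for discriminability — can be sketched numerically. The following is a minimal NumPy illustration of that kind of objective, not the authors' implementation; the function names, the use of pseudo-labels to define clusters, and the unit trade-off weight are all assumptions.

```python
import numpy as np

def nuclear_norm(M):
    # The nuclear norm is the sum of the singular values of M.
    return np.linalg.svd(M, compute_uv=False).sum()

def geometry_aware_objective(features, pseudo_labels):
    """Sketch of a nuclear-norm-based UDA objective (to be minimized).

    features:      (n_samples, n_dims) feature matrix for one domain.
    pseudo_labels: (n_samples,) cluster assignments (e.g., pseudo-labels).
    """
    # Transferability term: a LARGER domain-level nuclear norm
    # encourages diverse, separable representations.
    transferability = nuclear_norm(features)

    # Discriminability term: SMALLER per-cluster nuclear norms
    # encourage compact, low-rank clusters and reduce the
    # correlation between clusters.
    discriminability = sum(
        nuclear_norm(features[pseudo_labels == c])
        for c in np.unique(pseudo_labels)
    )

    # Maximize the domain norm, minimize the cluster norms
    # (trade-off weight fixed to 1 here for illustration).
    return discriminability - transferability
```

By the triangle inequality for the nuclear norm, the sum of per-cluster norms upper-bounds the domain-level norm, so this sketch objective is non-negative and is driven toward zero as clusters become mutually orthogonal.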
Related papers
- Gradually Vanishing Gap in Prototypical Network for Unsupervised Domain Adaptation [32.58201185195226]
We propose an efficient UDA framework named Gradually Vanishing Gap in Prototypical Network (GVG-PN)
Our model achieves transfer learning from both global and local perspectives.
Experiments on several UDA benchmarks validated that the proposed GVG-PN can clearly outperform the SOTA models.
arXiv Detail & Related papers (2024-05-28T03:03:32Z) - Unsupervised Domain Adaptation Using Compact Internal Representations [23.871860648919593]
A technique for tackling unsupervised domain adaptation involves mapping data points from both the source and target domains into a shared embedding space.
We develop an additional technique which makes the internal distribution of the source domain more compact.
We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.
arXiv Detail & Related papers (2024-01-14T05:53:33Z) - CAusal and collaborative proxy-tasKs lEarning for Semi-Supervised Domain
Adaptation [20.589323508870592]
Semi-supervised domain adaptation (SSDA) adapts a learner to a new domain by effectively utilizing source domain data and a few labeled target samples.
We show that the proposed model significantly outperforms SOTA methods in terms of effectiveness and generalisability on SSDA datasets.
arXiv Detail & Related papers (2023-03-30T16:48:28Z) - CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
arXiv Detail & Related papers (2023-01-10T07:43:21Z) - Relation Matters: Foreground-aware Graph-based Relational Reasoning for
Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection, named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state-of-the-art on four domain adaptive object detection benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in the sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z) - Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain
Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) learns classification models that make predictions for unlabeled data on a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z) - Dual Mixup Regularized Learning for Adversarial Domain Adaptation [19.393393465837377]
We propose a dual mixup regularized learning (DMRL) method for unsupervised domain adaptation.
DMRL guides the classifier to enforce consistent predictions between samples, and enriches the intrinsic structures of the latent space.
A series of empirical studies on four domain adaptation benchmarks demonstrate that our approach can achieve the state-of-the-art.
arXiv Detail & Related papers (2020-07-07T00:24:14Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation transfers well-established source domain information to the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space risk destroying the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.