Unsupervised Domain Adaptation Using Compact Internal Representations
- URL: http://arxiv.org/abs/2401.07207v1
- Date: Sun, 14 Jan 2024 05:53:33 GMT
- Title: Unsupervised Domain Adaptation Using Compact Internal Representations
- Authors: Mohammad Rostami
- Abstract summary: A technique for tackling unsupervised domain adaptation involves mapping data points from both the source and target domains into a shared embedding space.
We develop an additional technique which makes the internal distribution of the source domain more compact.
We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.
- Score: 23.871860648919593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A major technique for tackling unsupervised domain adaptation involves
mapping data points from both the source and target domains into a shared
embedding space. The encoder that maps into this embedding space is trained
so that the space becomes domain-agnostic, allowing a classifier trained on
the source domain to generalize well to the target domain. To further
enhance the performance of unsupervised domain adaptation (UDA), we develop an
additional technique that makes the internal distribution of the source domain
more compact, thereby improving the model's ability to generalize to the target
domain. We demonstrate that by increasing the margins between data
representations for different classes in the embedding space, we can improve
the model performance for UDA. To make the internal representation more
compact, we estimate the internally learned multi-modal distribution of the
source domain as a Gaussian mixture model (GMM). Utilizing the estimated GMM, we
enhance the separation between different classes in the source domain, thereby
mitigating the effects of domain shift. We offer a theoretical analysis to
support the improved performance of our method. To evaluate the effectiveness
of our approach, we conduct experiments on widely used UDA benchmark datasets.
The results indicate that our method enhances model generalizability and
outperforms existing techniques.
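The core recipe can be sketched in a few lines: estimate the model's internal source-domain feature distribution as a class-conditional GMM, then add a term that compacts each class mode and enlarges the margins between modes. The network sizes, the one-mode-per-class estimate, the hinge-style margin, and the loss weight below are assumptions of this illustration, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, in_dim=784, emb_dim=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, emb_dim))
        self.classifier = nn.Linear(emb_dim, n_classes)

def class_modes(z, y, n_classes):
    # One Gaussian mode per class, estimated from labelled source embeddings;
    # assumes every class is present in the batch.
    return torch.stack([z[y == c].mean(0) for c in range(n_classes)])

def compact_margin_loss(z, y, modes, margin=5.0):
    # Pull each embedding toward its class mode (compactness) and push the
    # modes at least `margin` apart (a hypothetical hinge, for illustration).
    pull = F.mse_loss(z, modes[y])
    d = torch.cdist(modes, modes)
    off_diag = d[~torch.eye(len(modes), dtype=torch.bool)]
    push = F.relu(margin - off_diag).mean()
    return pull + push

# One source-domain training step: cross-entropy plus the compactness term.
net = Net()
x_s, y_s = torch.randn(128, 784), torch.randint(0, 10, (128,))
z = net.encoder(x_s)
modes = class_modes(z, y_s, 10)
loss = F.cross_entropy(net.classifier(z), y_s) + 0.1 * compact_margin_loss(z, y_s, modes)
loss.backward()
```

In practice the mode estimates would be accumulated over the whole source set rather than a single batch; the single-batch estimate here only keeps the sketch self-contained.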
Related papers
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify novel classes that appear only in the target domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster similar semantic features.
Our algorithm is designed to disperse domain information in the embedding space.
arXiv Detail & Related papers (2023-03-01T14:02:01Z)
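A minimal sketch of such a domain-aware triplet objective, under assumed mining rules (the anchor and positive share a class but come from different domains; the negative is any other class). This is an illustration, not the cited paper's exact loss.

```python
import torch
import torch.nn.functional as F

def domain_aware_triplet(z, y, d, margin=1.0):
    # z: embeddings (N, D); y: class labels (N,); d: domain labels (N,).
    loss, count = z.new_zeros(()), 0
    for i in range(len(z)):
        pos = ((y == y[i]) & (d != d[i])).nonzero().flatten()  # same class, other domain
        neg = (y != y[i]).nonzero().flatten()                  # different class
        if len(pos) == 0 or len(neg) == 0:
            continue
        a, p, n = z[i].unsqueeze(0), z[pos[0]].unsqueeze(0), z[neg[0]].unsqueeze(0)
        loss = loss + F.triplet_margin_loss(a, p, n, margin=margin)
        count += 1
    return loss / max(count, 1)
```

Pairing anchors with cross-domain positives is what disperses domain information: the embedding cannot rely on domain cues to keep same-class samples close.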
- Increasing Model Generalizability for Unsupervised Domain Adaptation [12.013345715187285]
We show that increasing the interclass margins in the embedding space can help to develop a UDA algorithm with improved performance.
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
arXiv Detail & Related papers (2022-09-29T09:08:04Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
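A hedged sketch of test-time disagreement minimization in the spirit of the AdaODM summary above: two source-trained classifier heads are assumed, and only the encoder is updated on unlabeled target data until the heads agree. The two-head setup, the KL measure of disagreement, and the step count are assumptions of this illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(32, 16)                              # placeholder encoder
head1, head2 = nn.Linear(16, 10), nn.Linear(16, 10)  # two source-trained classifiers
opt = torch.optim.SGD(enc.parameters(), lr=1e-3)     # only the encoder adapts

def adapt_and_predict(x_tgt, steps=5):
    for _ in range(steps):
        z = enc(x_tgt)
        logp1 = F.log_softmax(head1(z), dim=1)
        p2 = F.softmax(head2(z), dim=1)
        disagreement = F.kl_div(logp1, p2, reduction="batchmean")
        opt.zero_grad(); disagreement.backward(); opt.step()
    with torch.no_grad():
        z = enc(x_tgt)
        return (F.softmax(head1(z), dim=1) + F.softmax(head2(z), dim=1)) / 2
```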
- Implicit Semantic Augmentation for Distance Metric Learning in Domain Generalization [25.792285194055797]
Domain generalization (DG) aims to learn a model on one or more different but related source domains that could be generalized into an unseen target domain.
Existing DG methods try to promote the diversity of source domains to improve the model's generalization ability.
This work applies the implicit semantic augmentation in feature space to capture the diversity of source domains.
arXiv Detail & Related papers (2022-08-02T11:37:23Z)
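For intuition, an explicit variant of semantic augmentation in feature space: perturb each feature along its class-conditional (here diagonal) covariance directions. The cited work derives an implicit upper-bound loss rather than sampling explicitly; this sketch only conveys the idea.

```python
import torch

def semantic_augment(z, y, class_vars, strength=0.5):
    # z: features (N, D); class_vars: per-class diagonal variances (C, D).
    noise = torch.randn_like(z) * class_vars[y].sqrt()  # sample from N(0, diag(var_y))
    return z + strength * noise
```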
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
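A hedged sketch of gradual self-training across intermediate domains: each stage pseudo-labels data slightly closer to the target and retrains an auxiliary copy on the confident subset. The ordering of domains, the confidence threshold, and the training routine are assumptions of this illustration.

```python
import copy
import torch
import torch.nn.functional as F

def gradual_self_train(model, domains, train_fn, conf_thresh=0.9):
    # `domains`: unlabeled batches ordered from source-like to target-like.
    # `train_fn(model, x, y)` is an assumed supervised training routine.
    for x in domains:
        with torch.no_grad():
            probs = F.softmax(model(x), dim=1)
            conf, pseudo = probs.max(dim=1)
        keep = conf > conf_thresh            # keep only confident pseudo-labels
        model = copy.deepcopy(model)         # next-stage auxiliary model
        train_fn(model, x[keep], pseudo[keep])
    return model
```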
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online with the model-agnostic meta-learning (MAML) algorithm to further improve generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation [26.929772844572213]
We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain.
We train the source-dominant model and the target-dominant model that have complementary characteristics.
Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain.
arXiv Detail & Related papers (2020-11-18T11:58:19Z)
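The fixed-ratio mixup itself is simple to sketch: the complementary source-dominant and target-dominant models each train on mixes with a fixed ratio. FixBi's pseudo-labeling and confidence penalty are omitted here, and the specific ratios below are assumptions of this illustration.

```python
import torch

def fixed_ratio_mixup(x_src, x_tgt, lam=0.7):
    # Source-dominant mix uses lam; the target-dominant model uses 1 - lam.
    return lam * x_src + (1.0 - lam) * x_tgt

x_s, x_t = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
x_sd = fixed_ratio_mixup(x_s, x_t, lam=0.7)   # fed to the source-dominant model
x_td = fixed_ratio_mixup(x_s, x_t, lam=0.3)   # fed to the target-dominant model
```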
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
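A hedged sketch of domain-conditioned channel attention: a squeeze-and-excitation-style gate with a separate excitation branch per domain, so convolutional channels are activated domain-wise. The branch sizes and the two-domain setup are assumptions of this illustration, not DCAN's exact architecture.

```python
import torch
import torch.nn as nn

class DomainConditionedAttention(nn.Module):
    def __init__(self, channels=64, reduction=16, n_domains=2):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                          nn.Linear(channels // reduction, channels), nn.Sigmoid())
            for _ in range(n_domains)])

    def forward(self, x, domain):        # x: (N, C, H, W); domain: 0=source, 1=target
        w = x.mean(dim=(2, 3))           # squeeze: global average pool to (N, C)
        gate = self.branches[domain](w)  # domain-specific excitation
        return x * gate.unsqueeze(-1).unsqueeze(-1)
```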
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including any of the above) and is not responsible for any consequences of its use.