Increasing Model Generalizability for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2209.14644v1
- Date: Thu, 29 Sep 2022 09:08:04 GMT
- Title: Increasing Model Generalizability for Unsupervised Domain Adaptation
- Authors: Mohammad Rostami
- Abstract summary: We show that increasing the interclass margins in the embedding space can help to develop a UDA algorithm with improved performance.
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
- Score: 12.013345715187285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A dominant approach for addressing unsupervised domain adaptation is to map
data points for the source and the target domains into an embedding space which
is modeled as the output space of a shared deep encoder. The encoder is trained
to make the embedding space domain-agnostic so that a source-trained classifier
generalizes to the target domain. A secondary mechanism to improve UDA
performance further is to make the source domain distribution more compact to
improve model generalizability. We demonstrate that increasing the interclass
margins in the embedding space can help to develop a UDA algorithm with
improved performance. We estimate the internally learned multi-modal
distribution of the source domain, acquired during pretraining, and use it to
increase the interclass separation in the source domain, thereby reducing the
effect of domain shift. We demonstrate that our approach improves model
generalizability on four standard benchmark UDA image classification datasets
and compares favorably against existing methods.
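The interclass-margin idea in the abstract can be illustrated with a minimal sketch: estimate per-class means of the source embedding distribution, then penalize class-mean pairs that sit closer than a target margin. This is an illustration of the general principle, not the paper's actual algorithm; all function names and the hinge-style penalty are assumptions.

```python
import math

def class_means(embeddings, labels):
    """Estimate the per-class means of the source embedding distribution."""
    sums, counts = {}, {}
    for vec, lab in zip(embeddings, labels):
        if lab not in sums:
            sums[lab] = [0.0] * len(vec)
            counts[lab] = 0
        sums[lab] = [s + v for s, v in zip(sums[lab], vec)]
        counts[lab] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def interclass_margin_penalty(means, margin):
    """Hinge penalty on class-mean pairs closer than `margin`:
    minimizing it pushes the class modes apart in the embedding space."""
    labs = sorted(means)
    penalty = 0.0
    for i in range(len(labs)):
        for j in range(i + 1, len(labs)):
            d = math.dist(means[labs[i]], means[labs[j]])
            penalty += max(0.0, margin - d)
    return penalty
```

In practice such a penalty would be added to the classification loss during source pretraining, so that the multi-modal source distribution becomes more separable before adaptation.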
Related papers
- Unsupervised Domain Adaptation Using Compact Internal Representations [23.871860648919593]
A technique for tackling unsupervised domain adaptation involves mapping data points from both the source and target domains into a shared embedding space.
We develop an additional technique which makes the internal distribution of the source domain more compact.
We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.
arXiv Detail & Related papers (2024-01-14T05:53:33Z) - Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [31.802163238282343]
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain.
Inspired by diffusion models which have strong capability to gradually convert data distributions across a large gap, we consider to explore the diffusion technique to handle the challenging UDA task.
Our method outperforms the current state-of-the-art methods by a large margin on three widely used UDA datasets.
arXiv Detail & Related papers (2023-08-26T14:28:18Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify unknown target classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
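The gradual self-training idea in the summary above can be sketched with a toy nearest-centroid model standing in for the auxiliary model: pseudo-label intermediate-domain samples, then refit on those pseudo-labels so the model drifts toward the target. This is a hedged illustration under that simplification, not the AuxSelfTrain method itself; all names are hypothetical.

```python
import math

def nearest_centroid(centroids, x):
    """Pseudo-label a sample with the class of its nearest centroid."""
    return min(centroids, key=lambda c: math.dist(centroids[c], x))

def self_train_step(centroids, unlabeled):
    """One gradual self-training step: pseudo-label the intermediate-domain
    samples, then refit the centroids on them."""
    buckets = {}
    for x in unlabeled:
        buckets.setdefault(nearest_centroid(centroids, x), []).append(x)
    refit = {}
    for lab, xs in buckets.items():
        dims = len(xs[0])
        refit[lab] = [sum(x[d] for x in xs) / len(xs) for d in range(dims)]
    return refit
```

Repeating this step over a sequence of domains interpolating from source to target gives the gradual adaptation schedule the entry describes.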
arXiv Detail & Related papers (2021-06-18T03:15:25Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
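A multi-sample contrastive loss of the kind mentioned above can be sketched in InfoNCE style: pull similar (positive) samples toward an anchor and push dissimilar (negative) ones away. This is a generic formulation, not the exact ILA-DA loss; the function name and dot-product similarity are assumptions.

```python
import math

def multi_sample_contrastive(anchor, positives, negatives, tau):
    """InfoNCE-style loss over multiple positives and negatives:
    low when positives score high relative to negatives."""
    def sim(a, b):
        return sum(x * y for x, y in zip(a, b))
    pos = sum(math.exp(sim(anchor, p) / tau) for p in positives)
    neg = sum(math.exp(sim(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

Driving this loss down over cross-domain pairs aligns source and target samples that the affinity criterion deems similar.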
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation [26.929772844572213]
We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain.
We train the source-dominant model and the target-dominant model that have complementary characteristics.
Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain.
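The fixed ratio-based mixup described above can be sketched as a simple convex blend of a source and a target sample at a fixed ratio, yielding an intermediate-domain sample. This is a minimal illustration of the mixup operation only, not the full FixBi training scheme; the function name is hypothetical.

```python
def fixed_ratio_mixup(src, tgt, lam):
    """Blend a source sample and a target sample at fixed ratio `lam`
    to synthesize an intermediate-domain sample."""
    return [lam * s + (1 - lam) * t for s, t in zip(src, tgt)]
```

Using a source-heavy ratio (e.g. lam = 0.7) for one model and a target-heavy ratio for the other yields the complementary source-dominant and target-dominant models the entry mentions.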
arXiv Detail & Related papers (2020-11-18T11:58:19Z) - Unsupervised Model Adaptation for Continual Semantic Segmentation [15.820660013260584]
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
We provide theoretical analysis and explain conditions under which our algorithm is effective.
Experiments on benchmark adaptation tasks demonstrate our method achieves competitive performance even compared with joint UDA approaches.
arXiv Detail & Related papers (2020-09-26T04:55:50Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.