Sequential Model Adaptation Using Domain Agnostic Internal Distributions
- URL: http://arxiv.org/abs/2007.00197v4
- Date: Wed, 23 Jun 2021 08:01:32 GMT
- Title: Sequential Model Adaptation Using Domain Agnostic Internal Distributions
- Authors: Mohammad Rostami, Aram Galstyan
- Abstract summary: We develop an algorithm for sequential adaptation of a classifier trained on a source domain so that it generalizes to an unannotated target domain.
We assume the model has been trained on annotated source-domain data and must then be adapted using unannotated target-domain data, without access to the source data.
- Score: 31.3178953771424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an algorithm for sequential adaptation of a classifier trained on a source domain so that it generalizes to an unannotated target domain. We assume the model has been trained on annotated source-domain data and must then be adapted using unannotated target-domain data, without access to the source data. We align the distributions of the source and target domains in a discriminative embedding space via an intermediate, internal distribution that is estimated from the source data representations in that embedding. Experiments on four benchmarks demonstrate that the method is effective and compares favorably against existing methods.
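The abstract does not say how the internal distribution is modeled or which alignment metric is used. Below is a minimal sketch of the general idea, assuming the internal distribution is a Gaussian mixture fit on source embeddings and that adaptation minimizes a sliced Wasserstein distance between target embeddings and samples drawn from that mixture; every name (encoder, fit_internal_distribution, n_components, etc.) and hyperparameter is illustrative, not taken from the paper.

```python
# Minimal sketch (assumptions stated above): estimate a domain-agnostic internal
# distribution from source embeddings, then adapt the encoder on unannotated
# target data by aligning target embeddings with samples from that distribution.
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def fit_internal_distribution(encoder, source_loader, n_components=10, device="cpu"):
    """Run once while source data is still available: fit a GMM on source embeddings."""
    encoder.eval()
    feats = []
    with torch.no_grad():
        for x, _ in source_loader:  # labeled source batches (labels unused here)
            feats.append(encoder(x.to(device)).cpu())
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(torch.cat(feats).numpy())
    return gmm


def sliced_wasserstein(a, b, n_projections=50):
    """Monte-Carlo sliced Wasserstein-2 distance between two equal-size batches."""
    proj = F.normalize(torch.randn(a.shape[1], n_projections, device=a.device), dim=0)
    a_sorted, _ = torch.sort(a @ proj, dim=0)
    b_sorted, _ = torch.sort(b @ proj, dim=0)
    return ((a_sorted - b_sorted) ** 2).mean()


def adapt(encoder, gmm, target_loader, epochs=10, lr=1e-4, device="cpu"):
    """Sequential adaptation: the source data is no longer accessible, only the GMM is."""
    encoder.train()
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        for x_t in target_loader:  # unannotated target batches
            z_t = encoder(x_t.to(device))
            z_s, _ = gmm.sample(len(z_t))  # pseudo-source embeddings drawn from the internal distribution
            z_s = torch.as_tensor(z_s, dtype=torch.float32, device=device)
            loss = sliced_wasserstein(z_t, z_s)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder
```

The classifier head is left frozen in this sketch, and the mixture is the only artifact carried over from the source stage, which is what makes the adaptation step source-free; the paper's actual estimator, distance, and any discriminative regularization of the embedding may differ.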
Related papers
- Online Continual Domain Adaptation for Semantic Image Segmentation Using Internal Representations [28.549418215123936]
We develop an online UDA algorithm for semantic segmentation of images that improves model generalization on unannotated domains.
We evaluate our approach on well established semantic segmentation datasets and demonstrate it compares favorably against state-of-the-art (SOTA) semantic segmentation methods.
arXiv Detail & Related papers (2024-01-02T04:48:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics [85.75352990739154]
We propose a novel domain adaptation method for the source-free setting.
We use batch normalization statistics stored in the pretrained model to approximate the distribution of unobserved source data (a brief sketch of this idea appears after this list).
Our method achieves competitive performance with state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-01-19T14:22:33Z)
- Unsupervised Model Adaptation for Continual Semantic Segmentation [15.820660013260584]
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
We provide theoretical analysis and explain conditions under which our algorithm is effective.
Experiments on a benchmark adaptation task demonstrate that our method achieves competitive performance even when compared with joint UDA approaches.
arXiv Detail & Related papers (2020-09-26T04:55:50Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different data distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
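The entry above on "Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics" only states that the batch-normalization statistics stored in the pretrained model stand in for the unseen source distribution. A minimal sketch of that general idea follows, assuming the mismatch is penalized as a per-layer KL divergence between Gaussians given by the target batch statistics and the stored running statistics; the class name BNStatsMatching and the exact loss are illustrative, not taken from that paper.

```python
# Sketch: source-free alignment by matching batch-normalization statistics.
# Assumption (not confirmed by the paper's summary): the penalty is a KL
# divergence between diagonal Gaussians defined by the target batch statistics
# and the running (source) statistics stored in each BatchNorm layer.
import torch
import torch.nn as nn


class BNStatsMatching:
    """Collects per-BatchNorm-layer batch statistics via forward hooks and
    compares them with the running (source) statistics stored in the model."""

    def __init__(self, model: nn.Module):
        self.losses = []
        self.handles = []
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                self.handles.append(m.register_forward_hook(self._hook))

    def _hook(self, module, inputs, output):
        x = inputs[0]
        mu = x.mean(dim=(0, 2, 3))                      # target batch mean per channel
        var = x.var(dim=(0, 2, 3), unbiased=False)      # target batch variance per channel
        r_mu, r_var = module.running_mean, module.running_var
        # KL( N(mu, var) || N(r_mu, r_var) ), summed over channels
        kl = 0.5 * (torch.log(r_var / var) + (var + (mu - r_mu) ** 2) / r_var - 1.0)
        self.losses.append(kl.sum())

    def pop_loss(self):
        loss = torch.stack(self.losses).sum()
        self.losses = []
        return loss

    def remove(self):
        for h in self.handles:
            h.remove()


# Usage (hypothetical names): keep the model in eval() mode so the stored source
# running statistics are not overwritten, run a target batch through it, then add
# the statistics-matching term to the adaptation objective.
# matcher = BNStatsMatching(model)
# model.eval()
# _ = model(x_target)          # hooks record per-layer target batch statistics
# loss = matcher.pop_loss()
# loss.backward()
```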