VDM-DA: Virtual Domain Modeling for Source Data-free Domain Adaptation
- URL: http://arxiv.org/abs/2103.14357v1
- Date: Fri, 26 Mar 2021 09:56:40 GMT
- Title: VDM-DA: Virtual Domain Modeling for Source Data-free Domain Adaptation
- Authors: Jiayi Tian, Jing Zhang, Wen Li, Dong Xu
- Abstract summary: Domain adaptation aims to leverage a label-rich domain (the source domain) to help model learning in a label-scarce domain (the target domain).
Access to the source domain samples may not always be feasible in real-world applications due to problems such as storage, transmission, and privacy issues.
We propose a novel approach referred to as Virtual Domain Modeling (VDM-DA).
- Score: 26.959377850768423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation aims to leverage a label-rich domain (the source domain) to
help model learning in a label-scarce domain (the target domain). Most domain
adaptation methods require the co-existence of source and target domain samples
to reduce the distribution mismatch; however, access to the source domain
samples may not always be feasible in real-world applications due to
problems such as storage, transmission, and privacy issues. In this
work, we deal with the source data-free unsupervised domain adaptation problem,
and propose a novel approach referred to as Virtual Domain Modeling (VDM-DA).
The virtual domain acts as a bridge between the source and target domains. On
one hand, we generate virtual domain samples based on an approximated Gaussian
Mixture Model (GMM) in the feature space with the pre-trained source model,
such that the virtual domain maintains a distribution similar to that of the source
domain without accessing the original source data. On the other hand, we
also design an effective distribution alignment method to reduce the
distribution divergence between the virtual domain and the target domain by
gradually improving the compactness of the target domain distribution through
model learning. In this way, we successfully achieve the goal of distribution
alignment between the source and target domains by training deep networks
without accessing the source domain data. We conduct extensive experiments
on benchmark datasets for both 2D image-based and 3D point cloud-based
cross-domain object recognition tasks, where the proposed method, referred to as
Domain Adaptation with Virtual Domain Modeling (VDM-DA), achieves
state-of-the-art performance on all datasets.
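The abstract describes two technical components: (i) sampling labeled virtual-domain features from a GMM approximated in feature space with the pre-trained source model, and (ii) aligning the virtual and target distributions while gradually making the target distribution more compact. The following is a minimal PyTorch sketch of how such a pipeline could look; it is not the authors' implementation. Deriving the per-class Gaussian means from the source classifier's weight vectors, using an MMD loss for alignment, and using conditional-entropy minimization as the compactness term are illustrative assumptions not stated in the abstract, and all function names are hypothetical.

```python
# Hedged sketch of a source data-free adaptation step with a "virtual domain".
import torch
import torch.nn.functional as F

def sample_virtual_features(classifier_weight, sigma=0.1, n_per_class=64):
    """Draw labeled virtual-domain features from a class-conditional isotropic GMM.
    classifier_weight: (num_classes, feat_dim) weights of the pre-trained source
    classifier, used here (an assumption) to place one Gaussian mean per class."""
    means = F.normalize(classifier_weight, dim=1)
    num_classes, feat_dim = means.shape
    labels = torch.arange(num_classes).repeat_interleave(n_per_class)
    feats = means[labels] + sigma * torch.randn(num_classes * n_per_class, feat_dim)
    return feats, labels

def mmd_loss(x, y, bandwidth=1.0):
    """Squared MMD with a single Gaussian kernel between virtual (x) and target (y) features."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def target_compactness_loss(logits):
    """Conditional entropy of target predictions; lower values imply more compact clusters."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=1).mean()

# Usage sketch (hypothetical names): with the source model split into
# `feature_extractor` and a linear `classifier`, one adaptation step on a batch
# of unlabeled target images could combine the two losses, e.g.
#   virtual_feats, _ = sample_virtual_features(classifier.weight.detach())
#   target_feats = feature_extractor(target_images)
#   loss = mmd_loss(virtual_feats, target_feats) + 0.1 * target_compactness_loss(classifier(target_feats))
```

In a training loop, the feature extractor (and optionally the classifier) would be updated by minimizing the combined loss on unlabeled target batches, so that target features gradually move toward the class-wise virtual-domain clusters; no source images are required at any point.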
Related papers
- DomainVerse: A Benchmark Towards Real-World Distribution Shifts For Tuning-Free Adaptive Domain Generalization [27.099706316752254]
We establish a novel dataset, DomainVerse, for Adaptive Domain Generalization (ADG).
Benefiting from the introduced hierarchical definition of domain shifts, DomainVerse consists of about 0.5 million images from 390 fine-grained realistic domains.
We propose two methods called Domain CLIP and Domain++ CLIP for tuning-free adaptive domain generalization.
arXiv Detail & Related papers (2024-03-05T07:10:25Z)
- SIDE: Self-supervised Intermediate Domain Exploration for Source-free Domain Adaptation [36.470026809824674]
Domain adaptation aims to alleviate the domain shift when transferring the knowledge learned from the source domain to the target domain.
Due to privacy issues, source-free domain adaptation (SFDA) has recently drawn increasing demand yet remains challenging.
This paper proposes self-supervised intermediate domain exploration (SIDE) that effectively bridges the domain gap with an intermediate domain.
arXiv Detail & Related papers (2023-10-13T07:50:37Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Generalized Source-free Domain Adaptation [47.907168218249694]
We propose a new domain adaptation paradigm called Generalized Source-free Domain Adaptation (G-SFDA).
In terms of target performance, our method is on par with or better than existing DA and SFDA methods; in particular, it achieves state-of-the-art performance (85.4%) on VisDA.
arXiv Detail & Related papers (2021-08-03T16:34:12Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, where the model parameters are adapted to samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
arXiv Detail & Related papers (2021-03-19T01:22:12Z)
- Multi-Source Domain Adaptation with Collaborative Learning for Semantic Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z)
- Unsupervised Model Adaptation for Continual Semantic Segmentation [15.820660013260584]
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
We provide theoretical analysis and explain conditions under which our algorithm is effective.
Experiments on benchmark adaptation tasks demonstrate that our method achieves competitive performance, even compared with joint UDA approaches.
arXiv Detail & Related papers (2020-09-26T04:55:50Z)
- Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.