UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation
- URL: http://arxiv.org/abs/2401.12596v2
- Date: Fri, 15 Mar 2024 07:44:00 GMT
- Title: UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation
- Authors: Hengjia Li, Yang Liu, Yuqi Lin, Zhanwei Zhang, Yibo Zhao, Weihang Pan, Tu Zheng, Zheng Yang, Yuchun Jiang, Boxi Wu, Deng Cai
- Abstract summary: We propose UniHDA, a framework for generative hybrid domain adaptation with multi-modal references from multiple domains.
Our framework is generator-agnostic and versatile, supporting multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
- Score: 22.003900281544766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, generative domain adaptation has achieved remarkable progress, enabling us to adapt a pre-trained generator to a new target domain. However, existing methods simply adapt the generator to a single target domain and are limited to a single modality, either text-driven or image-driven. Moreover, they cannot maintain consistency with the source domain well, which impedes the inheritance of its diversity. In this paper, we propose UniHDA, a unified and versatile framework for generative hybrid domain adaptation with multi-modal references from multiple domains. We use a CLIP encoder to project multi-modal references into a unified embedding space and then linearly interpolate the direction vectors from multiple target domains to achieve hybrid domain adaptation. To ensure consistency with the source domain, we propose a novel cross-domain spatial structure (CSS) loss that maintains detailed spatial structure information between the source and target generators. Experiments show that the adapted generator can synthesise realistic images with various attribute compositions. Additionally, our framework is generator-agnostic and versatile, supporting multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
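To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch of (a) linearly interpolating CLIP direction vectors from several target domains and (b) a toy structure-preserving term in the spirit of the CSS loss. The function names, the weighting scheme, and the exact form of the loss are illustrative assumptions, not the paper's implementation; CLIP embeddings and generator features are assumed to be precomputed.

```python
# Minimal sketch only; names and formulations are assumptions, not UniHDA's actual code.
import torch
import torch.nn.functional as F

def hybrid_direction(source_emb, target_embs, weights):
    """Linearly interpolate per-domain direction vectors in CLIP space.

    source_emb:  CLIP embedding of the source domain (text or mean image), shape (D,)
    target_embs: list of CLIP embeddings of the multi-modal references, one per target domain
    weights:     interpolation coefficients, assumed to sum to 1
    """
    directions = [F.normalize(t - source_emb, dim=-1) for t in target_embs]
    mixed = sum(w * d for w, d in zip(weights, directions))
    return F.normalize(mixed, dim=-1)

def css_like_loss(feat_src, feat_tgt):
    """Toy cross-domain spatial structure term: match pairwise spatial self-similarities
    of intermediate generator features of shape (B, C, H, W). This only guesses at the
    spirit of the CSS loss, not its published definition."""
    def self_similarity(f):
        b, c, h, w = f.shape
        f = F.normalize(f.reshape(b, c, h * w), dim=1)   # unit feature vector per location
        return torch.einsum("bci,bcj->bij", f, f)        # (B, HW, HW) similarity map
    return F.mse_loss(self_similarity(feat_tgt), self_similarity(feat_src))
```

Presumably the mixed direction then guides fine-tuning of the generator toward the hybrid domain, while the structure term is evaluated between features of the frozen source generator and the adapted one, matching the abstract's goal of preserving spatial structure.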
Related papers
- Few-shot Hybrid Domain Adaptation of Image Generators [14.779903669510846]
Few-shot Hybrid Domain Adaptation aims to acquire an adapted generator that preserves the integrated attributes of all target domains.
We introduce a discriminator-free framework that directly encodes different domains' images into well-separable subspaces.
Experiments show that our method can obtain numerous domain-specific attributes in a single adapted generator.
arXiv Detail & Related papers (2023-10-30T09:35:43Z)
- Domain Re-Modulation for Few-Shot Generative Domain Adaptation [71.47730150327818]
Generative Domain Adaptation (GDA) involves transferring a pre-trained generator from one domain to a new domain using only a few reference images.
Inspired by the way human brains acquire knowledge in new domains, we present an innovative generator structure called Domain Re-Modulation (DoRM).
DoRM not only meets the criteria of high quality, large synthesis diversity, and cross-domain consistency, but also incorporates memory and domain association.
arXiv Detail & Related papers (2023-02-06T03:55:35Z)
- Generalized One-shot Domain Adaption of Generative Adversarial Networks [72.84435077616135]
The adaptation of a Generative Adversarial Network (GAN) aims to transfer a pre-trained GAN to a given domain with limited training data.
We consider that the adaptation from source domain to target domain can be decoupled into two parts: the transfer of global style like texture and color, and the emergence of new entities that do not belong to the source domain.
Our core objective is to constrain the gap between the internal distributions of the reference and syntheses by sliced Wasserstein distance.
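For reference, a generic sliced Wasserstein distance between two sets of features can be sketched as below; this is the standard random-projection formulation, not necessarily that paper's exact objective or implementation.

```python
# Generic sliced Wasserstein sketch; not taken from the cited paper's code.
import torch

def sliced_wasserstein(x, y, n_proj=128):
    """Approximate the 1-Wasserstein distance between point clouds x and y of shape
    (N, D), assuming equal sample counts, by averaging sorted 1-D projections."""
    d = x.shape[1]
    theta = torch.randn(n_proj, d, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)   # random unit projection directions
    proj_x = torch.sort(x @ theta.T, dim=0).values    # (N, n_proj)
    proj_y = torch.sort(y @ theta.T, dim=0).values    # (N, n_proj)
    return (proj_x - proj_y).abs().mean()
```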
arXiv Detail & Related papers (2022-09-08T09:24:44Z)
- Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaptation aims to transfer a pre-trained generator on one domain to a new domain using only one reference image.
We present a novel one-shot generative domain adaptation method, DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
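As a rough illustration of adaptive kernels producing instance-adaptive residuals (an assumption about the general technique, not DIDA-Net's actual architecture), a per-sample dynamic 1x1 convolution can be written as follows.

```python
# Illustrative per-sample dynamic convolution; not the cited paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAdaptiveResidual(nn.Module):
    """A tiny controller predicts a per-sample 1x1 kernel whose output is added
    back to the shared features as an instance-specific residual."""
    def __init__(self, channels):
        super().__init__()
        self.controller = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                   # (B, C, 1, 1) global context
            nn.Flatten(),
            nn.Linear(channels, channels * channels),  # per-sample 1x1 kernel weights
        )

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.controller(x).view(b * c, c, 1, 1)
        # Grouped conv trick: apply a different 1x1 kernel to each sample in the batch.
        residual = F.conv2d(x.reshape(1, b * c, h, w), kernels, groups=b)
        return x + residual.view(b, c, h, w)
```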
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, where the model parameters are adapted to samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
arXiv Detail & Related papers (2021-03-19T01:22:12Z)
- Multi-Domain Level Generation and Blending with Sketches via Example-Driven BSP and Variational Autoencoders [3.5234963231260177]
We present a PCGML approach for level generation that is able to recombine, adapt, and reuse structural patterns.
We show that our approach is able to blend domains together while retaining structural components.
arXiv Detail & Related papers (2020-06-17T12:21:22Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between a labeled source domain and a sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)