Few-shot Hybrid Domain Adaptation of Image Generators
- URL: http://arxiv.org/abs/2310.19378v2
- Date: Wed, 6 Dec 2023 09:30:09 GMT
- Title: Few-shot Hybrid Domain Adaptation of Image Generators
- Authors: Hengjia Li, Yang Liu, Linxuan Xia, Yuqi Lin, Tu Zheng, Zheng Yang,
Wenxiao Wang, Xiaohui Zhong, Xiaobo Ren, Xiaofei He
- Abstract summary: Few-shot Hybrid Domain Adaptation aims to acquire an adapted generator that preserves the integrated attributes of all target domains.
We introduce a discriminator-free framework that directly encodes different domains' images into well-separable subspaces.
Experiments show that our method can obtain numerous domain-specific attributes in a single adapted generator.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can a pre-trained generator be adapted to a hybrid of multiple target
domains and generate images with their integrated attributes? In this work,
we introduce a new task -- Few-shot Hybrid Domain Adaptation (HDA). Given a
source generator and several target domains, HDA aims to acquire an adapted
generator that preserves the integrated attributes of all target domains,
without overriding the source domain's characteristics. Compared with Domain
Adaptation (DA), HDA offers greater flexibility and versatility to adapt
generators to more composite and expansive domains. Simultaneously, HDA also
presents more challenges than DA as we have access only to images from
individual target domains and lack authentic images from the hybrid domain. To
address this issue, we introduce a discriminator-free framework that directly
encodes different domains' images into well-separable subspaces. To achieve
HDA, we propose a novel directional subspace loss comprised of a distance loss
and a direction loss. Concretely, the distance loss blends the attributes of
all target domains by reducing the distances from generated images to all
target subspaces. The direction loss preserves the characteristics of the
source domain by guiding the adaptation along the direction perpendicular to
the subspaces.
Experiments show that our method can obtain numerous domain-specific attributes
in a single adapted generator, which surpasses the baseline methods in semantic
similarity, image fidelity, and cross-domain consistency.
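The two loss terms described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it assumes each target subspace is spanned by the embeddings of that domain's few-shot images (e.g. from a fixed encoder such as CLIP), and all function names are hypothetical.

```python
import numpy as np

def subspace_projection(basis):
    # basis: (k, d) array whose rows span one target subspace
    # (e.g. encoder embeddings of that domain's few-shot images).
    # Returns the (d, d) orthogonal projector onto that subspace.
    q, _ = np.linalg.qr(basis.T)  # orthonormal columns spanning the subspace
    return q @ q.T

def distance_loss(gen_emb, projectors):
    # Mean squared distance from a generated embedding to every target
    # subspace; minimizing this blends the attributes of all targets.
    return np.mean([np.sum((gen_emb - P @ gen_emb) ** 2) for P in projectors])

def direction_loss(gen_emb, src_emb, projectors):
    # Penalize the component of the adaptation step that lies *inside*
    # the target subspaces, so the update stays perpendicular to them
    # and the source domain's characteristics are preserved.
    delta = gen_emb - src_emb
    return np.mean([np.sum((P @ delta) ** 2) for P in projectors])
```

Under this sketch, a generated embedding already lying in a target subspace incurs zero distance loss, and an adaptation step orthogonal to every target subspace incurs zero direction loss.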
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z) - UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation [22.003900281544766]
We propose UniHDA, a framework for generative hybrid domain adaptation with multi-modal references from multiple domains.
Our framework is generator-agnostic and versatile to multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
arXiv Detail & Related papers (2024-01-23T09:49:24Z) - Single Domain Dynamic Generalization for Iris Presentation Attack
Detection [41.126916126040655]
Iris presentation attack detection achieves great success under intra-domain settings but degrades easily on unseen domains.
We propose a Single Domain Dynamic Generalization (SDDG) framework, which exploits domain-invariant and domain-specific features on a per-sample basis.
The proposed method is effective and outperforms the state-of-the-art on the LivDet-Iris 2017 dataset.
arXiv Detail & Related papers (2023-05-22T07:54:13Z) - Domain Re-Modulation for Few-Shot Generative Domain Adaptation [71.47730150327818]
Generative Domain Adaptation (GDA) involves transferring a pre-trained generator from one domain to a new domain using only a few reference images.
Inspired by the way human brains acquire knowledge in new domains, we present an innovative generator structure called Domain Re-Modulation (DoRM).
DoRM not only meets the criteria of high quality, large synthesis diversity, and cross-domain consistency, but also incorporates memory and domain association.
arXiv Detail & Related papers (2023-02-06T03:55:35Z) - Towards Diverse and Faithful One-shot Adaption of Generative Adversarial
Networks [54.80435295622583]
One-shot generative domain adaptation aims to transfer a pre-trained generator on one domain to a new domain using only a single reference image.
We present a novel one-shot generative domain adaptation method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z) - Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation [82.52514546441247]
We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-19T05:07:22Z) - MADAN: Multi-source Adversarial Domain Aggregation Network for Domain
Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.