HyperDomainNet: Universal Domain Adaptation for Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2210.08884v4
- Date: Thu, 30 Mar 2023 15:15:11 GMT
- Title: HyperDomainNet: Universal Domain Adaptation for Generative Adversarial
Networks
- Authors: Aibek Alanov, Vadim Titov, Dmitry Vetrov
- Abstract summary: We introduce a novel domain-modulation technique that allows us to optimize only a 6-thousand-dimensional vector instead of the 30 million weights of StyleGAN2 to adapt to a target domain.
Inspired by this reduction in the size of the optimized parameter space, we consider the problem of multi-domain adaptation of GANs.
We propose HyperDomainNet, a hypernetwork that predicts our parameterization given the target domain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The domain adaptation framework for GANs has achieved great progress in recent years as the main successful approach to training contemporary GANs with very limited training data. In this work, we significantly improve this framework by proposing an extremely compact parameter space for fine-tuning the generator. We introduce a novel domain-modulation technique that allows us to optimize only a 6-thousand-dimensional vector instead of the 30 million weights of StyleGAN2 to adapt to a target domain. We apply this parameterization to state-of-the-art domain adaptation methods and show that it has almost the same expressiveness as the full parameter space. Additionally, we propose a new regularization loss that considerably enhances the diversity of the fine-tuned generator. Inspired by the reduction in the size of the optimized parameter space, we consider the problem of multi-domain adaptation of GANs, i.e., the setting in which the same model can adapt to several domains depending on the input query. We propose HyperDomainNet, a hypernetwork that predicts our parameterization given the target domain. We empirically confirm that it can successfully learn a number of domains at once and may even generalize to unseen domains. Source code can be found at https://github.com/MACderRu/HyperDomainNet
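To make the domain-modulation idea concrete, below is a minimal PyTorch sketch written under stated assumptions: the generator is a toy stand-in (not StyleGAN2), the loss is a placeholder for the paper's CLIP-based guidance and diversity regularizer, and all names and dimensions are illustrative rather than the authors' actual code (see the linked repository for that). The sketch shows the two ingredients from the abstract: training only a small per-channel modulation vector while the generator stays frozen, and predicting that vector with a hypernetwork for multi-domain adaptation.

import torch
import torch.nn as nn

# Toy stand-in for a frozen, pretrained generator: a latent code z is mapped to
# per-channel "style" activations, which are then decoded into an image.
class ToyGenerator(nn.Module):
    def __init__(self, z_dim=64, style_dim=6000, img_dim=3 * 32 * 32):
        super().__init__()
        self.mapping = nn.Linear(z_dim, style_dim)
        self.synthesis = nn.Linear(style_dim, img_dim)

    def forward(self, z, d):
        styles = self.mapping(z)
        return self.synthesis(styles * d)  # channel-wise domain modulation

gen = ToyGenerator()
for p in gen.parameters():
    p.requires_grad_(False)  # the generator's ~30M weights stay frozen

d = torch.ones(6000, requires_grad=True)  # the only trainable parameters (~6k)
opt = torch.optim.Adam([d], lr=2e-3)

def domain_loss(images):
    # Placeholder objective; the paper uses CLIP-based guidance plus a
    # diversity regularization loss, neither of which is reproduced here.
    return images.pow(2).mean()

for _ in range(3):
    z = torch.randn(4, 64)
    loss = domain_loss(gen(z, d))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Multi-domain variant: a small hypernetwork predicts the modulation vector
# from a domain embedding (e.g. a text/image encoding of the target domain),
# instead of learning a separate vector per domain.
hyper = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 6000))
domain_embedding = torch.randn(512)  # stand-in for a real domain encoding
d_predicted = hyper(domain_embedding)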
Related papers
- Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation [40.667166043101076]
We propose a small adapter for rectifying diverse target domain styles to the source domain.
The adapter is trained to rectify the image features from diverse synthesized target domains to align with the source domain.
Our method achieves promising results on cross-domain few-shot semantic segmentation tasks.
arXiv Detail & Related papers (2024-04-16T07:07:40Z)
- UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation [22.003900281544766]
We propose UniHDA, a framework for generative hybrid domain adaptation with multi-modal references from multiple domains.
Our framework is generator-agnostic and versatile to multiple generators, e.g., StyleGAN, EG3D, and Diffusion Models.
arXiv Detail & Related papers (2024-01-23T09:49:24Z)
- AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models [127.04370753583261]
Pretrained language models (PLMs) are trained on massive corpora, but often need to specialize to specific domains.
A solution is to use a related-domain adapter for the novel domain at test time.
We introduce AdapterSoup, an approach that performs weight-space averaging of adapters trained on different domains.
arXiv Detail & Related papers (2023-02-14T13:09:23Z)
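As a rough illustration of the weight-space averaging idea mentioned in the AdapterSoup summary above (a generic sketch, not AdapterSoup's actual interface), adapters fine-tuned on different domains can be combined at test time by averaging their parameters:

import torch

def average_adapters(adapter_state_dicts, weights=None):
    """Average several adapter checkpoints parameter-by-parameter.

    `adapter_state_dicts` is a list of state dicts with identical keys and
    shapes; `weights` optionally specifies a convex combination instead of a
    uniform mean. The averaged dict can then be loaded into a single adapter
    for use on a novel domain.
    """
    n = len(adapter_state_dicts)
    if weights is None:
        weights = [1.0 / n] * n
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, adapter_state_dicts))
        for key in adapter_state_dicts[0]
    }

# Example with two toy adapters:
a = {"lora.weight": torch.randn(8, 8)}
b = {"lora.weight": torch.randn(8, 8)}
merged = average_adapters([a, b])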
- StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation [4.943054375935879]
We provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model.
We propose new efficient and lightweight parameterizations of StyleGAN for domain adaptation.
arXiv Detail & Related papers (2022-12-20T13:07:20Z)
- DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains [26.95350186287616]
Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images.
We propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains.
arXiv Detail & Related papers (2022-11-26T12:46:40Z)
- Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence [73.89838982331453]
In unsupervised domain adaptation settings, we demonstrate that replacing the source domain with adversarial examples improves accuracy on the target domain.
We conducted a range of experiments and achieved improvement in accuracy on Digits and Office-Home datasets.
arXiv Detail & Related papers (2022-07-18T21:24:05Z)
- Connecting adversarial attacks and optimal transport for domain adaptation [116.50515978657002]
In domain adaptation, the goal is to adapt a classifier trained on the source domain samples to the target domain.
In our method, we use optimal transport to map target samples to a domain called the source fiction.
Our main idea is to generate the source fiction by a c-cyclically monotone transformation over the target domain.
arXiv Detail & Related papers (2022-05-30T20:45:55Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
- Self-Adversarial Disentangling for Specific Domain Adaptation [52.1935168534351]
Domain adaptation aims to bridge the domain shifts between the source and target domains.
Recent methods typically do not consider explicit prior knowledge on a specific dimension.
arXiv Detail & Related papers (2021-08-08T02:36:45Z)
- Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, where the model parameters are adapted to samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
arXiv Detail & Related papers (2021-03-19T01:22:12Z)