DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains
- URL: http://arxiv.org/abs/2211.14554v1
- Date: Sat, 26 Nov 2022 12:46:40 GMT
- Title: DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains
- Authors: Seongtae Kim, Kyoungkook Kang, Geonung Kim, Seung-Hwan Baek, Sunghyun Cho
- Abstract summary: Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images.
We propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains.
- Score: 26.95350186287616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot domain adaptation to multiple domains aims to learn a complex image
distribution across multiple domains from a few training images. A naïve
solution here is to train a separate model for each domain using few-shot
domain adaptation methods. Unfortunately, this approach mandates
linearly-scaled computational resources both in memory and computation time
and, more importantly, such separate models cannot exploit the shared knowledge
between target domains. In this paper, we propose DynaGAN, a novel few-shot
domain-adaptation method for multiple target domains. DynaGAN has an adaptation
module, which is a hyper-network that dynamically adapts a pretrained GAN model
to multiple target domains. Hence, we can fully exploit the shared
knowledge across target domains and avoid the linearly-scaled computational
requirements. As adapting a large GAN model is still computationally
challenging, we design our adaptation module to be lightweight using a rank-1
tensor decomposition. Lastly, we propose a contrastive-adaptation loss suitable for
multi-domain few-shot adaptation. We validate the effectiveness of our method
through extensive qualitative and quantitative evaluations.
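The adaptation mechanism lends itself to a short illustration. Below is a minimal PyTorch sketch of a hyper-network that modulates a frozen generator's convolution weights through a rank-1 update, in the spirit of the abstract; the class and parameter names (`Rank1Adapter`, `domain_emb`) are hypothetical, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class Rank1Adapter(nn.Module):
    """Hypothetical sketch: modulate a frozen conv weight of shape
    (out_ch, in_ch, k, k) with a rank-1 update predicted per domain."""
    def __init__(self, out_ch, in_ch, num_domains, emb_dim=64):
        super().__init__()
        self.domain_emb = nn.Embedding(num_domains, emb_dim)
        # Hyper-network heads that predict the two rank-1 factors.
        self.to_u = nn.Linear(emb_dim, out_ch)  # output-channel factor
        self.to_v = nn.Linear(emb_dim, in_ch)   # input-channel factor

    def forward(self, weight, domain_id):
        e = self.domain_emb(domain_id)  # (emb_dim,)
        u, v = self.to_u(e), self.to_v(e)
        # Rank-1 multiplicative modulation, broadcast over kernel dims.
        scale = 1.0 + torch.outer(u, v)[..., None, None]
        return weight * scale  # same shape as the original weight


# Usage sketch: modulate one frozen layer of a pretrained generator.
adapter = Rank1Adapter(out_ch=512, in_ch=512, num_domains=10)
frozen_w = torch.randn(512, 512, 3, 3)  # stand-in for a pretrained weight
new_w = adapter(frozen_w, torch.tensor(3))  # weights for target domain 3
```

An adapter of this form adds only about `out_ch + in_ch` hyper-network outputs per layer on top of a shared domain embedding, which is how a rank-1 structure keeps multi-domain adaptation far cheaper than cloning the full generator for each domain.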
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target samples to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z)
- Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation [3.367755441623275]
Multi-source unsupervised domain adaptation (MUDA) aims to transfer knowledge from related source domains to an unlabeled target domain.
We propose a novel approach called Dynamic Domain Discrepancy Adjustment for Active Multi-Domain Adaptation (D3AAMDA).
This mechanism controls the alignment level of features between each source domain and the target domain, effectively leveraging the local advantageous feature information within the source domains.
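The "alignment level" control in the last sentence can be pictured as a weighted discrepancy loss. The sketch below uses a linear-kernel MMD and a hypothetical weight vector `alpha`; it is a generic reading of such a mechanism, not the D3AAMDA formulation.

```python
import torch

def linear_mmd(f_src, f_tgt):
    """Linear-kernel MMD between two feature batches of shape (B, D):
    the squared distance between their batch means."""
    return (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()

def weighted_alignment_loss(source_feats, target_feats, alpha):
    """Hypothetical dynamic adjustment: alpha[i] sets how strongly source
    domain i is pulled toward the target domain."""
    return sum(a * linear_mmd(f, target_feats)
               for a, f in zip(alpha, source_feats))
```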
arXiv Detail & Related papers (2023-07-26T09:40:19Z)
- Multi-Domain Learning with Modulation Adapters [33.54630534228469]
Multi-domain learning aims to handle related tasks, such as image classification across multiple domains, simultaneously.
Modulation Adapters update the convolutional weights of the model in a multiplicative manner for each task.
Our approach yields excellent results, with accuracies that are comparable to or better than those of existing state-of-the-art approaches.
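Taken at face value, a multiplicative per-task update to convolutional weights can be sketched as follows; the names (`ModulatedConv`, `task_scales`) are hypothetical and this is a generic illustration, not the paper's exact adapter design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedConv(nn.Module):
    """Sketch: one shared conv whose weights are rescaled per task.
    `task_scales` is a hypothetical name for the per-task parameters."""
    def __init__(self, in_ch, out_ch, num_tasks, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        # One multiplicative scale per task and output channel.
        self.task_scales = nn.Parameter(torch.ones(num_tasks, out_ch, 1, 1, 1))

    def forward(self, x, task_id):
        w = self.conv.weight * self.task_scales[task_id]  # (out_ch, in_ch, k, k)
        return F.conv2d(x, w, self.conv.bias, padding=self.conv.padding)
```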
arXiv Detail & Related papers (2023-07-17T14:40:16Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models carry rich knowledge for tackling diverse small-scale downstream tasks.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption so that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
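The idea of generating instance-adaptive residuals with dynamic kernels can be sketched generically; the sizes and names below are illustrative assumptions, not taken from DIDA-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAdaptiveResidual(nn.Module):
    """Sketch of the general idea: predict a small depthwise kernel from each
    instance's own features and add the resulting residual back."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        # Hyper-head: global context -> one k x k kernel per channel.
        self.kernel_head = nn.Linear(channels, channels * k * k)

    def forward(self, x):  # x: (B, C, H, W)
        b, c, h, w = x.shape
        ctx = x.mean(dim=(2, 3))  # (B, C) global context per instance
        kernels = self.kernel_head(ctx).view(b * c, 1, self.k, self.k)
        # Grouped conv applies each instance's own kernels (depthwise).
        res = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return x + res.view(b, c, h, w)  # instance-adaptive residual
```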
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- CADA: Multi-scale Collaborative Adversarial Domain Adaptation for Unsupervised Optic Disc and Cup Segmentation [3.587294308501889]
We propose a novel unsupervised domain adaptation framework, called Collaborative Adversarial Domain Adaptation (CADA).
Our proposed CADA is an interactive paradigm that presents an exquisite collaborative adaptation through both adversarial learning and ensembling weights at different network layers.
We show that our CADA model incorporating multi-scale input training can overcome performance degradation and outperform state-of-the-art domain adaptation methods.
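The weight-ensembling side of this collaboration can be illustrated with a plain exponential moving average between two collaborating networks; CADA ensembles at different network layers, so treat this as a simplified sketch rather than its actual scheme.

```python
import torch

@torch.no_grad()
def ensemble_weights(net_a, net_b, momentum=0.99):
    """Sketch: push net_b's parameters toward an exponential moving
    average of net_a's, a common way to ensemble weights in adaptation."""
    for p_b, p_a in zip(net_b.parameters(), net_a.parameters()):
        p_b.mul_(momentum).add_(p_a, alpha=1.0 - momentum)
```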
arXiv Detail & Related papers (2021-10-05T23:44:26Z)
- Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
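The mutual-learning coupling between branch networks is typically a symmetric KL term between their predictions; the sketch below follows the generic deep-mutual-learning recipe and is not claimed to match ML-MSDA's exact losses.

```python
import torch.nn.functional as F

def mutual_learning_loss(logits_a, logits_b):
    """Sketch: each branch mimics the other's predictions.
    F.kl_div expects log-probabilities as its first argument."""
    p_a, p_b = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    log_a, log_b = F.log_softmax(logits_a, dim=1), F.log_softmax(logits_b, dim=1)
    return (F.kl_div(log_a, p_b.detach(), reduction="batchmean")
            + F.kl_div(log_b, p_a.detach(), reduction="batchmean"))
```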
arXiv Detail & Related papers (2020-03-29T04:31:43Z)
- Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits [101.68525259222164]
We present a study of various distance-based measures in the context of NLP tasks that characterize the dissimilarity between domains based on sample estimates.
We develop a DistanceNet model which uses these distance measures as an additional loss function to be minimized jointly with the task's loss function.
We extend this model to a novel DistanceNet-Bandit model, which employs a multi-armed bandit controller to dynamically switch between multiple source domains.
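A multi-armed bandit controller over source domains can be sketched with standard UCB1; the reward signal (e.g., a validation improvement) is an assumption here, not the paper's exact formulation.

```python
import math

class UCB1DomainController:
    """Sketch: UCB1 bandit that picks which source domain to train on next."""
    def __init__(self, num_domains):
        self.counts = [0] * num_domains
        self.values = [0.0] * num_domains

    def select(self):
        for d, n in enumerate(self.counts):
            if n == 0:
                return d  # try every domain once before exploiting
        total = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(total) / n)
               for v, n in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, domain, reward):
        self.counts[domain] += 1
        # Incremental mean of observed rewards for this domain.
        self.values[domain] += (reward - self.values[domain]) / self.counts[domain]


# Usage sketch: each step, train on a batch from the selected source domain,
# then report a scalar reward such as the resulting dev-set gain.
ctrl = UCB1DomainController(num_domains=4)
d = ctrl.select()
ctrl.update(d, reward=0.01)
```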
arXiv Detail & Related papers (2020-01-13T15:53:41Z)