Smoothness Similarity Regularization for Few-Shot GAN Adaptation
- URL: http://arxiv.org/abs/2308.09717v1
- Date: Fri, 18 Aug 2023 17:59:53 GMT
- Title: Smoothness Similarity Regularization for Few-Shot GAN Adaptation
- Authors: Vadim Sushko, Ruyu Wang, Juergen Gall
- Abstract summary: We propose a new smoothness similarity regularization that transfers the inherently learned smoothness of the pre-trained GAN to the few-shot target domain.
We evaluate our approach by adapting an unconditional and a class-conditional GAN to diverse few-shot target domains.
- Score: 16.92497517282215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of few-shot GAN adaptation aims to adapt a pre-trained GAN model to
a small dataset with very few training images. While existing methods perform
well when the dataset for pre-training is structurally similar to the target
dataset, the approaches suffer from training instabilities or memorization
issues when the objects in the two domains have a very different structure. To
mitigate this limitation, we propose a new smoothness similarity regularization
that transfers the inherently learned smoothness of the pre-trained GAN to the
few-shot target domain even if the two domains are very different. We evaluate
our approach by adapting an unconditional and a class-conditional GAN to
diverse few-shot target domains. Our proposed method significantly outperforms
prior few-shot GAN adaptation methods in the challenging case of structurally
dissimilar source-target domains, while performing on par with the state of the
art for similar source-target domains.
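The abstract does not give the exact loss, but the idea of transferring the pre-trained generator's smoothness can be illustrated with a toy sketch: perturb a latent code slightly and penalize differences between the local output changes of the frozen source generator and the adapted target generator. All function names here are hypothetical, and the toy linear "generators" stand in for real GANs; the actual method likely operates on intermediate generator features rather than raw outputs.

```python
import numpy as np

def local_change(gen, z, delta):
    # Finite-difference response of a generator to a small latent perturbation.
    return gen(z + delta) - gen(z)

def smoothness_similarity_loss(gen_src, gen_tgt, z, sigma=0.05, seed=0):
    # Hypothetical sketch: match the local smoothness of the adapted (target)
    # generator to that of the frozen pre-trained (source) generator.
    rng = np.random.default_rng(seed)
    delta = sigma * rng.standard_normal(z.shape)
    d_src = local_change(gen_src, z, delta)
    d_tgt = local_change(gen_tgt, z, delta)
    return float(np.mean((d_src - d_tgt) ** 2))

# Toy linear "generators" standing in for the source and target GANs.
W = np.array([[1.0, 0.5], [0.0, 2.0]])
gen_src = lambda z: W @ z
gen_tgt_same = lambda z: W @ z          # same local behavior as the source
gen_tgt_diff = lambda z: (3.0 * W) @ z  # different local behavior

z = np.array([0.3, -0.7])
loss_same = smoothness_similarity_loss(gen_src, gen_tgt_same, z)
loss_diff = smoothness_similarity_loss(gen_src, gen_tgt_diff, z)
```

With identical local behavior the loss is zero, while a generator whose outputs change differently under the same perturbation incurs a positive penalty, which is the sense in which such a term discourages the adapted model from losing the source model's smoothness.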
Related papers
- DARNet: Bridging Domain Gaps in Cross-Domain Few-Shot Segmentation with Dynamic Adaptation [20.979759016826378]
Few-shot segmentation (FSS) aims to segment novel classes in a query image by using only a small number of supporting images from base classes.
In cross-domain FSS, leveraging features from label-rich domains for resource-constrained domains poses challenges due to domain discrepancies.
This work presents a Dynamically Adaptive Refine (DARNet) method that aims to balance generalization and specificity for CD-FSS.
arXiv Detail & Related papers (2023-12-08T03:03:22Z)
- Adaptive Semantic Consistency for Cross-domain Few-shot Classification [27.176106714652327]
Cross-domain few-shot classification (CD-FSC) aims to identify novel target classes with a few samples.
We propose a simple plug-and-play Adaptive Semantic Consistency framework, which improves cross-domain robustness.
The proposed ASC enables explicit transfer of source domain knowledge to prevent the model from overfitting the target domain.
arXiv Detail & Related papers (2023-08-01T15:37:19Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Exploiting Diverse Characteristics and Adversarial Ambivalence for Domain Adaptive Segmentation [20.13548631627542]
Adapting semantic segmentation models to new domains is an important but challenging problem.
We propose a condition-guided adaptation framework that is empowered by a special progressive adversarial training mechanism and a novel self-training policy.
We evaluate our method on various adaptation scenarios where the target images vary in weather conditions.
arXiv Detail & Related papers (2020-12-10T11:50:59Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework, and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) targets at adapting a model trained over the well-labeled source domain to the unlabeled target domain lying in different distributions.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.