Cross-Domain Latent Modulation for Variational Transfer Learning
- URL: http://arxiv.org/abs/2012.11727v1
- Date: Mon, 21 Dec 2020 22:45:00 GMT
- Title: Cross-Domain Latent Modulation for Variational Transfer Learning
- Authors: Jinyong Hou, Jeremiah D. Deng, Stephen Cranefield, Xuejie Ding
- Abstract summary: We propose a cross-domain latent modulation mechanism within a variational autoencoder (VAE) framework to enable improved transfer learning.
We apply the proposed model to a number of transfer learning tasks including unsupervised domain adaptation and image-to-image translation.
- Score: 1.9212368803706577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a cross-domain latent modulation mechanism within a
variational autoencoder (VAE) framework to enable improved transfer learning.
Our key idea is to procure deep representations from one data domain and use
them as a perturbation to the reparameterization of the latent variable in the
other domain. Specifically, deep representations of the source and target
domains are first extracted by a unified inference model and aligned by
employing gradient reversal. Second, the learned deep representations are
cross-modulated to the latent encoding of the alternate domain. The consistency
between the reconstruction from the modulated latent encoding and the
generation using deep representation samples is then enforced in order to
produce inter-class alignment in the latent space. We apply the proposed model
to a number of transfer learning tasks, including unsupervised domain
adaptation and image-to-image translation. Experimental results show that our
model gives competitive performance.
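The mechanism described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the network shapes, the deep-representation head, the modulation strength `gamma`, and the additive form of the perturbation are all assumptions for illustration; the paper's actual architecture and modulation details may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only)
d_in, d_z, n = 8, 4, 5

# Shared ("unified") inference weights, applied to both domains.
# In the paper, this encoder would be aligned across domains via a
# gradient reversal layer (which simply flips gradients, g -> -lambda*g,
# so the features become domain-indistinguishable); omitted here.
W_mu = rng.standard_normal((d_in, d_z))
W_logvar = rng.standard_normal((d_in, d_z)) * 0.1
W_h = rng.standard_normal((d_in, d_z))  # deep-representation head (assumed)

def infer(x):
    """Unified inference model: latent Gaussian parameters plus a deep
    representation of the input batch."""
    return x @ W_mu, x @ W_logvar, np.tanh(x @ W_h)

def cross_modulated_z(mu, logvar, h_other, gamma=0.1):
    """Reparameterize, perturbing the noise with the *other* domain's
    deep representation: z = mu + sigma * (eps + gamma * h_other).
    The additive form and gamma are assumptions for illustration."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * (eps + gamma * h_other)

x_src = rng.standard_normal((n, d_in))  # source-domain batch
x_tgt = rng.standard_normal((n, d_in))  # target-domain batch

mu_s, lv_s, h_s = infer(x_src)
mu_t, lv_t, h_t = infer(x_tgt)

# Cross-modulation: each domain's latent code is perturbed by the
# other domain's deep representation before decoding.
z_s = cross_modulated_z(mu_s, lv_s, h_t)
z_t = cross_modulated_z(mu_t, lv_t, h_s)
```

In a full model, `z_s` and `z_t` would feed the decoders, and a consistency loss between the reconstruction from the modulated code and the generation from deep-representation samples would drive the inter-class alignment the abstract describes.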
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z) - In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to keep the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - FAN-Net: Fourier-Based Adaptive Normalization For Cross-Domain Stroke
Lesion Segmentation [17.150527504559594]
We propose a novel FAN-Net, a U-Net-based segmentation network incorporating Fourier-based adaptive normalization (FAN).
The experimental results on the ATLAS dataset, which consists of MR images from 9 sites, show the superior performance of the proposed FAN-Net compared with baseline methods.
arXiv Detail & Related papers (2023-04-23T06:58:21Z) - Domain Generalisation via Domain Adaptation: An Adversarial Fourier
Amplitude Approach [13.642506915023871]
We adversarially synthesise the worst-case target domain and adapt a model to that worst-case domain.
On the DomainBed benchmark, the proposed approach yields significantly improved domain generalisation performance.
arXiv Detail & Related papers (2023-02-23T14:19:07Z) - Adversarial Bi-Regressor Network for Domain Adaptive Regression [52.5168835502987]
It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model.
arXiv Detail & Related papers (2022-09-20T18:38:28Z) - Variational Transfer Learning using Cross-Domain Latent Modulation [1.9662978733004601]
We introduce a novel cross-domain latent modulation mechanism to a variational autoencoder framework so as to achieve effective transfer learning.
Deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal.
The learned deep representations are then cross-modulated to the latent encoding of the alternative domain, where consistency constraints are also applied.
arXiv Detail & Related papers (2022-05-31T03:47:08Z) - Exploring Sequence Feature Alignment for Domain Adaptive Detection
Transformers [141.70707071815653]
We propose a novel Sequence Feature Alignment (SFA) method that is specially designed for the adaptation of detection transformers.
SFA consists of a domain query-based feature alignment (DQFA) module and a token-wise feature alignment (TDA) module.
Experiments on three challenging benchmarks show that SFA outperforms state-of-the-art domain adaptive object detection methods.
arXiv Detail & Related papers (2021-07-27T07:17:12Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Variational Interaction Information Maximization for Cross-domain
Disentanglement [34.08140408283391]
Cross-domain disentanglement is the problem of learning representations partitioned into domain-invariant and domain-specific representations.
We cast the simultaneous learning of domain-invariant and domain-specific representations as a joint objective of multiple information constraints.
We show that our model achieves the state-of-the-art performance in the zero-shot sketch based image retrieval task.
arXiv Detail & Related papers (2020-12-08T07:11:35Z) - Deep Adversarial Transition Learning using Cross-Grafted Generative
Stacks [3.756448228784421]
We present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap.
We construct variational auto-encoders (VAEs) for the two domains, and form bidirectional transitions by cross-grafting the VAEs' decoder stacks.
Generative adversarial networks (GANs) are employed for domain adaptation, mapping the target domain data to the known label space of the source domain.
arXiv Detail & Related papers (2020-09-25T04:25:27Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates the unlabeled target domain relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.