RoutingGAN: Routing Age Progression and Regression with Disentangled
Learning
- URL: http://arxiv.org/abs/2102.00601v1
- Date: Mon, 1 Feb 2021 02:57:32 GMT
- Title: RoutingGAN: Routing Age Progression and Regression with Disentangled
Learning
- Authors: Zhizhong Huang and Junping Zhang and Hongming Shan
- Abstract summary: This paper introduces a dropout-like method based on GAN (RoutingGAN) to route different effects in a high-level semantic feature space.
We first disentangle the age-invariant features from the input face, and then gradually add the effects to the features by residual routers.
Experimental results on two benchmarked datasets demonstrate superior performance over existing methods both qualitatively and quantitatively.
- Score: 20.579282497730944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although impressive results have been achieved for age progression and
regression, there remain two major issues in generative adversarial networks
(GANs)-based methods: 1) conditional GANs (cGANs)-based methods can learn
various effects between any two age groups in a single model, but are
insufficient to characterize some specific patterns due to completely shared
convolution filters; and 2) GANs-based methods can, by utilizing several
models to learn effects independently, learn some specific patterns; however,
they are cumbersome and require age labels in advance. To address these
deficiencies and have the best of both worlds, this paper introduces a
dropout-like method based on GAN (RoutingGAN) to route different effects in a
high-level semantic feature space. Specifically, we first disentangle the
age-invariant features from the input face, and then gradually add the effects
to the features by residual routers that assign the convolution filters to
different age groups by dropping out the outputs of others. As a result, the
proposed RoutingGAN can simultaneously learn various effects in a single model,
with convolution filters being shared in part to learn some specific effects.
Experimental results on two benchmarked datasets demonstrate superior
performance over existing methods both qualitatively and quantitatively.
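The routing mechanism described in the abstract can be sketched as follows. This is a simplified, non-convolutional illustration, not the authors' implementation: the `residual_router` function, the per-group filter banks, and the hard 0/1 dropout mask are all assumptions made for clarity. Each age group owns one bank of filters; the outputs of every other bank are dropped (zeroed), so only the selected group's effect is added residually to the age-invariant features.

```python
import numpy as np

def residual_router(features, filter_banks, age_group):
    """Dropout-like residual router (illustrative sketch).

    features:     1-D array of age-invariant features.
    filter_banks: one weight matrix per age group (stand-ins for
                  convolution filters in the actual model).
    age_group:    index of the target age group.
    """
    # Candidate age effects, one per group's filter bank.
    effects = [bank @ features for bank in filter_banks]
    # Dropout-like routing: keep only the selected group's output.
    mask = [1.0 if g == age_group else 0.0 for g in range(len(filter_banks))]
    routed = sum(m * e for m, e in zip(mask, effects))
    # Residual connection: gradually add the effect to the features.
    return features + routed

# Toy usage with deterministic filter banks.
feats = np.ones(4)
banks = [g * np.eye(4) for g in range(3)]  # group g scales features by g
aged = residual_router(feats, banks, age_group=2)
```

In the full model, the banks would be convolution filters partially shared across groups; the hard mask here is the limiting case of dropping all other groups' outputs.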
Related papers
- A Differentiable Partially Observable Generalized Linear Model with
Forward-Backward Message Passing [2.600709013150986]
We propose a new differentiable POGLM, which enables the pathwise gradient estimator, which performs better than the score-function gradient estimator used in existing works.
Our new method yields more interpretable parameters, underscoring its significance in neuroscience.
arXiv Detail & Related papers (2024-02-02T09:34:49Z) - Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for combining the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z) - DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z) - FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity
in Data-Efficient GANs [24.18718734850797]
Data-Efficient GANs (DE-GANs) aim to learn generative models with a limited amount of training data.
Contrastive learning has shown great potential for increasing the synthesis quality of DE-GANs.
We propose FakeCLR, which only applies contrastive learning on fake samples.
arXiv Detail & Related papers (2022-07-18T14:23:38Z) - Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z) - Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression [12.415463205960156]
We introduce Batch Inverse-Variance (BIV), a loss function which is robust to near-ground-truth samples and allows control of the effective learning rate.
Our experimental results show that BIV significantly improves the performance of the networks on two noisy datasets.
arXiv Detail & Related papers (2021-07-09T15:39:31Z) - Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize the seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z) - AgeFlow: Conditional Age Progression and Regression with Normalizing
Flows [19.45760984401544]
Age progression and regression aim to synthesize photorealistic appearance of a given face image with aging and rejuvenation effects, respectively.
Existing generative adversarial networks (GANs) based methods suffer from the following three major issues.
This paper proposes a novel framework, termed AgeFlow, to integrate the advantages of both flow-based models and GANs.
arXiv Detail & Related papers (2021-05-15T15:02:07Z) - Rethinking conditional GAN training: An approach using geometrically
structured latent manifolds [58.07468272236356]
Conditional GANs (cGAN) suffer from critical drawbacks such as the lack of diversity in generated outputs.
We propose a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN.
arXiv Detail & Related papers (2020-11-25T22:54:11Z) - Recent Developments Combining Ensemble Smoother and Deep Generative
Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoder networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv Detail & Related papers (2020-05-08T21:32:42Z) - Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.