EigenGAN: Layer-Wise Eigen-Learning for GANs
- URL: http://arxiv.org/abs/2104.12476v1
- Date: Mon, 26 Apr 2021 11:14:37 GMT
- Title: EigenGAN: Layer-Wise Eigen-Learning for GANs
- Authors: Zhenliang He, Meina Kan, Shiguang Shan
- Abstract summary: EigenGAN is able to unsupervisedly mine interpretable and controllable dimensions from different generator layers.
By traversing the coefficient of a specific eigen-dimension, the generator can produce samples with continuous changes corresponding to a specific semantic attribute.
- Score: 84.33920839885619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on Generative Adversarial Network (GAN) reveal that different
layers of a generative CNN hold different semantics of the synthesized images.
However, few GAN models have explicit dimensions to control the semantic
attributes represented in a specific layer. This paper proposes EigenGAN which
is able to unsupervisedly mine interpretable and controllable dimensions from
different generator layers. Specifically, EigenGAN embeds one linear subspace
with orthogonal basis into each generator layer. Via the adversarial training
to learn a target distribution, these layer-wise subspaces automatically
discover a set of "eigen-dimensions" at each layer corresponding to a set of
semantic attributes or interpretable variations. By traversing the coefficient
of a specific eigen-dimension, the generator can produce samples with
continuous changes corresponding to a specific semantic attribute. Taking the
human face for example, EigenGAN can discover controllable dimensions for
high-level concepts such as pose and gender in the subspace of deep layers, as
well as low-level concepts such as hue and color in the subspace of shallow
layers. Moreover, under the linear circumstance, we theoretically prove that
our algorithm derives the principal components as PCA does. Code can be found at
https://github.com/LynnHo/EigenGAN-Tensorflow.
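The layer-wise subspace described in the abstract can be sketched in a few lines. The module below is a minimal, hypothetical PyTorch rendering of one such subspace (the official implementation is the TensorFlow repository linked above): `U` holds the basis whose columns become the eigen-dimensions, `L` their importance weights, and `mu` the subspace origin, with a soft penalty pushing `U` toward orthogonality. All names, shapes, and the way the output is injected into a generator layer are illustrative assumptions, not the paper's exact code.

```python
import torch
import torch.nn as nn

class LayerwiseSubspace(nn.Module):
    """One linear subspace with a (softly) orthogonal basis, embedded per generator layer.

    phi = U @ diag(L) @ z + mu, where z ~ N(0, I) are the per-layer
    eigen-coefficients that can be traversed at inference time.
    """
    def __init__(self, feat_dim: int, num_dims: int = 6):
        super().__init__()
        self.U = nn.Parameter(0.02 * torch.randn(feat_dim, num_dims))  # basis (columns = eigen-dimensions)
        self.L = nn.Parameter(torch.ones(num_dims))                    # per-dimension importance (diagonal)
        self.mu = nn.Parameter(torch.zeros(feat_dim))                  # subspace origin

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, num_dims) -> (batch, feat_dim); merged with the layer's features in the generator
        return (z * self.L) @ self.U.t() + self.mu

    def orthogonality_penalty(self) -> torch.Tensor:
        # Regularizer encouraging U^T U = I, so each column acts as a distinct eigen-dimension
        gram = self.U.t() @ self.U
        eye = torch.eye(gram.shape[0], device=self.U.device)
        return ((gram - eye) ** 2).sum()
```

At inference time, traversing one coordinate of `z` while holding the others fixed yields the continuous, attribute-specific changes described in the abstract; under a linear generator, the paper proves that the learned basis recovers the principal components, as PCA does.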
Related papers
- SC2GAN: Rethinking Entanglement by Self-correcting Correlated GAN Space [16.040942072859075]
In GAN latent spaces, editing directions found for one attribute can result in entangled changes with other attributes.
We propose a novel framework, SC$^2$GAN, that achieves disentanglement by re-projecting low-density latent code samples into the original latent space.
arXiv Detail & Related papers (2023-10-10T14:42:32Z)
- Householder Projector for Unsupervised Latent Semantics Discovery [58.92485745195358]
Householder Projector helps StyleGANs to discover more disentangled and precise semantic attributes without sacrificing image fidelity.
We integrate our projector into pre-trained StyleGAN2/StyleGAN3 and evaluate the models on several benchmarks.
arXiv Detail & Related papers (2023-07-16T11:43:04Z)
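The projector named in the entry above builds on Householder reflections, the standard way to parameterize an orthogonal matrix. The sketch below shows only that generic building block (a product of reflections I - 2vv^T/||v||^2), not how the paper integrates it into StyleGAN's layers, so treat the function and its arguments as illustrative.

```python
import numpy as np

def householder_orthogonal(vs: np.ndarray) -> np.ndarray:
    """Build an orthogonal matrix as a product of Householder reflections.

    vs: (k, d) array of nonzero reflection vectors; each contributes
        H_i = I - 2 v_i v_i^T / ||v_i||^2, and the product H_1 ... H_k is orthogonal.
    """
    d = vs.shape[1]
    Q = np.eye(d)
    for v in vs:
        v = v / np.linalg.norm(v)
        Q = Q @ (np.eye(d) - 2.0 * np.outer(v, v))
    return Q  # Q @ Q.T == I up to numerical error
```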
- Analyzing the Latent Space of GAN through Local Dimension Estimation [4.688163910878411]
The success of style-based GANs (StyleGANs) in high-fidelity image synthesis has motivated research to understand the semantic properties of their latent spaces.
We propose a local dimension estimation algorithm for arbitrary intermediate layers in a pre-trained GAN model.
Our proposed metric, called Distortion, measures the inconsistency of the intrinsic space on the learned latent space.
arXiv Detail & Related papers (2022-05-26T06:36:06Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find a low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- Layer-Wise Interpretation of Deep Neural Networks Using Identity Initialization [3.708656266586146]
In this paper, we propose an interpretation method for a deep multilayer perceptron.
The proposed method allows us to analyze the contribution of each neuron to classification and class likelihood in each hidden layer.
arXiv Detail & Related papers (2021-02-26T07:15:41Z)
- The Geometry of Deep Generative Image Models and its Applications [0.0]
Generative adversarial networks (GANs) have emerged as a powerful unsupervised method to model the statistical patterns of real-world data sets.
These networks are trained to map random inputs in their latent space to new samples representative of the learned data.
The structure of the latent space is hard to intuit due to its high dimensionality and the non-linearity of the generator.
arXiv Detail & Related papers (2021-01-15T07:57:33Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
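The closed-form idea in the entry above can be illustrated with a short sketch: under the common reading of weight-decomposition methods, semantic directions are taken as the top eigenvectors of A^T A, where A is the weight of the layer that first transforms the latent code. The function name and the choice of weight matrix below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def closed_form_directions(A: np.ndarray, k: int = 5) -> np.ndarray:
    """Sketch: latent semantic directions from a pre-trained weight matrix.

    A: weight of the first layer acting on the latent code, shape (out_dim, latent_dim).
    Returns k unit-norm latent directions (eigenvectors of A^T A with the largest
    eigenvalues), i.e. the moves in latent space that change the layer output most.
    """
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)            # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]].T     # (k, latent_dim)
    return top / np.linalg.norm(top, axis=1, keepdims=True)

# Editing then amounts to z_edit = z + alpha * directions[i] over a range of alpha.
```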
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.