Distilling Representations from GAN Generator via Squeeze and Span
- URL: http://arxiv.org/abs/2211.03000v1
- Date: Sun, 6 Nov 2022 01:10:28 GMT
- Title: Distilling Representations from GAN Generator via Squeeze and Span
- Authors: Yu Yang, Xiaotian Cheng, Chang Liu, Hakan Bilen, Xiangyang Ji
- Abstract summary: We propose to distill knowledge from GAN generators by squeezing and spanning their representations.
We span the distilled representation of the synthetic domain to the real domain by also using real training data to remedy the mode collapse of GANs.
- Score: 55.76208869775715
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, generative adversarial networks (GANs) have been an actively
studied topic and shown to successfully produce high-quality realistic images
in various domains. The controllable synthesis ability of GAN generators
suggests that they maintain informative, disentangled, and explainable image
representations, but leveraging and transferring their representations to
downstream tasks is largely unexplored. In this paper, we propose to distill
knowledge from GAN generators by squeezing and spanning their representations.
We squeeze the generator features into representations that are invariant to
semantic-preserving transformations through a network before they are distilled
into the student network. We span the distilled representation of the synthetic
domain to the real domain by also using real training data to remedy the mode
collapse of GANs and boost the student network's performance in the real domain.
Experiments justify the efficacy of our method and reveal its great
significance in self-supervised representation learning. Code is available at
https://github.com/yangyu12/squeeze-and-span.
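For illustration, here is a minimal PyTorch-style sketch of the squeeze-and-span idea described in the abstract. The generator interface G(z) -> (fake_images, features), the Squeezer module, the student encoder, and the simple cosine losses are assumptions made for exposition; they are not the authors' released implementation, which is available at the repository above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Squeezer(nn.Module):
    """Projects (pooled) generator features to a compact, L2-normalized
    representation that serves as the distillation target."""
    def __init__(self, in_dim, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, out_dim),
        )

    def forward(self, feats):
        return F.normalize(self.net(feats), dim=-1)

def squeeze_and_span_step(G, squeezer, student, real_images, augment,
                          z_dim=512, device="cpu"):
    """One training step: a synthetic-domain distillation loss ("squeeze")
    plus a real-domain consistency loss ("span")."""
    batch = real_images.size(0)

    # Squeeze: the student, fed an augmented synthetic image, is pulled toward
    # the squeezed generator features of the same image, which encourages
    # invariance to the semantic-preserving augmentation.
    z = torch.randn(batch, z_dim, device=device)
    with torch.no_grad():
        fake_images, gen_feats = G(z)            # assumed generator interface
    teacher_repr = squeezer(gen_feats)           # (batch, out_dim)
    student_repr = F.normalize(student(augment(fake_images)), dim=-1)
    loss_squeeze = 1.0 - F.cosine_similarity(student_repr, teacher_repr, dim=-1).mean()

    # Span: real images carry no generator features, so the student is asked
    # to produce consistent representations for two augmented views of the
    # same real image, extending the representation to the real domain.
    view_a = F.normalize(student(augment(real_images)), dim=-1)
    view_b = F.normalize(student(augment(real_images)), dim=-1)
    loss_span = 1.0 - F.cosine_similarity(view_a, view_b.detach(), dim=-1).mean()

    return loss_squeeze + loss_span
```

In this sketch the squeeze loss ties the student to a squeezed view of the generator's internal features on synthetic images, while the span loss uses only real images so that the learned representation is not restricted to the modes the GAN can actually synthesize.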
Related papers
- U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation [48.40120035775506]
Kolmogorov-Arnold Networks (KANs) reshape neural network learning through stacks of non-linear, learnable activation functions.
We investigate, modify, and re-design the established U-Net pipeline by integrating dedicated KAN layers into the tokenized intermediate representation, termed U-KAN.
We further delve into the potential of U-KAN as an alternative U-Net noise predictor in diffusion models, demonstrating its applicability in generating task-oriented model architectures.
arXiv Detail & Related papers (2024-06-05T04:13:03Z)
- Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training [20.03447539784024]
We propose a novel approach for training GANs with images as inputs, but without enforcing any pairwise constraints.
The process can be made efficient by identifying closely related datasets, or a "friendly neighborhood" of the target distribution.
We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets.
arXiv Detail & Related papers (2023-05-12T17:03:18Z)
- GH-Feat: Learning Versatile Generative Hierarchical Features from GANs [61.208757845344074]
We show that a generative feature learned from image synthesis exhibits great potential for solving a wide range of computer vision tasks.
We first train an encoder by considering the pretrained StyleGAN generator as a learned loss function.
The visual features produced by our encoder, termed Generative Hierarchical Features (GH-Feat), align closely with the layer-wise GAN representations.
arXiv Detail & Related papers (2023-01-12T21:59:46Z)
- Extracting Semantic Knowledge from GANs with Unsupervised Learning [65.32631025780631]
Generative Adversarial Networks (GANs) encode semantics in their feature maps in a linearly separable form.
We propose a novel clustering algorithm, named KLiSH, which leverages this linear separability to cluster a GAN's features (a simplified clustering sketch is given after this list).
KLiSH succeeds in extracting fine-grained semantics from GANs trained on datasets of various objects.
arXiv Detail & Related papers (2022-11-30T03:18:16Z)
- Gated SwitchGAN for multi-domain facial image translation [12.501699058042439]
We propose a switch generative adversarial network (SwitchGAN) with a more adaptive discriminator structure and a matched generator to perform delicate image translation.
A feature-switching operation is proposed to achieve feature selection and fusion in our conditional modules.
Experiments on the Morph, RaFD and CelebA databases visually and quantitatively show that our extended SwitchGAN can achieve better translation results than StarGAN, AttGAN and STGAN.
arXiv Detail & Related papers (2021-11-28T10:24:43Z)
- Towards Discovery and Attribution of Open-world GAN Generated Images [18.10496076534083]
We present an iterative algorithm for discovering images generated from previously unseen GANs.
Our algorithm consists of multiple components including network training, out-of-distribution detection, clustering, merge and refine steps.
Our experiments demonstrate the effectiveness of our approach in discovering new GANs and show that it can be used in an open-world setup.
arXiv Detail & Related papers (2021-05-10T18:00:13Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns disentangled representation using VAE-based models, and distills the learned representation with an additional nuisance variable to the separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, we show that the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods while using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
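As a companion to the KLiSH entry above, the following simplified sketch clusters per-pixel generator features with plain k-means to obtain rough semantic masks. It is only a stand-in for KLiSH, which uses its own linear-separability-based clustering objective; the generator interface G(z) -> (image, feature_map) is an assumption.

```python
import torch
from sklearn.cluster import KMeans

def cluster_generator_features(G, n_images=8, n_clusters=10, z_dim=512, device="cpu"):
    """Cluster per-pixel generator features across a few synthetic images and
    return an (n_images, H, W) map of cluster ids, i.e., rough semantic masks."""
    pixel_feats = []
    with torch.no_grad():
        for _ in range(n_images):
            z = torch.randn(1, z_dim, device=device)
            _, fmap = G(z)                               # assumed: fmap is (1, C, H, W)
            c, h, w = fmap.shape[1:]
            pixel_feats.append(fmap[0].permute(1, 2, 0).reshape(-1, c))
    pixel_feats = torch.cat(pixel_feats).cpu().numpy()   # (n_images * H * W, C)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixel_feats)
    return labels.reshape(n_images, h, w)
```

Because the feature vectors of pixels belonging to the same semantic class tend to be linearly separable from the rest, even this naive per-pixel clustering can surface coherent regions; KLiSH refines this idea with a dedicated clustering criterion.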
This list is automatically generated from the titles and abstracts of the papers in this site.