Generalized Adversarially Learned Inference
- URL: http://arxiv.org/abs/2006.08089v3
- Date: Mon, 21 Dec 2020 15:34:08 GMT
- Title: Generalized Adversarially Learned Inference
- Authors: Yatin Dandi, Homanga Bharadhwaj, Abhishek Kumar, Piyush Rai
- Abstract summary: Recent approaches such as ALI and BiGAN infer latent variables in GANs by adversarially training an image generator along with an encoder to match two joint distributions of image and latent-vector pairs.
This work generalizes those approaches by incorporating multiple layers of feedback on reconstructions, self-supervision, and other forms of supervision based on prior or learned knowledge about the desired solutions.
- Score: 42.40405470084505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Allowing effective inference of latent vectors while training GANs can
greatly increase their applicability in various downstream tasks. Recent
approaches, such as ALI and BiGAN frameworks, develop methods of inference of
latent variables in GANs by adversarially training an image generator along
with an encoder to match two joint distributions of image and latent vector
pairs. We generalize these approaches to incorporate multiple layers of
feedback on reconstructions, self-supervision, and other forms of supervision
based on prior or learned knowledge about the desired solutions. We achieve
this by modifying the discriminator's objective to correctly identify more than
two joint distributions of tuples of an arbitrary number of random variables
consisting of images, latent vectors, and other variables generated through
auxiliary tasks, such as reconstruction and inpainting or as outputs of
suitable pre-trained models. We design a non-saturating maximization objective
for the generator-encoder pair and prove that the resulting adversarial game
corresponds to a global optimum that simultaneously matches all the
distributions. Within our proposed framework, we introduce a novel set of
techniques for providing self-supervised feedback to the model based on
properties, such as patch-level correspondence and cycle consistency of
reconstructions. Through comprehensive experiments, we demonstrate the
efficacy, scalability, and flexibility of the proposed approach for a variety
of tasks.
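To make the abstract's construction concrete, here is a minimal, illustrative PyTorch sketch of the two core ideas: a discriminator trained to classify which of k joint distributions a tuple was drawn from, and a non-saturating objective for the generator-encoder pair. This is a simplified reading of the abstract, not the authors' code; the module and variable names (disc, gen, enc) and the exact form of the non-saturating loss are assumptions.

```python
# Illustrative sketch only: names and the exact loss form are assumptions
# based on the abstract, not the paper's released implementation.
import torch
import torch.nn.functional as F

def discriminator_loss(disc, tuples):
    """k-way generalization of the real/fake game. `tuples[i]` is a batch
    of (image, latent) pairs drawn from the i-th joint distribution; the
    discriminator must identify which distribution each pair came from."""
    loss = 0.0
    for label, (x, z) in enumerate(tuples):
        logits = disc(x, z)  # shape: (batch, k)
        targets = torch.full((x.size(0),), label,
                             dtype=torch.long, device=logits.device)
        loss = loss + F.cross_entropy(logits, targets)
    return loss / len(tuples)

def generator_encoder_loss(disc, tuples):
    """Non-saturating objective for the generator-encoder pair: instead of
    minimizing the log-probability of each tuple's true class (which
    saturates once the discriminator is confident), maximize the
    log-probability assigned to the other classes."""
    k, loss = len(tuples), 0.0
    for label, (x, z) in enumerate(tuples):
        log_probs = F.log_softmax(disc(x, z), dim=1)  # shape: (batch, k)
        others = [c for c in range(k) if c != label]
        loss = loss - log_probs[:, others].mean()
    return loss / k

# One plausible tuple construction with k = 3 joint distributions,
# reflecting the reconstruction feedback mentioned in the abstract:
#   (x, enc(x))            - data image paired with its inferred latent
#   (gen(z), z)            - generated image paired with its sampled latent
#   (gen(enc(x)), enc(x))  - reconstruction paired with the inferred latent
```

Extending the game with further tuple types (inpainted images, patch-level crops, outputs of pre-trained models) only increases k; the two losses above are otherwise unchanged, which is what gives the framework its flexibility.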
Related papers
- Disentanglement with Factor Quantized Variational Autoencoders [11.086500036180222]
We propose a discrete variational autoencoder (VAE) based model in which ground-truth information about the generative factors is not provided to the model.
We demonstrate the advantages of learning discrete representations over learning continuous representations in facilitating disentanglement.
Our method called FactorQVAE is the first method that combines optimization based disentanglement approaches with discrete representation learning.
arXiv Detail & Related papers (2024-09-23T09:33:53Z)
- Bridging Multicalibration and Out-of-distribution Generalization Beyond Covariate Shift [44.708914058803224]
We establish a new model-agnostic optimization framework for out-of-distribution generalization via multicalibration.
We propose MC-Pseudolabel, a post-processing algorithm to achieve both extended multicalibration and out-of-distribution generalization.
arXiv Detail & Related papers (2024-06-02T08:11:35Z)
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that, even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- Parameter Decoupling Strategy for Semi-supervised 3D Left Atrium Segmentation [0.0]
We present a novel semi-supervised segmentation model based on parameter decoupling strategy to encourage consistent predictions from diverse views.
Our method achieves a competitive result compared with state-of-the-art semi-supervised methods on the Atrial Challenge dataset.
arXiv Detail & Related papers (2021-09-20T14:51:42Z)
- InfoVAEGAN: learning joint interpretable representations by information maximization and maximum likelihood [15.350366047108103]
We propose a representation learning algorithm which combines the inference abilities of Variational Autoencoders (VAEs) with the generative capabilities of Generative Adversarial Networks (GANs).
The proposed model, called InfoVAEGAN, consists of three networks: an Encoder, a Generator, and a Discriminator.
arXiv Detail & Related papers (2021-07-09T22:38:10Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained to synthesize images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
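Since the closed-form factorization in the entry above is unusually compact, a short sketch may help. Under the common reading of the method (semantic directions as the top eigenvectors of W^T W, where W is the generator's first weight matrix applied to the latent code), it reduces to a few lines of NumPy; the function name and argument conventions here are assumptions, not the authors' released code.

```python
import numpy as np

def closed_form_directions(W, num_directions=5):
    """W: (out_dim, latent_dim) weight of the generator's first layer
    acting on the latent code. The latent directions this layer amplifies
    most are the top eigenvectors of W^T W; they serve as candidate
    semantic directions with no training or sampling required."""
    eigvals, eigvecs = np.linalg.eigh(W.T @ W)   # W^T W is symmetric
    order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
    return eigvecs[:, order[:num_directions]].T  # (num_directions, latent_dim)
```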