Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler
- URL: http://arxiv.org/abs/2012.14936v1
- Date: Tue, 29 Dec 2020 20:46:40 GMT
- Title: Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler
- Authors: Jianwen Xie, Zilong Zheng, Ping Li
- Abstract summary: Training energy-based models (EBMs) by maximum likelihood requires Markov chain Monte Carlo sampling.
We learn a variational auto-encoder (VAE) to initialize finite-step MCMC, such as Langevin dynamics derived from the energy function.
With these amortized MCMC samples, the EBM can be trained by maximum likelihood, which follows an "analysis by synthesis" scheme.
We call this joint training algorithm variational MCMC teaching, in which the VAE chases the EBM toward the data distribution.
- Score: 35.80109055748496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the intractable partition function, training energy-based models
(EBMs) by maximum likelihood requires Markov chain Monte Carlo (MCMC) sampling
to approximate the gradient of the Kullback-Leibler divergence between data and
model distributions. However, it is non-trivial to sample from an EBM because
of the difficulty of mixing between modes. In this paper, we propose to learn a
variational auto-encoder (VAE) to initialize finite-step MCMC, such as
Langevin dynamics derived from the energy function, for efficient
amortized sampling of the EBM. With these amortized MCMC samples, the EBM can
be trained by maximum likelihood, which follows an "analysis by synthesis"
scheme; while the variational auto-encoder learns from these MCMC samples via
variational Bayes. We call this joint training algorithm the variational MCMC
teaching, in which the VAE chases the EBM toward the data distribution. We
interpret the learning algorithm as a dynamic alternating projection in the
context of information geometry. Our proposed models can generate samples
comparable in quality to those of GANs and existing EBMs. Additionally, we
demonstrate that our models learn effective probabilistic distributions in
supervised conditional learning experiments.
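To make the training loop concrete, below is a minimal sketch of the variational MCMC teaching scheme on toy 2-D data. It follows the maximum-likelihood gradient grad_theta log p_theta(x) ≈ E_data[grad_theta f_theta] - E_model[grad_theta f_theta], with model samples obtained by VAE-initialized short-run Langevin dynamics, after which the VAE is refit to those samples via variational Bayes. The module names (EnergyNet, VAE, langevin) and all hyperparameters are illustrative assumptions for this sketch, not the authors' released code.

```python
# Minimal sketch of variational MCMC teaching on toy 2-D data (PyTorch).
# All names and hyperparameters are illustrative assumptions, not the
# authors' released implementation.
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Scalar score f_theta(x); the EBM density is p_theta(x) ~ exp(f_theta(x))."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(),
                                 nn.Linear(64, 64), nn.SiLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

class VAE(nn.Module):
    """Small Gaussian VAE acting as the amortized sampler / MCMC initializer."""
    def __init__(self, dim=2, zdim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(), nn.Linear(64, 2 * zdim))
        self.dec = nn.Sequential(nn.Linear(zdim, 64), nn.SiLU(), nn.Linear(64, dim))
        self.zdim = zdim

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def sample(self, n):
        return self.dec(torch.randn(n, self.zdim))

def langevin(f, x, steps=20, step_size=0.01):
    """Finite-step Langevin dynamics derived from the energy: ascend f_theta."""
    x = x.detach().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(f(x).sum(), x)[0]
        x = (x + 0.5 * step_size * grad
             + step_size ** 0.5 * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

f, vae = EnergyNet(), VAE()
opt_f = torch.optim.Adam(f.parameters(), lr=1e-4)
opt_v = torch.optim.Adam(vae.parameters(), lr=1e-4)

for _ in range(200):
    x_data = torch.randn(128, 2) * 0.5 + torch.tensor([2.0, 0.0])  # toy data
    # 1) The VAE proposes initial samples; short-run Langevin refines them.
    x_model = langevin(f, vae.sample(128))
    # 2) EBM step: descend the negative MLE objective -(E_data[f] - E_model[f]),
    #    with the model expectation estimated from the amortized MCMC samples.
    loss_f = f(x_model).mean() - f(x_data).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    # 3) VAE step: variational Bayes fit to the refined samples, so the
    #    amortized sampler "chases" the current EBM.
    mu, logvar = vae.encode(x_model)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    recon = vae.dec(z)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    loss_v = ((recon - x_model) ** 2).sum(-1).mean() + kl
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()
```

Initializing Langevin from the VAE rather than from noise is what amortizes the sampling: as the VAE improves, fewer MCMC steps are needed for the chains to reach the EBM's modes.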
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Generalized Contrastive Divergence: Joint Training of Energy-Based Model and Diffusion Model through Inverse Reinforcement Learning [13.22531381403974]
Generalized Contrastive Divergence (GCD) is a novel objective function for training an energy-based model (EBM) and a sampler simultaneously.
We present preliminary yet promising results showing that joint training is beneficial for both the EBM and the diffusion model.
arXiv Detail & Related papers (2023-12-06T10:10:21Z)
- Learning Energy-based Model via Dual-MCMC Teaching [5.31573596283377]
This paper studies the fundamental learning problem of the energy-based model (EBM).
Learning the EBM can be achieved via maximum likelihood estimation (MLE).
arXiv Detail & Related papers (2023-12-05T03:39:54Z)
- STANLEY: Stochastic Gradient Anisotropic Langevin Dynamics for Learning Energy-Based Models [41.031470884141775]
We present an end-to-end learning algorithm for energy-based models (EBMs).
We propose a novel high-dimensional sampling method based on an anisotropic stepsize and a gradient-informed covariance matrix.
The resulting method, STANLEY, is an optimization algorithm for training energy-based models via this newly introduced MCMC method.
arXiv Detail & Related papers (2023-10-19T11:55:16Z)
- Learning Energy-Based Prior Model with Diffusion-Amortized MCMC [89.95629196907082]
The common practice of learning latent-space EBMs with non-convergent short-run MCMC for prior and posterior sampling hinders the model from further progress.
We introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it.
arXiv Detail & Related papers (2023-10-05T00:23:34Z)
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z)
- Balanced Training of Energy-Based Models with Adaptive Flow Sampling [13.951904929884618]
Energy-based models (EBMs) are versatile density estimation models that directly parameterize an unnormalized log density.
We propose a new maximum likelihood training algorithm for EBMs that uses a different type of generative model, normalizing flows (NFs).
Our method fits an NF to an EBM during training so that an NF-assisted sampling scheme provides an accurate gradient for the EBM at all times.
arXiv Detail & Related papers (2023-06-01T13:58:06Z)
- Particle Dynamics for Learning EBMs [83.59335980576637]
Energy-based modeling is a promising approach to unsupervised learning, which yields many downstream applications from a single model.
The main difficulty in learning energy-based models with the "contrastive approaches" is the generation of samples from the current energy function at each iteration.
This paper proposes an alternative approach to obtaining these samples, avoiding crude MCMC sampling from the current model.
arXiv Detail & Related papers (2021-11-26T23:41:07Z)
- No MCMC for me: Amortized sampling for fast and stable training of energy-based models [62.1234885852552]
Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty.
We present a simple method for training EBMs at scale using an entropy-regularized generator to amortize the MCMC sampling.
We apply our estimator to the recently proposed Joint Energy Model (JEM), matching the original performance with faster and more stable training.
arXiv Detail & Related papers (2020-10-08T19:17:20Z)
- MCMC Should Mix: Learning Energy-Based Model with Neural Transport Latent Space MCMC [110.02001052791353]
Learning an energy-based model (EBM) requires MCMC sampling of the learned model as an inner loop of the learning algorithm.
We show that the model has a particularly simple form in the space of the latent variables of the backbone model.
arXiv Detail & Related papers (2020-06-12T01:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.