Learning Energy-based Model via Dual-MCMC Teaching
- URL: http://arxiv.org/abs/2312.02469v1
- Date: Tue, 5 Dec 2023 03:39:54 GMT
- Title: Learning Energy-based Model via Dual-MCMC Teaching
- Authors: Jiali Cui, Tian Han
- Abstract summary: This paper studies the fundamental learning problem of the energy-based model (EBM). Learning the EBM can be achieved using maximum likelihood estimation (MLE).
- Score: 5.31573596283377
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper studies the fundamental learning problem of the energy-based model
(EBM). Learning the EBM can be achieved using maximum likelihood estimation (MLE),
which typically involves Markov chain Monte Carlo (MCMC) sampling such as Langevin
dynamics. However, noise-initialized Langevin dynamics can be challenging in practice
and hard to mix. This motivates the exploration of joint training with a generator
model, where the generator serves as a complementary model to bypass MCMC sampling.
However, such a method can be less accurate than MCMC and result in biased EBM
learning. While the generator can also serve as an initializer model for better MCMC
sampling, its learning can be biased since it only matches the EBM and has no access
to the empirical training examples. Such biased generator learning may limit the
potential of learning the EBM. To address this issue, we present a joint learning
framework that interweaves the maximum likelihood learning algorithm for both the EBM
and the complementary generator model. In particular, the generator model is learned
by MLE to match both the EBM and the empirical data distribution, making it a more
informative initializer for MCMC sampling of the EBM. Learning the generator with
observed examples typically requires inference of the generator posterior. To ensure
accurate and efficient inference, we adopt MCMC posterior sampling and introduce a
complementary inference model to initialize this latent MCMC sampling. We show that
the three separate models can be seamlessly integrated into our joint framework
through two MCMC teaching schemes (dual-MCMC teaching), enabling effective and
efficient EBM learning.
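The abstract describes three models trained jointly: the EBM over observed data, a generator that initializes the data-space Langevin chains targeting the EBM, and an inference model that initializes the latent-space Langevin chains targeting the generator posterior, with the MCMC-revised samples in turn teaching the two initializers. The following is a minimal PyTorch sketch of such a loop under simplifying assumptions (small MLPs, a Gaussian observation model for the generator, squared-error teaching losses, illustrative step sizes); it follows the structure stated in the abstract rather than the authors' released implementation.

```python
# Hypothetical sketch of dual-MCMC-teaching-style joint training (PyTorch).
# Architectures, step sizes, chain lengths, and loss forms are illustrative
# assumptions, not the paper's released implementation.
import torch
import torch.nn as nn

x_dim, z_dim = 64, 16

energy    = nn.Sequential(nn.Linear(x_dim, 128), nn.SiLU(), nn.Linear(128, 1))      # EBM energy E_alpha(x)
generator = nn.Sequential(nn.Linear(z_dim, 128), nn.SiLU(), nn.Linear(128, x_dim))  # generator g_theta(z)
inference = nn.Sequential(nn.Linear(x_dim, 128), nn.SiLU(), nn.Linear(128, z_dim))  # inference q_phi(x) -> z init

def langevin_x(x, steps=30, step=0.01):
    """Data-space Langevin targeting p_alpha(x) ∝ exp(-E_alpha(x)), generator-initialized."""
    x = x.detach().clone().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = (x - 0.5 * step ** 2 * grad + step * torch.randn_like(x)).detach().requires_grad_(True)
    return x.detach()

def langevin_z(z, x, sigma=0.3, steps=30, step=0.05):
    """Latent-space Langevin targeting the generator posterior p_theta(z|x), inference-initialized.
    Assumes x | z ~ N(g_theta(z), sigma^2 I) and a standard normal prior on z."""
    z = z.detach().clone().requires_grad_(True)
    for _ in range(steps):
        log_joint = (-((x - generator(z)) ** 2).sum(1) / (2 * sigma ** 2)
                     - 0.5 * (z ** 2).sum(1))
        grad = torch.autograd.grad(log_joint.sum(), z)[0]
        z = (z + 0.5 * step ** 2 * grad + step * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

opt_e = torch.optim.Adam(energy.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_i = torch.optim.Adam(inference.parameters(), lr=1e-4)

def train_step(x):
    # 1) Generator-initialized MCMC in data space: x0 = g(z_prior), revised toward the EBM.
    z_prior = torch.randn(x.size(0), z_dim)
    x_syn = langevin_x(generator(z_prior))

    # 2) Inference-initialized MCMC in latent space: z0 = q(x), revised toward the generator posterior.
    z_post = langevin_z(inference(x), x)

    # 3) EBM update with the usual MLE gradient estimate:
    #    grad of the negative log-likelihood ≈ E_data[grad E] - E_model[grad E].
    loss_e = energy(x).mean() - energy(x_syn).mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # 4) Generator update ("teaching"): chase the EBM-revised samples from the same z_prior
    #    and reconstruct observed data from its revised posterior samples.
    loss_g = ((x_syn - generator(z_prior)) ** 2).mean() + ((x - generator(z_post)) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 5) Inference update ("teaching"): regress onto the MCMC-revised posterior samples.
    loss_i = ((z_post - inference(x)) ** 2).mean()
    opt_i.zero_grad(); loss_i.backward(); opt_i.step()
    return loss_e.item(), loss_g.item(), loss_i.item()

# Toy usage with random stand-in data:
for _ in range(3):
    print(train_step(torch.randn(32, x_dim)))
```

In this sketch the EBM update uses the standard MLE gradient estimate (data energy minus sampled-model energy), while the generator and inference networks are updated by regressing onto the MCMC-revised samples; that regression is the teaching step that keeps both initializers aligned with their respective Markov chains.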
Related papers
- Learning Energy-Based Prior Model with Diffusion-Amortized MCMC [89.95629196907082]
The common practice of learning latent-space EBMs with non-convergent short-run MCMC for prior and posterior sampling hinders the model from further progress.
We introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it.
arXiv Detail & Related papers (2023-10-05T00:23:34Z)
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z)
- Knowledge Removal in Sampling-based Bayesian Inference [86.14397783398711]
When a single data-deletion request comes, companies may need to delete whole models learned with massive resources.
Existing works propose methods to remove knowledge learned from data for explicitly parameterized models.
In this paper, we propose the first machine unlearning algorithm for MCMC.
arXiv Detail & Related papers (2022-03-24T10:03:01Z)
- How to Train Your Energy-Based Models [19.65375049263317]
Energy-Based Models (EBMs) specify probability density or mass functions up to an unknown normalizing constant.
This tutorial is targeted at an audience with basic understanding of generative models who want to apply EBMs or start a research project in this direction.
arXiv Detail & Related papers (2021-01-09T04:51:31Z)
- Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler [35.80109055748496]
Training energy-based models (EBMs) by maximum likelihood requires Markov chain Monte Carlo sampling (the gradient identity behind this requirement is recalled after this list).
We learn a variational auto-encoder (VAE) to initialize the finite-step MCMC, such as Langevin dynamics derived from the energy function.
With these amortized MCMC samples, the EBM can be trained by maximum likelihood, which follows an "analysis by synthesis" scheme.
We call this joint training algorithm the variational MCMC teaching, in which the VAE chases the EBM toward data distribution.
arXiv Detail & Related papers (2020-12-29T20:46:40Z)
- No MCMC for me: Amortized sampling for fast and stable training of energy-based models [62.1234885852552]
Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty.
We present a simple method for training EBMs at scale using an entropy-regularized generator to amortize the MCMC sampling.
Next, we apply our estimator to the recently proposed Joint Energy Model (JEM), where we match the original performance with faster and more stable training.
arXiv Detail & Related papers (2020-10-08T19:17:20Z)
- Learning Latent Space Energy-Based Prior Model [118.86447805707094]
We learn an energy-based model (EBM) in the latent space of a generator model.
We show that the learned model exhibits strong performance in terms of image and text generation and anomaly detection.
arXiv Detail & Related papers (2020-06-15T08:11:58Z)
- MCMC Should Mix: Learning Energy-Based Model with Neural Transport Latent Space MCMC [110.02001052791353]
Learning an energy-based model (EBM) requires MCMC sampling of the learned model as an inner loop of the learning algorithm.
We show that the model has a particularly simple form in the space of the latent variables of the backbone model.
arXiv Detail & Related papers (2020-06-12T01:25:51Z)
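Several entries above, like the abstract itself, note that maximum likelihood training of an EBM requires MCMC sampling from the model. The identity behind this requirement, written here with the (assumed) energy parameterization $p_\alpha(x) = \exp(-E_\alpha(x)) / Z(\alpha)$, is

$$\nabla_\alpha \, \mathbb{E}_{p_{\mathrm{data}}}\!\left[\log p_\alpha(x)\right] \;=\; \mathbb{E}_{p_\alpha}\!\left[\nabla_\alpha E_\alpha(x)\right] \;-\; \mathbb{E}_{p_{\mathrm{data}}}\!\left[\nabla_\alpha E_\alpha(x)\right].$$

The expectation under the data distribution is estimated from training examples, but the expectation under the model has no closed form, so it must be approximated with samples from $p_\alpha$; this is where MCMC such as Langevin dynamics, or an amortized/initializer network as in the works above, enters.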
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.