STANLEY: Stochastic Gradient Anisotropic Langevin Dynamics for Learning
Energy-Based Models
- URL: http://arxiv.org/abs/2310.12667v1
- Date: Thu, 19 Oct 2023 11:55:16 GMT
- Title: STANLEY: Stochastic Gradient Anisotropic Langevin Dynamics for Learning
Energy-Based Models
- Authors: Belhal Karimi, Jianwen Xie, Ping Li
- Abstract summary: We present an end-to-end learning algorithm for Energy-Based models (EBMs).
In this paper we propose a novel high-dimensional sampling method based on an anisotropic stepsize and a gradient-informed covariance matrix.
The resulting method, STANLEY, is an optimization algorithm for training Energy-Based models via our newly introduced MCMC method.
- Score: 41.031470884141775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose STANLEY, a STochastic gradient ANisotropic LangEvin
dYnamics, for sampling high-dimensional data. Given the growing efficacy and
potential of Energy-Based modeling, also known as non-normalized probabilistic
modeling, for generative modeling of high-dimensional data of various kinds,
we present an end-to-end learning algorithm for Energy-Based models (EBMs)
aimed at improving the quality of the sampled data points. While the unknown
normalizing constant of EBMs makes the training procedure intractable,
resorting to Markov Chain Monte Carlo (MCMC) is in general a viable option.
Mindful of what MCMC entails for EBM training, we propose a novel
high-dimensional sampling method, based on an anisotropic stepsize and a
gradient-informed covariance matrix, embedded into a discretized Langevin
diffusion. We motivate the need for an anisotropic update of the negative
samples in the Markov Chain by the nonlinearity of the EBM's backbone, here a
Convolutional Neural Network. The resulting method, STANLEY, is an
optimization algorithm for training Energy-Based models via our newly
introduced MCMC method. We provide a theoretical understanding of our sampling
scheme by proving that the sampler yields a geometrically uniformly ergodic
Markov Chain. Several image generation experiments demonstrate the
effectiveness of our method.
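For intuition, here is a minimal sketch of the kind of gradient-informed,
anisotropic Langevin update the abstract describes. The diagonal,
RMSProp-style preconditioner below is an illustrative assumption standing in
for the paper's covariance construction, and the names (energy_net,
anisotropic_langevin_sample) are hypothetical.

```python
import torch

def anisotropic_langevin_sample(energy_net, x, n_steps=60, base_step=1e-2,
                                beta=0.9, eps=1e-5):
    """Short-run Langevin MCMC with a diagonal, gradient-informed
    preconditioner (an illustrative stand-in for an anisotropic
    stepsize/covariance; the paper's exact construction may differ)."""
    x = x.clone().detach().requires_grad_(True)
    v = torch.zeros_like(x)  # running second moment of the energy gradient
    for _ in range(n_steps):
        energy = energy_net(x).sum()
        grad = torch.autograd.grad(energy, x)[0]
        # Gradient-informed diagonal covariance: coordinates with large
        # gradients take smaller steps, flat directions take larger ones.
        v = beta * v + (1.0 - beta) * grad.pow(2)
        step = base_step / (v.sqrt() + eps)  # anisotropic, per-coordinate stepsize
        noise = torch.randn_like(x)
        # Discretized Langevin diffusion with matched noise covariance. A
        # fully correct scheme would also include a drift correction for the
        # position dependence of the preconditioner; omitted for brevity.
        x = x - 0.5 * step * grad + step.sqrt() * noise
        x = x.detach().requires_grad_(True)
    return x.detach()
```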
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524] (2024-09-05)
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
- Learning Energy-Based Prior Model with Diffusion-Amortized MCMC [89.95629196907082] (2023-10-05)
The common practice of learning latent-space EBMs with non-convergent short-run MCMC for prior and posterior sampling hinders further progress of these models.
We introduce a simple but effective diffusion-based amortization method for long-run MCMC sampling and develop a novel learning algorithm for the latent space EBM based on it.
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171] (2023-09-10)
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
- Balanced Training of Energy-Based Models with Adaptive Flow Sampling [13.951904929884618] (2023-06-01)
Energy-based models (EBMs) are versatile density estimation models that directly parameterize an unnormalized log density.
We propose a new maximum likelihood training algorithm for EBMs that uses a different type of generative model, normalizing flows (NFs).
Our method fits an NF to an EBM during training so that an NF-assisted sampling scheme provides an accurate gradient for the EBM at all times.
- GANs and Closures: Micro-Macro Consistency in Multiscale Modeling [0.0] (2022-08-23)
We present an approach that couples physics-based simulations and biasing methods for sampling conditional distributions with Machine Learning-based conditional generative adversarial networks.
We show that this framework can improve the sampling of multiscale SDE dynamical systems and shows promise for systems of increasing complexity.
- Particle Dynamics for Learning EBMs [83.59335980576637] (2021-11-26)
Energy-based modeling is a promising approach to unsupervised learning, which yields many downstream applications from a single model.
The main difficulty in learning energy-based models with the "contrastive approaches" is generating samples from the current energy function at each iteration; a minimal version of this contrastive loop is sketched after this list.
This paper proposes an alternative way to obtain these samples, avoiding crude MCMC sampling from the current model.
- Learning Energy-Based Model with Variational Auto-Encoder as Amortized Sampler [35.80109055748496] (2020-12-29)
Training energy-based models (EBMs) by maximum likelihood requires Markov chain Monte Carlo sampling.
We learn a variational auto-encoder (VAE) to initialize a finite-step MCMC, such as Langevin dynamics derived from the energy function.
With these amortized MCMC samples, the EBM can be trained by maximum likelihood, which follows an "analysis by synthesis" scheme.
We call this joint training algorithm variational MCMC teaching, in which the VAE chases the EBM toward the data distribution.
- No MCMC for me: Amortized sampling for fast and stable training of energy-based models [62.1234885852552] (2020-10-08)
Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty.
We present a simple method for training EBMs at scale using an entropy-regularized generator to amortize the MCMC sampling.
We then apply our estimator to the recently proposed Joint Energy Model (JEM), matching the original performance with faster and more stable training.
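Several entries above revolve around the same contrastive maximum-likelihood
loop (referenced from the Particle Dynamics entry). The log-likelihood
gradient of an EBM, grad_theta log p_theta(x) = -grad_theta E_theta(x) +
E_{x'~p_theta}[grad_theta E_theta(x')], replaces the intractable normalizing
constant with an expectation under the model, estimated from MCMC negatives.
Below is a minimal sketch that reuses the hypothetical
anisotropic_langevin_sample above; the amortized variants in the list differ
mainly in how the negative chains are initialized (a VAE, a generator, or a
diffusion model instead of noise).

```python
import torch

def ebm_training_step(energy_net, optimizer, data_batch):
    """One contrastive maximum-likelihood update for an EBM (a sketch).

    The normalizing constant never appears: its gradient is replaced by an
    expectation under the model, estimated here with short-run MCMC
    negatives initialized from noise.
    """
    init = torch.randn_like(data_batch)
    negatives = anisotropic_langevin_sample(energy_net, init)

    optimizer.zero_grad()
    # Maximizing the log-likelihood pushes energy down on data and up on
    # model samples; the two terms below realize exactly that.
    loss = energy_net(data_batch).mean() - energy_net(negatives).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this contrastive loss is unbounded below, so implementations
typically add an energy-magnitude penalty or gradient clipping to keep
training stable.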
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.