ControlVAE: Tuning, Analytical Properties, and Performance Analysis
- URL: http://arxiv.org/abs/2011.01754v1
- Date: Sat, 31 Oct 2020 12:32:39 GMT
- Title: ControlVAE: Tuning, Analytical Properties, and Performance Analysis
- Authors: Huajie Shao, Zhisheng Xiao, Shuochao Yao, Aston Zhang, Shengzhong Liu
and Tarek Abdelzaher
- Abstract summary: ControlVAE is a new variational autoencoder framework.
It stabilizes the KL-divergence of VAE models to a specified value.
It can achieve a good trade-off between reconstruction quality and KL-divergence.
- Score: 14.272917020105147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper reviews the novel concept of controllable variational autoencoder
(ControlVAE), discusses its parameter tuning to meet application needs, derives
its key analytic properties, and offers useful extensions and applications.
ControlVAE is a new variational autoencoder (VAE) framework that combines
automatic control theory with the basic VAE to stabilize the KL-divergence of
VAE models to a specified value. It leverages a non-linear PI controller, a
variant of the proportional-integral-derivative (PID) control, to dynamically
tune the weight of the KL-divergence term in the evidence lower bound (ELBO)
using the output KL-divergence as feedback. This allows us to precisely control
the KL-divergence to a desired value (set point), which is effective in
avoiding posterior collapse and learning disentangled representations. In order
to improve the ELBO over the regular VAE, we provide a simplified theoretical
analysis to inform the choice of KL-divergence set point for ControlVAE. We
observe that compared to other methods that seek to balance the two terms in
VAE's objective, ControlVAE leads to better learning dynamics. In particular,
it can achieve a good trade-off between reconstruction quality and
KL-divergence. We evaluate the proposed method on three tasks: image
generation, language modeling and disentangled representation learning. The
results show that ControlVAE can achieve much better reconstruction quality
than the other methods for comparable disentanglement. On the language modeling
task, ControlVAE can avoid posterior collapse (KL vanishing) and improve the
diversity of generated text. Moreover, our method can change the optimization
trajectory, improving the ELBO and the reconstruction quality for image
generation.
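The feedback loop described above can be sketched in code. The following is a minimal, illustrative implementation of a nonlinear PI controller for the KL weight: a bounded (sigmoid-shaped) proportional term plus an integral term driven by the error between the observed KL-divergence and its set point. The class name, gains `Kp` and `Ki`, and the bounds `beta_min`/`beta_max` are assumptions for illustration, not the paper's exact formulation or values.

```python
import math

class KLWeightPIController:
    """Sketch of a nonlinear PI controller that tunes the KL weight beta."""

    def __init__(self, kl_setpoint, Kp=0.01, Ki=0.0001,
                 beta_min=0.0, beta_max=1.0):
        self.kl_setpoint = kl_setpoint      # desired KL-divergence value
        self.Kp, self.Ki = Kp, Ki           # proportional and integral gains
        self.beta_min, self.beta_max = beta_min, beta_max
        self.integral = 0.0                 # accumulated integral action

    def step(self, observed_kl):
        # Error is positive when the observed KL is below the set point.
        e = self.kl_setpoint - observed_kl
        # Nonlinear P-term: the sigmoid keeps the proportional action bounded,
        # growing toward Kp when KL overshoots the set point (e very negative).
        p_term = self.Kp / (1.0 + math.exp(e))
        # Integral action accumulates the error to remove steady-state offset;
        # KL above the set point pushes beta up, KL below pushes it down.
        self.integral -= self.Ki * e
        beta = p_term + self.integral + self.beta_min
        # Clamp beta to a valid range before it weights the KL term in the ELBO.
        return min(max(beta, self.beta_min), self.beta_max)
```

In use, the training loop would call `step()` with the measured KL-divergence at each iteration and multiply the KL term of the ELBO by the returned weight, so a collapsing KL (below the set point) drives the weight down and an overshooting KL drives it up.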
Related papers
- How to train your VAE [0.0]
Variational Autoencoders (VAEs) have become a cornerstone in generative modeling and representation learning within machine learning.
This paper explores interpreting the Kullback-Leibler (KL) divergence, a critical component within the Evidence Lower Bound (ELBO).
The proposed method redefines the ELBO with a mixture of Gaussians for the posterior probability, introduces a regularization term, and employs a PatchGAN discriminator to enhance texture realism.
arXiv Detail & Related papers (2023-09-22T19:52:28Z)
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- On Controller Tuning with Time-Varying Bayesian Optimization [74.57758188038375]
We use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments, using appropriate prior knowledge on the control objective and its changes.
We propose a novel TVBO strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes.
Our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
arXiv Detail & Related papers (2022-07-22T14:54:13Z)
- Adaptive Model Predictive Control by Learning Classifiers [26.052368583196426]
We propose an adaptive MPC variant that automatically estimates control and model parameters.
We leverage recent results showing that BO can be formulated as a density ratio estimation.
This is then integrated into a model predictive path integral control framework yielding robust controllers for a variety of challenging robotics tasks.
arXiv Detail & Related papers (2022-03-13T23:22:12Z)
- Is Disentanglement enough? On Latent Representations for Controllable Music Generation [78.8942067357231]
In the absence of a strong generative decoder, disentanglement does not necessarily imply controllability.
The structure of the latent space with respect to the VAE-decoder plays an important role in boosting the ability of a generative model to manipulate different attributes.
arXiv Detail & Related papers (2021-08-01T18:37:43Z)
- DiffLoop: Tuning PID controllers by differentiating through the feedback loop [8.477619837043214]
This paper investigates PID tuning and anti-windup measures.
In particular, we use a cost function and generate gradients to improve controller performance.
arXiv Detail & Related papers (2021-06-19T15:26:46Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that this resulting optimization problem is convex, and we call it Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP)
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- DynamicVAE: Decoupling Reconstruction Error and Disentangled Representation Learning [15.317044259237043]
This paper challenges the common assumption that the weight $\beta$ in $\beta$-VAE should be larger than $1$ in order to effectively disentangle latent factors.
We demonstrate that $\beta$-VAE, with $\beta \leq 1$, can not only attain good disentanglement but also significantly improve reconstruction accuracy via dynamic control.
arXiv Detail & Related papers (2020-09-15T00:01:11Z)
- ControlVAE: Controllable Variational Autoencoder [16.83870832766681]
Variational Autoencoders (VAE) have been widely used in a variety of applications, such as dialog generation, image generation and disentangled representation learning.
ControlVAE combines a controller, inspired by automatic control theory, with the basic VAE to improve the performance of resulting generative models.
arXiv Detail & Related papers (2020-04-13T15:04:56Z)
- Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.