Unbiased Learning of Deep Generative Models with Structured Discrete Representations
- URL: http://arxiv.org/abs/2306.08230v2
- Date: Tue, 14 Nov 2023 06:52:21 GMT
- Title: Unbiased Learning of Deep Generative Models with Structured Discrete Representations
- Authors: Harry Bendekgey, Gabriel Hope and Erik B. Sudderth
- Abstract summary: We propose novel algorithms for learning structured variational autoencoders (SVAEs).
We are the first to demonstrate the SVAE's ability to handle multimodal uncertainty when data is missing by incorporating discrete latent variables.
Our memory-efficient implicit differentiation scheme makes the SVAE tractable to learn via gradient descent, while demonstrating robustness to incomplete optimization.
- Score: 7.9057320008285945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By composing graphical models with deep learning architectures, we learn
generative models with the strengths of both frameworks. The structured
variational autoencoder (SVAE) inherits structure and interpretability from
graphical models, and flexible likelihoods for high-dimensional data from deep
learning, but poses substantial optimization challenges. We propose novel
algorithms for learning SVAEs, and are the first to demonstrate the SVAE's
ability to handle multimodal uncertainty when data is missing by incorporating
discrete latent variables. Our memory-efficient implicit differentiation scheme
makes the SVAE tractable to learn via gradient descent, while demonstrating
robustness to incomplete optimization. To more rapidly learn accurate graphical
model parameters, we derive a method for computing natural gradients without
manual derivations, which avoids biases found in prior work. These optimization
innovations enable the first comparisons of the SVAE to state-of-the-art time
series models, where the SVAE performs competitively while learning
interpretable and structured discrete data representations.
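As a rough illustration of the memory-efficient implicit differentiation idea (a minimal sketch of the generic fixed-point technique, not the authors' implementation; `f`, `z0`, and the iteration counts are placeholder assumptions):

```python
import torch

def solve_fixed_point(f, z0, n_iters=200):
    # Run the inner optimization z <- f(z) with gradients disabled, so
    # memory stays constant no matter how many iterations are taken.
    z = z0
    with torch.no_grad():
        for _ in range(n_iters):
            z = f(z)
    return z

def grad_through_fixed_point(f, z_star, grad_out, n_terms=20):
    # At a fixed point z* = f(z*), the implicit function theorem gives the
    # backward pass as (I - df/dz)^{-T} grad_out.  A truncated Neumann
    # series of vector-Jacobian products approximates that linear solve;
    # truncation is one source of robustness to incomplete optimization.
    z = z_star.detach().requires_grad_(True)
    fz = f(z)
    v, total = grad_out, grad_out
    for _ in range(n_terms):
        (v,) = torch.autograd.grad(fz, z, v, retain_graph=True)
        total = total + v
    return total
```

The claim about natural gradients without manual derivations also has a standard exponential-family reading: the ordinary gradient of an objective with respect to the mean parameters equals the natural (Fisher-preconditioned) gradient with respect to the natural parameters. A toy 1-D Gaussian illustration of that identity (my own example, not the paper's code):

```python
# Mean parameters of a 1-D Gaussian: mu = (E[x], E[x^2]).
def kl_to_standard_normal(mu):
    m, ex2 = mu[0], mu[1]
    var = ex2 - m ** 2
    return 0.5 * (var + m ** 2 - torch.log(var) - 1.0)

mean_params = torch.tensor([0.5, 1.25], requires_grad=True)  # m=0.5, var=1.0
loss = kl_to_standard_normal(mean_params)
# This gradient *is* the natural gradient with respect to the natural
# parameters -- no Fisher matrix was derived or inverted by hand.
(natural_grad,) = torch.autograd.grad(loss, mean_params)
```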
Related papers
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Revisiting Structured Variational Autoencoders [11.998116457994994]
Structured variational autoencoders (SVAEs) combine probabilistic graphical model priors on latent variables, deep neural networks to link latent variables to observed data, and structure-exploiting algorithms for approximate posterior inference.
Despite their conceptual elegance, SVAEs have proven difficult to implement, and more general approaches have been favored in practice.
Here, we revisit SVAEs using modern machine learning tools and demonstrate their advantages over more general alternatives in terms of both accuracy and efficiency.
arXiv Detail & Related papers (2023-05-25T23:51:18Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
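The UCB ingredient can be sketched generically: with a learned embedding phi(s, a), an optimism bonus scales with how novel the embedding is under the data covariance. A hedged sketch (a standard linear-features UCB bonus, not the paper's exact algorithm; `phi` is a hypothetical embedding):

```python
import numpy as np

def ucb_bonus(phi_sa, Sigma_inv, beta=1.0):
    # Optimism bonus for a state-action embedding phi(s, a):
    # beta * sqrt(phi^T Sigma^{-1} phi), large where data is scarce.
    return beta * np.sqrt(phi_sa @ Sigma_inv @ phi_sa)

d = 8
Sigma = np.eye(d)                       # ridge-regularized feature covariance
for phi in np.random.randn(100, d):    # embeddings seen so far
    Sigma += np.outer(phi, phi)
bonus = ucb_bonus(np.random.randn(d), np.linalg.inv(Sigma))
```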
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Learning Time-Varying Graphs from Online Data [39.21234914444073]
This work proposes an algorithmic framework to learn time-varying graphs from online data.
The framework is model-independent, i.e., it can be analyzed theoretically in its abstract formulation.
We specialize the framework to three well-known graph learning models, namely the Gaussian graphical model (GGM), the structural equation model (SEM), and the smoothness-based model (SBM); a schematic online update for the smoothness-based case is sketched below.
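A caricature of the online, smoothness-based case using a plain projected gradient step (the paper's iterations, losses, and projections differ):

```python
import numpy as np

def online_graph_step(W, x_t, step=0.01, alpha=1.0):
    # One streaming update of a symmetric, nonnegative adjacency W from a
    # new signal x_t: smoothness pushes weight off edges whose endpoints
    # disagree, while a log-degree barrier keeps nodes connected.
    D = (x_t[:, None] - x_t[None, :]) ** 2        # pairwise dissimilarity
    deg = W.sum(axis=1, keepdims=True) + 1e-8
    grad = D - alpha / deg                        # smoothness + barrier grad
    W = np.maximum(W - step * (grad + grad.T) / 2, 0.0)
    np.fill_diagonal(W, 0.0)                      # no self-loops
    return W

W = np.zeros((5, 5))
for _ in range(200):                              # stream of graph signals
    W = online_graph_step(W, np.random.randn(5))
```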
arXiv Detail & Related papers (2021-10-21T09:46:44Z)
- LocalGLMnet: interpretable deep learning for tabular data [0.0]
We propose a new network architecture that shares similar features as generalized linear models.
Our approach provides an additive decomposition in the spirit of Shapley values and integrated gradients.
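In caricature, the architecture is a GLM whose coefficients are themselves functions of the input, so every prediction decomposes into per-feature contributions. A hedged PyTorch sketch (layer sizes and names are my own choices, not the paper's code):

```python
import torch
import torch.nn as nn

class LocalGLMSketch(nn.Module):
    # A GLM with input-dependent coefficients: yhat = b0 + <beta(x), x>.
    # Each term beta_j(x) * x_j is an additive, per-feature contribution
    # that can be read off directly for interpretation.
    def __init__(self, d, hidden=64):
        super().__init__()
        self.beta = nn.Sequential(
            nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, d)
        )
        self.b0 = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        contributions = self.beta(x) * x        # per-feature terms
        return self.b0 + contributions.sum(-1), contributions

model = LocalGLMSketch(d=5)
yhat, attributions = model(torch.randn(3, 5))
```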
arXiv Detail & Related papers (2021-07-23T07:38:33Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
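The closed-form idea can be sketched as replacing a numerical ODE solve with an explicit, time-dependent gate that blends two network heads in a single forward pass. A simplified sketch of that gating structure (hedged; the paper's exact parameterization differs):

```python
import torch
import torch.nn as nn

class CfCGateSketch(nn.Module):
    # Instead of integrating hidden dynamics with a numerical solver, use
    # a gate that depends explicitly on elapsed time t to interpolate two
    # heads -- one cheap forward pass per step.
    def __init__(self, d, hidden=32):
        super().__init__()
        def head():
            return nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, d))
        self.f, self.g, self.h = head(), head(), head()

    def forward(self, x, t):
        gate = torch.sigmoid(-self.f(x) * t)    # explicit time dependence
        return gate * self.g(x) + (1 - gate) * self.h(x)

out = CfCGateSketch(d=4)(torch.randn(2, 4), t=0.5)
```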
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
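Global magnitude pruning is one standard way to realize such parameter reductions (a generic sketch; the paper's pruning criterion and schedule may differ):

```python
import torch

def magnitude_prune_(params, sparsity=0.98):
    # Zero the smallest-magnitude weights globally, keeping only the top
    # (1 - sparsity) fraction -- e.g. sparsity=0.98 removes 98% of weights.
    flat = torch.cat([p.detach().abs().flatten() for p in params])
    threshold = torch.quantile(flat, sparsity)
    with torch.no_grad():
        for p in params:
            p.mul_((p.abs() >= threshold).float())

net = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.Linear(10, 1))
magnitude_prune_(list(net.parameters()), sparsity=0.98)
```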
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
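A minimal sketch of the two ingredients named above: per-time-stamp Gaussian parameters from a recurrent network, plus a penalty that discourages abrupt changes between neighboring time-stamps (an illustration of the idea only; the paper defines its smoothness term on the model's distributions):

```python
import torch
import torch.nn as nn

class PerStepGaussian(nn.Module):
    # Emit a mean and log-variance for every time-stamp of the series.
    def __init__(self, d_in, d_hidden=32):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, 2 * d_in)

    def forward(self, x):                       # x: (batch, T, d_in)
        h, _ = self.rnn(x)
        mean, log_var = self.head(h).chunk(2, dim=-1)
        return mean, log_var

def smoothness_penalty(mean, log_var):
    # Penalize jumps between consecutive time-stamps so that isolated
    # anomalous points are not absorbed into the estimated signal.
    return ((mean[:, 1:] - mean[:, :-1]).pow(2)
            + (log_var[:, 1:] - log_var[:, :-1]).pow(2)).mean()

mean, log_var = PerStepGaussian(d_in=3)(torch.randn(2, 50, 3))
reg = smoothness_penalty(mean, log_var)
```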
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Learning Variational Data Assimilation Models and Solvers [34.22350850350653]
We introduce end-to-end neural network architectures for data assimilation.
A key feature of the proposed end-to-end learning architecture is that we may train the NN models using both supervised and unsupervised strategies.
arXiv Detail & Related papers (2020-07-25T14:28:48Z)
- Causality-aware counterfactual confounding adjustment for feature representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
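The combination can be read as a change of variables: a bijective feature map followed by a linear unmixing under an independent, heavy-tailed source prior, giving an exact likelihood. A toy objective in that spirit (my own construction; dimensions, the Laplace prior, and the identity "flow" in the usage lines are illustrative):

```python
import torch

def ica_flow_nll(x, feature_map, log_det_jac, unmix):
    # x -> f(x) (bijective features) -> s = W f(x) (linear ICA unmixing).
    # Factorized Laplace sources plus the two change-of-variables
    # log-determinants give an exact log-likelihood (constants omitted).
    s = feature_map(x) @ unmix.T
    log_prior = -s.abs().sum(-1)                # independent Laplace sources
    log_det_w = torch.slogdet(unmix)[1]         # volume change of unmixing
    return -(log_prior + log_det_jac(x) + log_det_w).mean()

# Toy usage with an identity "flow"; a real model would use an invertible
# network whose log-Jacobian is tracked analytically.
d = 4
unmix = torch.eye(d, requires_grad=True)
x = torch.randn(16, d)
ica_flow_nll(x, lambda t: t, lambda t: torch.zeros(len(t)), unmix).backward()
```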
arXiv Detail & Related papers (2020-02-18T17:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.