Learning Proposals for Probabilistic Programs with Inference Combinators
- URL: http://arxiv.org/abs/2103.00668v2
- Date: Wed, 3 Mar 2021 18:47:15 GMT
- Title: Learning Proposals for Probabilistic Programs with Inference Combinators
- Authors: Sam Stites, Heiko Zimmermann, Hao Wu, Eli Sennesh, Jan-Willem van de Meent
- Abstract summary: We develop operators for construction of proposals in probabilistic programs.
Proposals in inference samplers can be parameterized using neural networks.
We demonstrate the flexibility of this framework by implementing advanced variational methods.
- Score: 9.227032708135617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop operators for construction of proposals in probabilistic programs,
which we refer to as inference combinators. Inference combinators define a
grammar over importance samplers that compose primitive operations such as
application of a transition kernel and importance resampling. Proposals in
these samplers can be parameterized using neural networks, which in turn can be
trained by optimizing variational objectives. The result is a framework for
user-programmable variational methods that are correct by construction and can
be tailored to specific models. We demonstrate the flexibility of this
framework by implementing advanced variational methods based on amortized Gibbs
sampling and annealing.
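To make the grammar concrete, below is a minimal, self-contained Python sketch of the combinator idea. It is a toy under stated assumptions, not the authors' implementation (which operates on probabilistic program traces): here a sampler is a thunk returning a properly weighted pair (z, log_w), and the names propose, extend, and resample merely echo the paper's vocabulary.

```python
import math
import random

# Toy representation (hypothetical, not the paper's API): a sampler is a
# zero-argument callable returning (z, log_w), properly weighted for an
# unnormalized target density gamma.

def propose(log_target, sample_q, log_q):
    """Primitive importance sampler: z ~ q, log_w = log gamma(z) - log q(z)."""
    def run():
        z = sample_q()
        return z, log_target(z) - log_q(z)
    return run

def extend(sampler, log_target_prev, log_target_new, mcmc_kernel):
    """Annealing-style combinator: reweight from the previous target to a new
    one, then apply an MCMC kernel that leaves the new target invariant
    (the kernel's invariance is the caller's responsibility)."""
    def run():
        z, log_w = sampler()
        log_w += log_target_new(z) - log_target_prev(z)
        return mcmc_kernel(z), log_w
    return run

def resample(sampler, n):
    """Run n copies, resample one particle in proportion to its weight, and
    carry the average weight (log-mean-exp), which preserves proper weighting."""
    def run():
        particles = [sampler() for _ in range(n)]
        log_ws = [lw for _, lw in particles]
        m = max(log_ws)
        probs = [math.exp(lw - m) for lw in log_ws]
        z, _ = random.choices(particles, weights=probs, k=1)[0]
        return z, m + math.log(sum(probs) / n)
    return run
```

Chaining extend over a sequence of tempered targets, with resample between steps, yields an annealed SMC-style sampler in the spirit of the paper's annealing example.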
Related papers
- Training Survival Models using Scoring Rules [9.330089124239086]
Survival Analysis provides critical insights for incomplete time-to-event data.
It is also an important example of probabilistic machine learning.
We establish parametric and non-parametric sub-frameworks that allow varying degrees of flexibility.
We show that our framework recovers various parametric models, and that scoring-rule optimization performs on par with likelihood-based methods.
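As a hedged illustration of scoring-rule training (one simple instance, not the paper's framework): fit an exponential survival model by minimizing a censoring-aware log score, where events contribute -log f(t) and right-censored points contribute -log S(t). All names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def censored_log_score(log_rate, times, event):
    # exponential model: f(t) = rate * exp(-rate * t), S(t) = exp(-rate * t)
    rate = np.exp(log_rate)
    ll = event * (np.log(rate) - rate * times) + (1 - event) * (-rate * times)
    return -ll.mean()  # negative mean score, to be minimized

rng = np.random.default_rng(0)
t_true = rng.exponential(scale=2.0, size=500)   # true event times (rate 0.5)
c = rng.exponential(scale=3.0, size=500)        # censoring times
times = np.minimum(t_true, c)
event = (t_true <= c).astype(float)

res = minimize_scalar(censored_log_score, args=(times, event),
                      bounds=(-5, 5), method="bounded")
print("estimated rate:", np.exp(res.x))  # should be near 0.5
```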
arXiv Detail & Related papers (2024-03-19T20:58:38Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
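A rough, hypothetical sketch of the distribution-matching idea on a toy categorical latent (the paper itself uses GFlowNet-style objectives over token sequences): an amortized proposal q is trained by self-normalized importance-weighted maximum likelihood toward a posterior known only up to a constant.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
log_joint = np.log(np.array([0.05, 0.10, 0.40, 0.30, 0.15]))  # log p(x, z), x fixed
phi = np.zeros(K)                                             # proposal logits

for step in range(2000):
    q = np.exp(phi - phi.max()); q /= q.sum()       # q(z) = softmax(phi)
    z = rng.choice(K, size=64, p=q)                 # sample latents from q
    log_w = log_joint[z] - np.log(q[z])             # importance weights
    w = np.exp(log_w - log_w.max()); w /= w.sum()   # self-normalize
    grad = np.bincount(z, weights=w, minlength=K) - q  # grad of weighted log-lik
    phi += 0.5 * grad                               # gradient ascent step

print(np.round(np.exp(phi - phi.max()) / np.exp(phi - phi.max()).sum(), 3))
# approaches the true posterior [0.05, 0.10, 0.40, 0.30, 0.15]
```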
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- Exploiting Inferential Structure in Neural Processes [15.058161307401864]
Neural Processes (NPs) are appealing due to their ability to perform fast adaptation based on a context set.
We provide a framework that allows NPs' latent variable to be given a rich prior defined by a graphical model.
arXiv Detail & Related papers (2023-06-27T03:01:43Z)
- Compositional Probabilistic and Causal Inference using Tractable Circuit Models [20.07977560803858]
We introduce md-vtrees, a novel structural formulation of (marginal) determinism in structured decomposable PCs.
We derive the first polytime algorithms for causal inference queries such as backdoor adjustment on PCs.
arXiv Detail & Related papers (2023-04-17T13:48:16Z)
- Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match for constructing matched groups in causal inference.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
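A hedged sketch of this pipeline under simplifying assumptions (synthetic data, plain nearest-neighbor matching; function and variable names are illustrative): LASSO coefficients from an outcome model define the weights of the distance metric.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 400, 10
X = rng.normal(size=(n, d))
treat = rng.integers(0, 2, size=n)
# outcome depends strongly on the first two covariates only; true effect 1.5
y = 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * treat + rng.normal(scale=0.5, size=n)

# 1) variable importance from a LASSO outcome model (fit on controls)
lasso = Lasso(alpha=0.1).fit(X[treat == 0], y[treat == 0])
w = np.abs(lasso.coef_)  # importance weights for the distance metric

# 2) weighted nearest-neighbor matching of each treated unit to a control
controls = np.where(treat == 0)[0]
treated = np.where(treat == 1)[0]
def wdist(a, b):
    return np.sqrt(np.sum(w * (a - b) ** 2))
matches = {i: controls[np.argmin([wdist(X[i], X[j]) for j in controls])]
           for i in treated}

# simple matched-pairs effect estimate
att = np.mean([y[i] - y[matches[i]] for i in treated])
print("ATT estimate:", round(att, 2))  # should be near 1.5
```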
arXiv Detail & Related papers (2023-02-23T00:43:03Z)
- Federated Variational Inference Methods for Structured Latent Variable Models [1.0312968200748118]
Federated learning methods enable model training across distributed data sources without data leaving their original locations.
We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning.
We also provide a communication-efficient variant analogous to the canonical FedAvg algorithm.
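A hedged toy of the aggregation idea (not the paper's algorithm): with a conjugate Gaussian model for a shared mean, each client summarizes its data as a natural-parameter contribution, and the server combines contributions with the prior. The exponential-family bookkeeping shown here is what makes FedAvg-style averaging of variational parameters well-defined.

```python
import numpy as np

rng = np.random.default_rng(0)
clients = [rng.normal(loc=2.0, scale=1.0, size=100) for _ in range(5)]

prior_nat = np.array([0.0, 1.0])  # N(0, 1) prior as (precision*mean, precision)
# each client sends (sum of data, count) -- its likelihood contribution
contribs = [np.array([d.sum(), d.size]) for d in clients]  # unit noise variance

post_nat = prior_nat + np.sum(contribs, axis=0)
post_mean, post_var = post_nat[0] / post_nat[1], 1.0 / post_nat[1]
print(post_mean, post_var)  # posterior over the shared mean, close to 2.0
```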
arXiv Detail & Related papers (2023-02-07T08:35:04Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
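The underlying trick can be shown on a scalar toy: rather than backpropagating through every refinement iteration, differentiate the fixed-point condition z* = f(z*, theta) with the implicit function theorem. A minimal sketch, assuming f(z, theta) = tanh(z + theta):

```python
import numpy as np

def solve_fixed_point(theta, z=0.0, tol=1e-12, max_iter=10_000):
    # iterate z <- tanh(z + theta) to convergence
    for _ in range(max_iter):
        z_new = np.tanh(z + theta)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z_new

theta = 1.5
z_star = solve_fixed_point(theta)
# implicit function theorem: dz*/dtheta = (df/dtheta) / (1 - df/dz) at z*
sech2 = 1.0 - np.tanh(z_star + theta) ** 2   # both partials equal sech^2 here
grad_implicit = sech2 / (1.0 - sech2)

# check against a finite-difference approximation
eps = 1e-6
grad_fd = (solve_fixed_point(theta + eps) - solve_fixed_point(theta - eps)) / (2 * eps)
print(grad_implicit, grad_fd)  # should agree closely
```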
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Recursive Monte Carlo and Variational Inference with Auxiliary Variables [64.25762042361839]
Recursive auxiliary-variable inference (RAVI) is a new framework for exploiting flexible proposals.
RAVI generalizes and unifies several existing methods for inference with expressive proposal families.
We illustrate RAVI's design framework and theorems by using them to analyze and improve upon Salimans et al.'s Markov Chain Variational Inference.
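As a hedged illustration of the classical auxiliary-variable identity that RAVI generalizes: when the proposal's marginal density over z is intractable because z is drawn through an auxiliary u, any normalized "meta-inference" kernel r(u|z) restores a properly weighted estimator, w = gamma(z) r(u|z) / (q(u) q(z|u)). A toy check with Gaussians (all densities here are illustrative choices):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(0.0, 1.0, n)   # u ~ q(u) = N(0, 1)
z = rng.normal(u, 0.5)        # z | u ~ N(u, 0.5^2); marginal q(z) treated as intractable

log_w = (norm.logpdf(z, 0.0, 1.0)        # log gamma(z): normalized N(0,1) target
         + norm.logpdf(u, 0.7 * z, 0.6)  # log r(u|z): any normalized kernel works
         - norm.logpdf(u, 0.0, 1.0)      # - log q(u)
         - norm.logpdf(z, u, 0.5))       # - log q(z|u)

print(np.exp(log_w).mean())  # approaches Z = 1; a better r(u|z) lowers variance
```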
arXiv Detail & Related papers (2022-03-05T23:52:40Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
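A minimal sketch of the instance-based step, under toy assumptions (random vectors stand in for the neural edge representations): label a candidate edge by a majority vote over its most similar training edges.

```python
import numpy as np

rng = np.random.default_rng(0)
train_edges = rng.normal(size=(500, 64))                  # training edge vectors
train_labels = rng.choice(["nsubj", "obj", "amod"], 500)  # their relation labels

def label_edge(edge_vec, k=5):
    # cosine similarity to every stored training edge
    sims = train_edges @ edge_vec / (
        np.linalg.norm(train_edges, axis=1) * np.linalg.norm(edge_vec))
    top = np.argsort(-sims)[:k]                 # k most similar training edges
    labels, counts = np.unique(train_labels[top], return_counts=True)
    return labels[np.argmax(counts)]            # majority vote

print(label_edge(rng.normal(size=64)))
```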
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of control as hybrid inference (CHI) which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Generalized Adversarially Learned Inference [42.40405470084505]
We develop methods for inferring latent variables in GANs by adversarially training an image generator together with an encoder, so that two joint distributions over image and latent-vector pairs are matched.
We incorporate multiple layers of feedback on reconstructions, self-supervision, and other forms of supervision based on prior or learned knowledge about the desired solutions.
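A hedged PyTorch sketch of the underlying ALI/BiGAN-style joint matching that this work generalizes (toy dimensions and architectures; not the paper's full model with feedback layers): a discriminator distinguishes encoder pairs (x, E(x)) from generator pairs (G(z), z), and E, G are trained to fool it.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 8, 2
E = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, z_dim))
G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
D = nn.Sequential(nn.Linear(x_dim + z_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_eg = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    x = torch.randn(64, x_dim)   # stand-in for real data
    z = torch.randn(64, z_dim)   # prior samples
    pair_enc = torch.cat([x, E(x)], dim=1)   # (x, E(x)): "real" joint
    pair_gen = torch.cat([G(z), z], dim=1)   # (G(z), z): "fake" joint

    # discriminator step (detach so E, G receive no gradient here)
    d_loss = bce(D(pair_enc.detach()), torch.ones(64, 1)) + \
             bce(D(pair_gen.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # encoder/generator step: swap labels to fool the discriminator
    eg_loss = bce(D(pair_enc), torch.zeros(64, 1)) + \
              bce(D(pair_gen), torch.ones(64, 1))
    opt_eg.zero_grad(); eg_loss.backward(); opt_eg.step()
```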
arXiv Detail & Related papers (2020-06-15T02:18:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.