Guaranteed Bounds for Posterior Inference in Universal Probabilistic
Programming
- URL: http://arxiv.org/abs/2204.02948v1
- Date: Wed, 6 Apr 2022 17:28:28 GMT
- Title: Guaranteed Bounds for Posterior Inference in Universal Probabilistic
Programming
- Authors: Raven Beutner, Luke Ong, Fabian Zaiser
- Abstract summary: We prove that the actual posterior of a given program is sandwiched between the lower and upper bounds (soundness)
We introduce a weight-aware interval type system, which automatically infers interval bounds on not just the return value but also the weight of program executions, simultaneously.
Our evaluation on examples from the literature shows that the bounds are useful, and can even be used to recognise wrong outputs from posterior inference procedures.
- Score: 4.297070083645049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new method to approximate the posterior distribution of
probabilistic programs by means of computing guaranteed bounds. The starting
point of our work is an interval-based trace semantics for a recursive,
higher-order probabilistic programming language with continuous distributions.
Taking the form of (super-/subadditive) measures, these lower/upper bounds are
non-stochastic and provably correct: using the semantics, we prove that the
actual posterior of a given program is sandwiched between the lower and upper
bounds (soundness); moreover the bounds converge to the posterior
(completeness). As a practical and sound approximation, we introduce a
weight-aware interval type system, which automatically infers interval bounds
on not just the return value but also the weight of program executions,
simultaneously. We have built a tool implementation, called GuBPI, which
automatically computes these posterior lower/upper bounds. Our evaluation on
examples from the literature shows that the bounds are useful, and can even be
used to recognise wrong outputs from stochastic posterior inference procedures.
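The sandwiching idea can be illustrated with a minimal, self-contained sketch (this is not GuBPI or the paper's interval trace semantics, just an analogous toy): for a hypothetical one-variable program `x ~ Uniform(0,1)` scored against a Gaussian observation, splitting the trace space into boxes, bounding the weight on each box by interval reasoning, and combining numerator/denominator bounds yields guaranteed lower/upper bounds on a posterior probability. The program, event, and splitting strategy here are illustrative assumptions.

```python
import math

def normal_pdf(x, mu=0.5, sigma=0.1):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def weight_bounds(lo, hi, mu=0.5, sigma=0.1):
    """Sound bounds on the Gaussian score over [lo, hi], using unimodality:
    the max is at the point closest to mu, the min at an endpoint."""
    nearest = min(max(mu, lo), hi)
    w_max = normal_pdf(nearest, mu, sigma)
    w_min = min(normal_pdf(lo, mu, sigma), normal_pdf(hi, mu, sigma))
    return w_min, w_max

def posterior_prob_bounds(event_hi=0.3, n=1000):
    """Guaranteed bounds on P(x < event_hi | score) for the toy program
    x ~ Uniform(0,1); score(normal_pdf(x)). Each box contributes interval
    bounds to the unnormalised event mass and the normalising constant."""
    z_lo = z_hi = num_lo = num_hi = 0.0
    for i in range(n):
        lo, hi = i / n, (i + 1) / n
        w_min, w_max = weight_bounds(lo, hi)
        width = hi - lo
        z_lo += width * w_min
        z_hi += width * w_max
        if hi <= event_hi:   # box lies entirely inside the event
            num_lo += width * w_min
        if lo < event_hi:    # box intersects the event
            num_hi += width * w_max
    # worst-case combination of numerator and normaliser bounds
    return num_lo / z_hi, min(1.0, num_hi / z_lo)

lower, upper = posterior_prob_bounds()
print(lower, upper)  # a narrow interval guaranteed to contain the true posterior probability
```

Refining the partition (larger `n`) tightens the sandwich, mirroring the completeness result: the bounds converge to the true posterior as the boxes shrink.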
Related papers
- Quantitative Verification of Omega-regular Properties in Probabilistic Programming [12.097479337434878]
We introduce temporal posterior inference, a new framework that unifies probabilistic programming with temporal logic.
We develop a new method for computing upper and lower bounds on the satisfaction probabilities of omega-regular properties.
We implement our approach in a prototype tool, TPInfer, and evaluate it on a suite of benchmarks.
arXiv Detail & Related papers (2025-12-25T09:26:29Z) - Computing Approximated Fixpoints via Dampened Mann Iteration [0.31457219084519]
We show how to approximate the least fixpoint of functions that are not known precisely, but represented by a sequence of approximating functions that converge to them.
Our results can be used to iterate to the least fixpoint almost surely for systems where the function of interest can be approximated with given probabilistic error bounds.
arXiv Detail & Related papers (2025-01-15T16:52:21Z) - Almost-sure convergence of iterates and multipliers in stochastic
sequential quadratic optimization [21.022322975077653]
Stochastic methods for solving continuous optimization problems subject to equality constraints have attracted attention recently.
Prior convergence guarantees have been limited to the expected value of a stationarity measure converging to zero.
New almost-sure convergence guarantees for the primal iterates, Lagrange multipliers, and stationarity measures generated by a stochastic SQP algorithm are proved.
arXiv Detail & Related papers (2023-08-07T16:03:40Z) - The Stochastic Proximal Distance Algorithm [5.3315823983402755]
We propose and analyze a class of iterative optimization methods that recover a desired constrained estimation problem as a penalty parameter grows.
We extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rates.
We validate our analysis via a thorough empirical study, also showing that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
arXiv Detail & Related papers (2022-10-21T22:07:28Z) - A lower confidence sequence for the changing mean of non-negative right
heavy-tailed observations with bounded mean [9.289846887298854]
A confidence sequence produces an adapted sequence of sets for a predictable parameter sequence with a time-uniform coverage guarantee.
This work constructs a non-asymptotic lower CS for the running average conditional expectation whose slack converges to zero.
arXiv Detail & Related papers (2022-10-20T09:50:05Z) - Foundation Posteriors for Approximate Probabilistic Inference [11.64841553345271]
We formulate inference as masked language modeling in a probabilistic program.
We train a neural network to unmask the random values, defining an approximate posterior distribution.
We show the efficacy of the approach, zero-shot and fine-tuned, on a benchmark of Stan programs.
arXiv Detail & Related papers (2022-05-19T17:42:37Z) - Hessian Averaging in Stochastic Newton Methods Achieves Superlinear
Convergence [69.65563161962245]
We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method.
We show that there exists a universal weighted Hessian averaging scheme that transitions to local superlinear convergence at an optimal stage.
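The effect of Hessian averaging can be illustrated with a small scalar sketch (uniform averaging only, not the paper's universal weighted scheme): individual Hessian estimates are noisy, but their running average concentrates around the true curvature, so Newton steps taken against the averaged Hessian regain fast local convergence. The objective, noise model, and step count below are illustrative assumptions.

```python
import math
import random

def f_prime(x):
    return 2 * x - math.exp(-x)   # f(x) = x^2 + e^{-x}, strongly convex

def f_second(x):
    return 2 + math.exp(-x)

def averaged_newton(x0=3.0, n_iters=100, seed=0):
    """Newton iteration with uniformly averaged noisy Hessian estimates:
    the averaged curvature h_bar suppresses the per-sample noise."""
    rng = random.Random(seed)
    x, h_sum = x0, 0.0
    for k in range(n_iters):
        noisy_h = f_second(x) * (1 + rng.uniform(-0.3, 0.3))  # stochastic Hessian
        h_sum += noisy_h
        h_bar = h_sum / (k + 1)        # running (uniform) Hessian average
        x -= f_prime(x) / h_bar        # Newton step with averaged curvature
    return x

x_star = averaged_newton()
print(x_star, f_prime(x_star))  # near the minimizer, gradient close to zero
```

Using the raw `noisy_h` at every step instead of `h_bar` keeps a persistent multiplicative error in each step; averaging is what lets the iteration behave like exact Newton asymptotically.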
arXiv Detail & Related papers (2022-04-20T07:14:21Z) - Robust and Adaptive Temporal-Difference Learning Using An Ensemble of
Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - On the Almost Sure Convergence of Stochastic Gradient Descent in
Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for a broad range of step-size policies.
arXiv Detail & Related papers (2020-06-19T14:11:26Z) - A Distributional Analysis of Sampling-Based Reinforcement Learning
Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms with constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z) - Regret and Belief Complexity Trade-off in Gaussian Process Bandits via
Information Thresholding [42.669970064867556]
We show how to characterize the trade-off between regret bounds of GP bandit algorithms and complexity of the posterior distributions.
We observe state of the art accuracy and complexity trade-offs for GP bandit algorithms applied to global optimization.
arXiv Detail & Related papers (2020-03-23T21:05:15Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences of its use.