Transforming Probabilistic Programs for Model Checking
- URL: http://arxiv.org/abs/2008.09680v1
- Date: Fri, 21 Aug 2020 21:06:34 GMT
- Title: Transforming Probabilistic Programs for Model Checking
- Authors: Ryan Bernstein, Matthijs Vákár, Jeannette Wing
- Abstract summary: We apply static analysis to probabilistic programs to automate large parts of two crucial model checking methods.
Our method transforms a probabilistic program specifying a density function into an efficient forward-sampling form.
We present an implementation targeting the popular Stan probabilistic programming language.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic programming is perfectly suited to reliable and transparent
data science, as it allows the user to specify their models in a high-level
language without worrying about the complexities of how to fit the models.
Static analysis of probabilistic programs presents even further opportunities
for enabling a high-level style of programming, by automating time-consuming
and error-prone tasks. We apply static analysis to probabilistic programs to
automate large parts of two crucial model checking methods: Prior Predictive
Checks and Simulation-Based Calibration. Our method transforms a probabilistic
program specifying a density function into an efficient forward-sampling form.
To achieve this transformation, we extract a factor graph from a probabilistic
program using static analysis, generate a set of proposal directed acyclic
graphs using a SAT solver, select a graph which will produce provably correct
sampling code, then generate one or more sampling programs. We allow minimal
user interaction to broaden the scope of application beyond what is possible
with static analysis alone. We present an implementation targeting the popular
Stan probabilistic programming language, automating large parts of a robust
Bayesian workflow for a wide community of probabilistic programming users.
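The transformation described above turns a density specification into forward-sampling code by ordering variables so that each is drawn after its parents. As a minimal illustrative sketch of that ancestral-sampling idea (the variable names, dictionary representation, and sampler API here are assumptions for illustration, not the paper's actual factor-graph extraction or SAT-based DAG selection):

```python
import random

# Toy model as a dependency graph: each variable lists its parents and a
# sampler that draws it given the parents' values. This mirrors the idea of
# converting a density specification into a forward-sampling order.
model = {
    "mu":    {"parents": [], "sample": lambda v: random.gauss(0.0, 1.0)},
    "sigma": {"parents": [], "sample": lambda v: abs(random.gauss(0.0, 1.0)) + 0.1},
    "y":     {"parents": ["mu", "sigma"],
              "sample": lambda v: random.gauss(v["mu"], v["sigma"])},
}

def topological_order(model):
    """Order variables so every parent precedes its children."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        for parent in model[name]["parents"]:
            visit(parent)
        seen.add(name)
        order.append(name)
    for name in model:
        visit(name)
    return order

def forward_sample(model):
    """Draw one joint sample by ancestral sampling along the order."""
    values = {}
    for name in topological_order(model):
        values[name] = model[name]["sample"](values)
    return values

draw = forward_sample(model)
```

Repeated calls to `forward_sample` yield the prior draws needed for checks such as Prior Predictive Checks and Simulation-Based Calibration; the paper's contribution is deriving such a sampling order automatically and provably correctly from a Stan-style density program.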
Related papers
- User-defined Event Sampling and Uncertainty Quantification in Diffusion
Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and our method matches data statistics even when sampling from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z) - Toward Trustworthy Neural Program Synthesis [6.3557174349423455]
We develop an approach to estimate the probability that a program sampled from a large language model is correct.
Given a natural language description of a programming problem, our method samples both candidate programs as well as candidate predicates specifying how the program should behave.
arXiv Detail & Related papers (2022-09-29T20:32:07Z) - Reliability analysis of discrete-state performance functions via
adaptive sequential sampling with detection of failure surfaces [0.0]
The paper presents a new efficient and robust method for rare event probability estimation.
The method can estimate the probabilities of multiple failure types and accommodate this information to increase the accuracy of the estimated probabilities.
arXiv Detail & Related papers (2022-08-04T05:59:25Z) - Program Analysis of Probabilistic Programs [3.299672391663527]
This dissertation presents three novel techniques to improve probabilistic programming using program analysis.
The techniques analyse a probabilistic program and adapt it to make inference more efficient, sometimes in a way that would have been tedious or impossible to do by hand.
arXiv Detail & Related papers (2022-04-14T10:40:54Z) - Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z) - flip-hoisting: Exploiting Repeated Parameters in Discrete Probabilistic
Programs [25.320181572646135]
We present a program analysis and associated optimization, flip-hoisting, that collapses repetitious parameters in discrete probabilistic programs to improve inference performance.
We implement flip-hoisting in an existing probabilistic programming language and show empirically that it significantly improves inference performance.
arXiv Detail & Related papers (2021-10-19T22:04:26Z) - Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z) - Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic
Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z) - How to Design Sample and Computationally Efficient VQA Models [53.65668097847456]
We find that representing the text as probabilistic programs and images as object-level scene graphs best satisfy these desiderata.
We extend existing models to leverage these soft programs and scene graphs to train on question answer pairs in an end-to-end manner.
arXiv Detail & Related papers (2021-03-22T01:48:16Z) - Symbolic Parallel Adaptive Importance Sampling for Probabilistic Program
Analysis [9.204612164524947]
Probabilistic software analysis aims at quantifying the probability of a target event occurring during the execution of a program.
We present SYMbolic Parallel Adaptive Importance Sampling (SYMPAIS), a new inference method tailored to analyze path conditions generated from the symbolic execution of programs.
arXiv Detail & Related papers (2020-10-10T17:39:12Z) - Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
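Among the related papers, flip-hoisting describes a concrete program optimization: repeated coin flips with identical probabilities are rewritten to share one parameter, shrinking the set of distinct parameters an inference engine must track. A minimal illustrative sketch of that idea (the list-of-tuples program representation and `theta` naming are hypothetical, not the paper's implementation):

```python
def hoist_flips(program):
    """program: list of ('flip', p) statements.

    Returns (params, rewritten): a map from each distinct probability to a
    shared parameter name, and the program rewritten to reference those
    shared parameters instead of repeating literal probabilities.
    """
    params = {}        # probability literal -> shared parameter name
    rewritten = []
    for op, p in program:
        if p not in params:
            params[p] = f"theta{len(params)}"  # hypothetical naming scheme
        rewritten.append((op, params[p]))
    return params, rewritten

prog = [("flip", 0.5), ("flip", 0.3), ("flip", 0.5), ("flip", 0.5)]
params, new_prog = hoist_flips(prog)
# Four flips now share two parameters instead of carrying four literals.
```

The real analysis must also prove that the collapsed flips are interchangeable in context; this sketch only shows the rewriting step.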
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.