SPPL: Probabilistic Programming with Fast Exact Symbolic Inference
- URL: http://arxiv.org/abs/2010.03485v3
- Date: Fri, 11 Jun 2021 12:21:13 GMT
- Title: SPPL: Probabilistic Programming with Fast Exact Symbolic Inference
- Authors: Feras A. Saad, Martin C. Rinard, Vikash K. Mansinghka
- Abstract summary: Sum-Product Probabilistic Language (SPPL) delivers exact solutions to a broad range of probabilistic inference queries.
SPPL translates probabilistic programs into sum-product expressions, a new symbolic representation and associated semantic domain.
We implement a prototype of SPPL with a modular architecture and evaluate it on benchmarks the system targets.
- Score: 2.371061885439857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the Sum-Product Probabilistic Language (SPPL), a new probabilistic
programming language that automatically delivers exact solutions to a broad
range of probabilistic inference queries. SPPL translates probabilistic
programs into sum-product expressions, a new symbolic representation and
associated semantic domain that extends standard sum-product networks to
support mixed-type distributions, numeric transformations, logical formulas,
and pointwise and set-valued constraints. We formalize SPPL via a novel
translation strategy from probabilistic programs to sum-product expressions and
give sound exact algorithms for conditioning on and computing probabilities of
events. SPPL imposes a collection of restrictions on probabilistic programs to
ensure they can be translated into sum-product expressions, which allow the
system to leverage new techniques for improving the scalability of translation
and inference by automatically exploiting probabilistic structure. We implement
a prototype of SPPL with a modular architecture and evaluate it on benchmarks
the system targets, showing that it obtains up to 3500x speedups over
state-of-the-art symbolic systems on tasks such as verifying the fairness of
decision tree classifiers, smoothing hidden Markov models, conditioning
transformed random variables, and computing rare event probabilities.
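To make the flavor of these queries concrete, the following is a minimal Python sketch, not the actual SPPL API, of exact inference on a two-component mixture, the simplest sum-product structure: a sum node over two Gaussian leaves. The model and numbers are illustrative; the point is that both the event probability and the posterior are weighted sums of closed-form CDF terms, with no sampling or numerical integration.

```python
# Minimal sketch, not the SPPL API: exact inference on a two-component
# mixture (a single sum node over two Gaussian leaves).
from scipy.stats import norm

# Model: Z ~ Bernoulli(0.3); X ~ Normal(0, 1) if Z == 0 else Normal(4, 1).
weights = {0: 0.7, 1: 0.3}
components = {0: norm(loc=0, scale=1), 1: norm(loc=4, scale=1)}

# Exact P(X > 2): a weighted sum of closed-form survival functions.
p_event = sum(w * components[z].sf(2) for z, w in weights.items())

# Exact conditioning: P(Z = 1 | X > 2) by Bayes' rule, again closed form.
p_z1_given_event = weights[1] * components[1].sf(2) / p_event

print(f"P(X > 2)         = {p_event:.6f}")
print(f"P(Z = 1 | X > 2) = {p_z1_given_event:.6f}")
```

SPPL generalizes this pattern to programs with mixed-type variables, numeric transforms, and set-valued constraints by compiling them into nested sum-product expressions.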
Related papers
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show that a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by an LSP (LLM-based Symbolic Program) is a combination of natural-language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Do LLMs Play Dice? Exploring Probability Distribution Sampling in Large Language Models for Behavioral Simulation [73.58618024960968]
An increasing number of studies are employing large language models (LLMs) as agents to emulate the sequential decision-making processes of humans.
This raises the question of whether LLM agents can comprehend probability distributions.
Our analysis indicates that LLM agents can understand probabilities, but they struggle with probability sampling.
arXiv Detail & Related papers (2024-04-13T16:59:28Z)
- Scalable Neural-Probabilistic Answer Set Programming [18.136093815001423]
We introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logic program, united via answer set programming (ASP).
We show how to prune the stochastically insignificant parts of the (ground) program, speeding up reasoning without sacrificing predictive performance.
We evaluate SLASH on a variety of tasks, including the benchmark task of MNIST addition and Visual Question Answering (VQA).
arXiv Detail & Related papers (2023-06-14T09:45:29Z)
- Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems (MJLS).
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z)
- Marginal Inference queries in Hidden Markov Models under context-free grammar constraints [0.348097307252416]
We address the question of computing the likelihood of context-free grammars (CFGs) in Hidden Markov Models (HMMs).
We show that the problem is NP-hard, even with the promise that the CFG has a degree of ambiguity less than or equal to 2.
We then propose a fully randomized approximation scheme to approximate the likelihood for the case of ambiguous CFGs.
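For contrast with this hardness result, the unconstrained version of the query, the marginal likelihood of a fixed observation sequence under an HMM, is exactly computable in polynomial time by the standard forward algorithm; the NP-hardness comes from additionally requiring the output sequence to lie in the CFG's language. A minimal numpy sketch with toy, hypothetical parameters:

```python
import numpy as np

# Toy HMM with hypothetical parameters: 2 hidden states, 2 output symbols.
pi = np.array([0.6, 0.4])          # initial state distribution
A = np.array([[0.7, 0.3],          # A[i, j] = P(next state j | state i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # B[i, k] = P(symbol k | state i)
              [0.2, 0.8]])

def forward_likelihood(obs):
    """Exact P(observations) via the forward recursion, O(T * S^2)."""
    alpha = pi * B[:, obs[0]]      # alpha_1(i) = pi_i * B[i, o_1]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

print(forward_likelihood([0, 1, 1, 0]))
```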
arXiv Detail & Related papers (2022-06-26T12:44:18Z)
- Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space.
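A drastically simplified sketch of the idea, assuming a fully factorized base distribution and a tiny output space: zero out every structure that violates the constraint and renormalize. SPL achieves this tractably with probabilistic circuits; the brute-force enumeration and the one-hot constraint below are for illustration only.

```python
import itertools
import numpy as np

def constrained_distribution(logits, constraint):
    """Renormalize a product of independent Bernoullis over only the
    outputs satisfying `constraint`, so invalid structures get mass 0."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    dist = {}
    for y in itertools.product([0, 1], repeat=len(p)):
        if constraint(y):
            mass = np.prod([pi if yi else 1.0 - pi for pi, yi in zip(p, y)])
            dist[y] = float(mass)
    total = sum(dist.values())
    return {y: m / total for y, m in dist.items()}

# Hypothetical hard constraint: the output must be one-hot.
dist = constrained_distribution([0.2, -1.0, 0.5], lambda y: sum(y) == 1)
print(dist)   # three valid structures, probabilities sum to 1
```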
arXiv Detail & Related papers (2022-06-01T12:02:38Z)
- flip-hoisting: Exploiting Repeated Parameters in Discrete Probabilistic Programs [25.320181572646135]
We present a program analysis and associated optimization, flip-hoisting, that collapses repetitious parameters in discrete probabilistic programs to improve inference performance.
We implement flip-hoisting in an existing probabilistic programming language and show empirically that it significantly improves inference performance.
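The kind of repetition being exploited can be seen in a toy form: flips that share a single parameter are exchangeable, so a query depending only on how many of them succeed collapses a 2^n-term enumeration into n + 1 binomial terms. The sketch below illustrates that underlying saving; flip-hoisting itself obtains it as a compiler analysis rather than a manual rewrite.

```python
from math import comb

# Naive exact inference: enumerate all 2^n joint assignments of n flips.
def p_at_least_k_naive(n, k, theta):
    total = 0.0
    for assignment in range(2 ** n):
        ones = bin(assignment).count("1")
        if ones >= k:
            total += theta ** ones * (1.0 - theta) ** (n - ones)
    return total

# Flips sharing one parameter are exchangeable: collapse them into a single
# count, shrinking the state space from 2^n outcomes to n + 1.
def p_at_least_k_collapsed(n, k, theta):
    return sum(comb(n, j) * theta ** j * (1.0 - theta) ** (n - j)
               for j in range(k, n + 1))

assert abs(p_at_least_k_naive(12, 7, 0.3)
           - p_at_least_k_collapsed(12, 7, 0.3)) < 1e-12
```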
arXiv Detail & Related papers (2021-10-19T22:04:26Z)
- Probabilistic Generating Circuits [50.98473654244851]
We propose probabilistic generating circuits (PGCs) for the efficient representation of probability generating polynomials.
PGCs are not just a theoretical framework that unifies vastly different existing models, but also show huge potential in modeling realistic data.
We exhibit a simple class of PGCs that are not trivially subsumed by simple combinations of probabilistic circuits (PCs) and determinantal point processes (DPPs), and obtain competitive performance on a suite of density estimation benchmarks.
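The representation at work, in a toy sympy sketch: the generating polynomial of a distribution over binary variables carries P(X = x) as the coefficient of the monomial encoding x, and marginalization is substitution of 1. Two independent bits with hypothetical numbers suffice to show the mechanics; the interesting PGCs are exactly those whose polynomials do not factor this way, such as the determinant-based polynomials of DPPs.

```python
import sympy as sp

z1, z2 = sp.symbols("z1 z2")

# Generating polynomial of two independent bits, X1 ~ Bern(0.3) and
# X2 ~ Bern(0.6): the coefficient of z1**a * z2**b is P(X1 = a, X2 = b).
g = sp.expand((0.7 + 0.3 * z1) * (0.4 + 0.6 * z2))

# Read off a joint probability: P(X1 = 1, X2 = 0) = 0.3 * 0.4 = 0.12.
print(g.coeff(z1, 1).coeff(z2, 0))

# Marginalize X2 by substituting z2 = 1; the result generates P(X1).
print(sp.expand(g.subs(z2, 1)))   # 0.3*z1 + 0.7
```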
arXiv Detail & Related papers (2021-02-19T07:06:53Z)
- Symbolic Parallel Adaptive Importance Sampling for Probabilistic Program Analysis [9.204612164524947]
Probabilistic software analysis aims at quantifying the probability of a target event occurring during the execution of a program.
We present SYMbolic Parallel Adaptive Importance Sampling (SYMPAIS), a new inference method tailored to analyze path conditions generated from the symbolic execution of programs.
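Not SYMPAIS itself, but its basic ingredient, shown on a hypothetical path condition: estimate the probability that Gaussian program inputs take a rare branch by proposing from a distribution shifted toward the satisfying region and reweighting by the density ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical path condition from symbolic execution: inputs
# x, y ~ Normal(0, 1) i.i.d., and the branch is taken iff x + y > 6.
def path_condition(x, y):
    return x + y > 6

# Crude Monte Carlo: the branch is hit roughly once per 100,000 samples.
x, y = rng.standard_normal((2, 100_000))
print("crude MC:", np.mean(path_condition(x, y)))

# Importance sampling: propose each input from Normal(3, 1) so the branch
# is hit about half the time, then reweight by p(x)p(y) / q(x)q(y).
def log_pdf(v, mu):          # log density of Normal(mu, 1)
    return -0.5 * (v - mu) ** 2 - 0.5 * np.log(2 * np.pi)

xq, yq = rng.normal(3.0, 1.0, size=(2, 100_000))
log_w = (log_pdf(xq, 0.0) + log_pdf(yq, 0.0)
         - log_pdf(xq, 3.0) - log_pdf(yq, 3.0))
est = np.mean(path_condition(xq, yq) * np.exp(log_w))
print("importance sampling:", est)  # close to P(Normal(0, 2) > 6) ≈ 1.1e-5
```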
arXiv Detail & Related papers (2020-10-10T17:39:12Z)
- Transforming Probabilistic Programs for Model Checking [0.0]
We apply static analysis to probabilistic programs to automate large parts of two crucial model checking methods.
Our method transforms a probabilistic program specifying a density function into an efficient forward-sampling form.
We present an implementation targeting the popular Stan probabilistic programming language.
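A toy version of that transformation, written in Python rather than Stan and using a hypothetical two-variable model: the same model stated as a log-density, the form density-based programs give, and as the forward sampler the analysis recovers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Density form (how a Stan-style program states the model): a joint
# log-density over (mu, x), up to additive constants. Good for MCMC,
# but it cannot be simulated from directly.
def log_joint(mu, x):
    lp = -0.5 * mu ** 2                      # mu ~ Normal(0, 1)
    lp += -0.5 * np.sum((x - mu) ** 2)       # x[i] ~ Normal(mu, 1)
    return lp

# Forward-sampling form (what the transformation recovers): draw each
# variable from its conditional in topological order, enabling fast
# simulation-based model checks.
def forward_sample(n):
    mu = rng.standard_normal()               # mu ~ Normal(0, 1)
    x = rng.normal(mu, 1.0, size=n)          # x[i] ~ Normal(mu, 1)
    return mu, x

mu, x = forward_sample(5)
print(mu, log_joint(mu, x))
```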
arXiv Detail & Related papers (2020-08-21T21:06:34Z)
- Synthetic Datasets for Neural Program Synthesis [66.20924952964117]
We propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance.
arXiv Detail & Related papers (2019-12-27T21:28:10Z)