plingo: A system for probabilistic reasoning in clingo based on lpmln
- URL: http://arxiv.org/abs/2206.11515v1
- Date: Thu, 23 Jun 2022 07:51:10 GMT
- Title: plingo: A system for probabilistic reasoning in clingo based on lpmln
- Authors: Susana Hahn (1), Tomi Janhunen (2), Roland Kaminski (1), Javier Romero (1), Nicolas Rühling (1), Torsten Schaub (1) ((1) University of Potsdam, Germany, (2) Tampere University, Finland)
- Abstract summary: We present plingo, an extension of the ASP system clingo with various probabilistic reasoning modes.
Plingo is centered upon LP^MLN, a probabilistic extension of ASP based on a weight scheme from Markov Logic.
We evaluate plingo's performance empirically by comparing it to other probabilistic systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present plingo, an extension of the ASP system clingo with various
probabilistic reasoning modes. Plingo is centered upon LP^MLN, a probabilistic
extension of ASP based on a weight scheme from Markov Logic. This choice is
motivated by the fact that the core probabilistic reasoning modes can be mapped
onto optimization problems and that LP^MLN may serve as a middle-ground
formalism connecting to other probabilistic approaches. As a result, plingo
offers three alternative frontends, for LP^MLN, P-log, and ProbLog. The
corresponding input languages and reasoning modes are implemented by means of
clingo's multi-shot and theory solving capabilities. The core of plingo amounts
to a re-implementation of LP^MLN in terms of modern ASP technology, extended by
an approximation technique based on a new method for answer set enumeration in
the order of optimality. We evaluate plingo's performance empirically by
comparing it to other probabilistic systems.
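The weight scheme from Markov Logic mentioned in the abstract assigns each interpretation a probability proportional to the exponentiated sum of the weights of the soft rules it satisfies. The sketch below illustrates that scheme on a tiny hypothetical weighted program; note it is a simplification (the rules and atoms are invented, and LP^MLN proper assigns mass only to stable models, whereas here all truth assignments are considered):

```python
import math
from itertools import product

# Hypothetical soft rules over atoms a, b: each rule is (weight, condition on a world).
# LP^MLN restricts probability mass to stable models; for illustration we use plain
# propositional satisfaction over all truth assignments, keeping only the Markov
# Logic weight scheme: P(I) is proportional to exp(sum of weights of satisfied rules).
soft_rules = [
    (2.0, lambda w: (not w["a"]) or w["b"]),  # weight 2.0 : b :- a
    (1.0, lambda w: w["a"]),                  # weight 1.0 : a
]

atoms = ["a", "b"]
worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]

def unnormalized_weight(world):
    """exp of the summed weights of all soft rules this world satisfies."""
    return math.exp(sum(w for w, rule in soft_rules if rule(world)))

# Normalizing constant Z and the resulting distribution over worlds.
Z = sum(unnormalized_weight(w) for w in worlds)
probs = {tuple(sorted(a for a in atoms if w[a])): unnormalized_weight(w) / Z
         for w in worlds}

# Marginal probability of atom b: total mass of worlds where b holds.
p_b = sum(unnormalized_weight(w) for w in worlds if w["b"]) / Z
```

The world {a, b} satisfies both rules and so receives the largest share of the mass; mapping these reasoning modes onto optimization, as the abstract describes, amounts to searching for such maximum-weight models rather than enumerating all of them.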
Related papers
- Poisson Process for Bayesian Optimization
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Pseudo-Likelihood Inference
Pseudo-Likelihood Inference (PLI) is a new method that brings neural approximation into ABC, making it competitive on challenging Bayesian system identification tasks.
PLI allows for optimizing neural posteriors via gradient descent, does not rely on summary statistics, and enables multiple observations as input.
The effectiveness of PLI is evaluated on four classical SBI benchmark tasks and on a highly dynamic physical system.
arXiv Detail & Related papers (2023-11-28T10:17:52Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs)
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Scalable Neural-Probabilistic Answer Set Programming
We introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logic program, united via answer set programming (ASP)
We show how to prune the insignificant parts of the (ground) program, speeding up reasoning without sacrificing predictive performance.
We evaluate SLASH on a variety of different tasks, including the benchmark task of MNIST addition and Visual Question Answering (VQA)
arXiv Detail & Related papers (2023-06-14T09:45:29Z)
- smProbLog: Stable Model Semantics in ProbLog for Probabilistic Argumentation
We show that the programs representing probabilistic argumentation frameworks do not satisfy a common assumption in probabilistic logic programming (PLP) semantics.
The second contribution is then a novel PLP semantics for programs where a choice of probabilistic facts does not uniquely determine the truth assignment of the logical atoms.
The third contribution is the implementation of a PLP system supporting this semantics: smProbLog.
arXiv Detail & Related papers (2023-04-03T10:59:25Z)
- Declarative Probabilistic Logic Programming in Discrete-Continuous Domains
We contribute the hybrid distribution semantics together with the hybrid PLP language DC-ProbLog and its inference engine infinitesimal algebraic likelihood weighting (IALW)
We generalize the state-of-the-art of PLP towards hybrid PLP in three different aspects: semantics, language and inference.
IALW is the first inference algorithm for hybrid probabilistic programming based on knowledge compilation.
arXiv Detail & Related papers (2023-02-21T13:50:38Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
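The de-biasing role of importance sampling mentioned above can be sketched generically (this is not the paper's method, just the standard self-normalized estimator): draw from a proposal q, weight each draw by the unnormalized target density over the proposal density, and normalize. The target kernel, proposal, and test function below are illustrative choices:

```python
import math
import random

# Self-normalized importance sampling: estimate E_p[f(X)] using draws from a
# proposal q, re-weighted by the unnormalized target density p_tilde.
random.seed(0)

def p_tilde(x):
    """Unnormalized target: standard normal density kernel."""
    return math.exp(-0.5 * x * x)

def q_sample():
    """Proposal: a wider normal, N(0, 2^2), so weights stay bounded."""
    return random.gauss(0.0, 2.0)

def q_pdf(x):
    return math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2 * math.pi))

xs = [q_sample() for _ in range(50_000)]
ws = [p_tilde(x) / q_pdf(x) for x in xs]

# Self-normalized estimate of E_p[X^2]; for the standard normal the true value is 1.
est = sum(w * x * x for w, x in zip(ws, xs)) / sum(ws)
```

Self-normalization makes the estimator usable when the target is known only up to a constant, at the cost of a small bias that vanishes as the sample size grows.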
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Distributionally Robust Bayesian Quadrature Optimization
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
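The "standard BQO approach" described above can be sketched in generic form: maximize a Monte Carlo estimate of the expected objective over a fixed set of i.i.d. samples. The objective f, the sample distribution, and the candidate grid below are all illustrative assumptions, not taken from the paper:

```python
import random

# Fixed i.i.d. samples of the uncertain variable w (the "limited set" of samples).
random.seed(1)
w_samples = [random.gauss(0.0, 1.0) for _ in range(1000)]

def f(x, w):
    """Toy objective: E_w[f(x, w)] is maximized near x = E[w] = 0."""
    return -(x - w) ** 2

def mc_objective(x):
    """Monte Carlo estimate of E_w[f(x, w)] over the fixed sample set."""
    return sum(f(x, w) for w in w_samples) / len(w_samples)

# Maximize the estimate by brute force over a small candidate grid.
candidates = [i / 10 for i in range(-20, 21)]
best_x = max(candidates, key=mc_objective)
```

The distributionally robust variant guards against this estimate being misleading when the fixed sample set is small or unrepresentative of the true distribution.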
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.