LPMLN, Weak Constraints, and P-log
- URL: http://arxiv.org/abs/2506.12784v1
- Date: Sun, 15 Jun 2025 09:28:20 GMT
- Title: LPMLN, Weak Constraints, and P-log
- Authors: Joohyung Lee, Zhun Yang
- Abstract summary: LPMLN is a formalism that extends answer set programs by adopting the log-linear weight scheme of Markov Logic. We present a translation of LPMLN into programs with weak constraints and a translation of P-log into LPMLN, which complement the existing translations in the opposite directions.
- Score: 9.110296007838533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LPMLN is a recently introduced formalism that extends answer set programs by adopting the log-linear weight scheme of Markov Logic. This paper investigates the relationships between LPMLN and two other extensions of answer set programs: weak constraints to express a quantitative preference among answer sets, and P-log to incorporate probabilistic uncertainty. We present a translation of LPMLN into programs with weak constraints and a translation of P-log into LPMLN, which complement the existing translations in the opposite directions. The first translation allows us to compute the most probable stable models (i.e., MAP estimates) of LPMLN programs using standard ASP solvers. This result can be extended to other formalisms, such as Markov Logic, ProbLog, and Pearl's Causal Models, that are shown to be translatable into LPMLN. The second translation tells us how probabilistic nonmonotonicity (the ability of the reasoner to change his probabilistic model as a result of new information) of P-log can be represented in LPMLN, which yields a way to compute P-log using standard ASP solvers and MLN solvers.
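To make the first translation concrete, below is a minimal sketch of the idea behind rewriting a soft LPMLN rule `w : Head :- Body` into standard ASP rules plus a weak constraint, so that a solver's optimal stable models correspond to MAP estimates. The function `soft_rule_to_asp`, the weight-scaling constant `SCALE`, and the example rule are illustrative assumptions for ground rules in clingo-style syntax, not the authors' reference implementation.

```python
# Sketch (not the paper's implementation): translate one ground soft LPMLN
# rule "weight : head :- body." into ASP rules with a weak constraint,
# using a fresh unsat/1 atom per rule index. Rules with variables would
# need those variables threaded through the unsat atom as well.

SCALE = 1000  # ASP weak-constraint weights must be integers, so scale reals


def soft_rule_to_asp(index: int, weight: float, head: str, body: str = "") -> str:
    """The rule may be left unsatisfied in a stable model, but doing so
    incurs a penalty proportional to its weight."""
    unsat = f"unsat({index})"
    body_part = f", {body}" if body else ""
    return "\n".join([
        # unsat(i) holds when the body is derived but the head is not
        f"{unsat} :- not {head}{body_part}.",
        # otherwise the rule behaves as an ordinary ASP rule
        f"{head} :- not {unsat}{body_part}.",
        # weak constraint: penalize leaving rule i unsatisfied
        f":~ {unsat}. [{int(round(weight * SCALE))}@0, {index}]",
    ])


if __name__ == "__main__":
    # soft rule "2.3 : flies(tweety) :- bird(tweety)."
    print(soft_rule_to_asp(1, 2.3, "flies(tweety)", "bird(tweety)"))
```

Passing the generated rules (together with the program's hard rules, kept unchanged, and a fact such as `bird(tweety).`) to an ASP solver's optimization mode then returns optimal stable models, i.e., the MAP estimates that the first translation is designed to compute.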
Related papers
- Statistical Hypothesis Testing for Auditing Robustness in Language Models [49.1574468325115]
We introduce distribution-based perturbation analysis, a framework that reformulates perturbation analysis as a frequentist hypothesis testing problem. We construct empirical null and alternative output distributions within a low-dimensional semantic similarity space via Monte Carlo sampling. We show how we can quantify response changes, measure true/false positive rates, and evaluate alignment with reference models.
arXiv Detail & Related papers (2025-06-09T17:11:07Z)
- LLM-Guided Probabilistic Program Induction for POMDP Model Estimation [40.98644220584212]
Partially Observable Markov Decision Processes (POMDPs) model decision making under uncertainty. We are interested in a subclass of POMDPs wherein the components of the model, including the observation function, reward function, transition function, and initial state distribution function, can be modeled as low-complexity probabilistic graphical models.
arXiv Detail & Related papers (2025-05-04T18:59:07Z)
- On Scaling Neurosymbolic Programming through Guided Logical Inference [1.124958340749622]
We propose a new approach centered around an exact algorithm, DPNL, that enables bypassing the computation of the logical provenance. We show that this approach can be adapted for approximate reasoning with $\epsilon$ or $(\epsilon, \delta)$ guarantees, called ApproxDPNL.
arXiv Detail & Related papers (2025-01-30T08:49:25Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by these LLM-based symbolic programs (LSPs) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Semirings for Probabilistic and Neuro-Symbolic Logic Programming [15.747744148181829]
We show that many extensions of probabilistic logic programming can be cast within a common algebraic logic programming framework.
This holds not only for the PLP variations themselves but also for the underlying execution mechanism, which is based on (algebraic) model counting.
arXiv Detail & Related papers (2024-02-21T13:06:52Z)
- Hybrid Probabilistic Logic Programming: Inference and Learning [1.14219428942199]
This thesis focuses on advancing probabilistic logic programming (PLP), which combines probability theory for uncertainty and logic programming for relations.
The first contribution is the introduction of context-specific likelihood weighting (CS-LW), a new sampling algorithm that exploits context-specific independencies for computational gains.
Next, a new hybrid PLP, DC#, is introduced, which integrates the syntax of Distributional Clauses with Bayesian logic programs and represents three types of independencies.
The scalable inference algorithm FO-CS-LW is introduced for DC#.
arXiv Detail & Related papers (2023-02-01T15:07:36Z)
- Latent Bottlenecked Attentive Neural Processes [71.18817592128207]
We present Latent Bottlenecked Attentive Neural Processes (LBANPs).
LBANPs have a querying computational complexity independent of the number of context datapoints.
We show LBANPs achieve results competitive with the state-of-the-art on meta-regression, image completion, and contextual multi-armed bandits.
arXiv Detail & Related papers (2022-11-15T19:21:41Z)
- Optimality Guarantees for Particle Belief Approximation of POMDPs [55.83001584645448]
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems.
POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid.
We propose a theory characterizing the approximation error of the particle filtering techniques that these algorithms use.
arXiv Detail & Related papers (2022-10-10T21:11:55Z)
- Towards Semantic Communication Protocols: A Probabilistic Logic Perspective [69.68769942563812]
We propose a semantic protocol model (SPM) constructed by transforming a neural protocol model (NPM) into an interpretable symbolic graph written in the probabilistic logic programming language ProbLog.
By leveraging its interpretability and memory-efficiency, we demonstrate several applications such as SPM reconfiguration for collision-avoidance.
arXiv Detail & Related papers (2022-07-08T14:19:36Z)
- A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs [117.82903457289584]
We derive a novel problem-dependent lower bound for regret in finite-horizon Markov Decision Processes (MDPs).
We show that our lower bound is considerably smaller than in the general case and that it does not scale with the minimum action gap at all.
We show that this last result is attainable (up to $\mathrm{poly}(H)$ terms, where $H$ is the horizon) by providing a regret upper bound based on policy gaps for an optimistic algorithm.
arXiv Detail & Related papers (2021-06-24T13:46:09Z)
- Model Explainability in Deep Learning Based Natural Language Processing [0.0]
We reviewed and compared some popular machine learning model explainability methodologies.
We applied one of the NLP explainability methods to an NLP classification model.
We identified some common issues due to the special nature of NLP models.
arXiv Detail & Related papers (2021-06-14T13:23:20Z)