Non-ground Abductive Logic Programming with Probabilistic Integrity
Constraints
- URL: http://arxiv.org/abs/2108.03033v1
- Date: Fri, 6 Aug 2021 10:22:12 GMT
- Title: Non-ground Abductive Logic Programming with Probabilistic Integrity
Constraints
- Authors: Elena Bellodi, Marco Gavanelli, Riccardo Zese, Evelina Lamma, Fabrizio
Riguzzi
- Abstract summary: In this paper, we consider a richer logic language, coping with probabilistic abduction with variables.
We first present the overall abductive language, and its semantics according to the Distribution Semantics.
We then introduce a proof procedure, obtained by extending one previously presented, and prove its soundness and completeness.
- Score: 2.624902795082451
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertain information is being taken into account in an increasing number of
application fields. In the meantime, abduction has been proved a powerful tool
for handling hypothetical reasoning and incomplete knowledge. Probabilistic
logical models are a suitable framework to handle uncertain information, and in
the last decade many probabilistic logical languages have been proposed, as
well as inference and learning systems for them. In the realm of Abductive
Logic Programming (ALP), a variety of proof procedures have been defined as
well. In this paper, we consider a richer logic language, coping with
probabilistic abduction with variables. In particular, we consider an ALP
program enriched with integrity constraints à la IFF, possibly annotated with
a probability value. We first present the overall abductive language, and its
semantics according to the Distribution Semantics. We then introduce a proof
procedure, obtained by extending one previously presented, and prove its
soundness and completeness.
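To make the setting concrete, the sketch below is a small, purely illustrative Python enumeration of how the Distribution Semantics treats probability-annotated, IFF-style integrity constraints: every subset of the annotated constraints is a possible world, and the probability of an abductive explanation is the total probability of the worlds whose active constraints it satisfies. The predicate names, constraint shapes, and probabilities are invented for illustration and are not taken from the paper, whose proof procedure extends an IFF-style procedure rather than enumerating worlds.

```python
from itertools import product

# Illustrative only: atoms, constraint shapes, and probabilities are invented.
# Each probabilistic integrity constraint "p :: Body -> Head" is included in a
# world with probability p (Distribution Semantics over the annotated ICs).
PROB_ICS = [
    (0.9, ({"flu"}, {"fever"})),      # 0.9 :: flu -> fever
    (0.4, ({"fever"}, {"doctor"})),   # 0.4 :: fever -> doctor
]

def violates(abduced, ic):
    """An IFF-style constraint Body -> Head is violated when the whole body
    is abduced but no atom of the (disjunctive) head is."""
    body, head = ic
    return body <= abduced and not (head & abduced)

def explanation_probability(abduced):
    """Sum the probabilities of the worlds (choices of active constraints)
    in which the abduced set satisfies every active constraint."""
    total = 0.0
    for choices in product([True, False], repeat=len(PROB_ICS)):
        world_prob, consistent = 1.0, True
        for active, (p, ic) in zip(choices, PROB_ICS):
            world_prob *= p if active else 1.0 - p
            if active and violates(abduced, ic):
                consistent = False
        if consistent:
            total += world_prob
    return total

print(explanation_probability({"flu"}))                     # ~0.1
print(explanation_probability({"flu", "fever"}))            # ~0.6
print(explanation_probability({"flu", "fever", "doctor"}))  # ~1.0
```

In this toy run, abducing only flu leaves the first constraint violated in every world where it is active, so the explanation survives only with probability 0.1; adding fever, and then doctor, raises the probability accordingly.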
Related papers
- Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models [50.15455336684986]
We evaluate the effectiveness of LogProbs and basic prompting to measure semantic plausibility.
We find that LogProbs offers a more reliable measure of semantic plausibility than direct zero-shot prompting.
We conclude that, even in the era of prompt-based evaluations, LogProbs constitute a useful metric of semantic plausibility.
arXiv Detail & Related papers (2024-03-21T22:08:44Z) - LINC: A Neurosymbolic Approach for Logical Reasoning by Combining
Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z) - dPASP: A Comprehensive Differentiable Probabilistic Answer Set
Programming Environment For Neurosymbolic Learning and Reasoning [0.0]
We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
arXiv Detail & Related papers (2023-08-05T19:36:58Z) - "What if?" in Probabilistic Logic Programming [2.9005223064604078]
A ProbLog program is a logic program with facts that only hold with a specified probability (a minimal example of such a program is sketched after this list).
We extend this ProbLog language by the ability to answer "What if" queries.
arXiv Detail & Related papers (2023-05-24T16:35:24Z) - smProbLog: Stable Model Semantics in ProbLog for Probabilistic
Argumentation [19.46250467634934]
We show that the programs representing probabilistic argumentation frameworks do not satisfy a common assumption in probabilistic logic programming (PLP) semantics.
The second contribution is then a novel PLP semantics for programs where a choice of probabilistic facts does not uniquely determine the truth assignment of the logical atoms.
The third contribution is the implementation of a PLP system supporting this semantics: smProbLog.
arXiv Detail & Related papers (2023-04-03T10:59:25Z) - Probabilistic unifying relations for modelling epistemic and aleatoric uncertainty: semantics and automated reasoning with theorem proving [0.3441021278275805]
Probabilistic programming combines general computer programming, statistical inference, and formal semantics.
ProbURel is based on Hehner's predicative probabilistic programming, but there are several obstacles to the broader adoption of his work.
Our contributions include the formalisation of relations using Unifying Theories of Programming (UTP) and probabilities outside the brackets.
We demonstrate our work with six examples, including problems in robot localisation, classification in machine learning, and the termination of probabilistic loops.
arXiv Detail & Related papers (2023-03-16T23:36:57Z) - $\omega$PAP Spaces: Reasoning Denotationally About Higher-Order,
Recursive Probabilistic and Differentiable Programs [64.25762042361839]
$\omega$PAP spaces are spaces for reasoning denotationally about expressive differentiable and probabilistic programming languages.
Our semantics is general enough to assign meanings to most practical probabilistic and differentiable programs.
We establish the almost-everywhere differentiability of probabilistic programs' trace density functions.
arXiv Detail & Related papers (2023-02-21T12:50:05Z) - Machine Learning with Probabilistic Law Discovery: A Concise
Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z) - Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z) - Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.