Aleatoric Description Logic for Probabilistic Reasoning (Long Version)
- URL: http://arxiv.org/abs/2108.13036v1
- Date: Mon, 30 Aug 2021 07:47:36 GMT
- Title: Aleatoric Description Logic for Probabilistic Reasoning (Long Version)
- Authors: Tim French and Tom Smoker
- Abstract summary: Aleatoric description logic models uncertainty in the world as aleatoric events, determined by the roll of dice, where an agent has subjective beliefs about the bias of these dice.
This provides a subjective Bayesian description logic, where propositions and relations are assigned probabilities according to what a rational agent would bet.
Several computational problems are considered, and model-checking and consistency-checking algorithms are presented.
- Score: 0.2538209532048866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Description logics are a powerful tool for describing ontological knowledge
bases. That is, they give a factual account of the world in terms of
individuals, concepts and relations. In the presence of uncertainty, such
factual accounts are not feasible, and a subjective or epistemic approach is
required. Aleatoric description logic models uncertainty in the world as
aleatoric events, determined by the roll of dice, where an agent has subjective
beliefs about the bias of these dice. This provides a subjective Bayesian
description logic, where propositions and relations are assigned probabilities
according to what a rational agent would bet, given a configuration of possible
individuals and dice. Aleatoric description logic is shown to generalise the
description logic ALC, and can be seen to describe a probability space of
interpretations of a restriction of ALC where all roles are functions. Several
computational problems are considered, and model-checking and
consistency-checking algorithms are presented. Finally, aleatoric description logic is
shown to be able to model learning, where agents are able to condition their
beliefs on the bias of dice according to observations.
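To make the semantics concrete, here is a minimal model-checking sketch in Python. It is an illustration under simplifying assumptions, not the paper's algorithm: atomic concepts are treated as independent events with per-individual probabilities, and each role is a probability distribution over successors (mirroring the restriction of ALC where all roles are functions). All names and numbers are invented.

```python
# A minimal model checker for an aleatoric-style description logic.
# Illustrative sketch only: assumes atomic concepts are independent
# events, and treats each role as a probability distribution over
# successor individuals (the "all roles are functions" restriction).

concept_prob = {                       # P(individual : atomic concept)
    ("alice", "Tall"): 0.8,
    ("bob", "Tall"): 0.3,
}
role_dist = {                          # P(R-successor of individual is b)
    ("alice", "friend"): {"bob": 0.6, "alice": 0.4},
}

def prob(individual, concept):
    """Probability a rational agent would bet on `individual : concept`."""
    op = concept[0]
    if op == "atom":                   # atomic concept A
        return concept_prob.get((individual, concept[1]), 0.0)
    if op == "not":                    # P(not C) = 1 - P(C)
        return 1.0 - prob(individual, concept[1])
    if op == "and":                    # independence: P(C and D) = P(C) P(D)
        return prob(individual, concept[1]) * prob(individual, concept[2])
    if op == "some":                   # expected truth of C at the R-successor
        _, role, sub = concept
        successors = role_dist.get((individual, role), {})
        return sum(p * prob(succ, sub) for succ, p in successors.items())
    raise ValueError(f"unknown constructor: {op}")

# P(alice : Tall AND (some friend. Tall))
query = ("and", ("atom", "Tall"), ("some", "friend", ("atom", "Tall")))
print(prob("alice", query))            # 0.8 * (0.6*0.3 + 0.4*0.8) = 0.4
```

Under these assumptions the probability of a query concept is computed compositionally; the paper's consistency-checking problem then asks whether a set of such probability assertions is realisable by some configuration of individuals and dice.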
Related papers
- QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios [15.193544498311603]
We present QUITE, a dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships.
We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types.
Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning.
arXiv Detail & Related papers (2024-10-14T12:44:59Z)
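As a reminder of the computation QUITE's scenarios exercise, here is a minimal Bayesian update over a categorical random variable; the variables and numbers are invented, not drawn from the dataset.

```python
# Minimal Bayes-rule update over a categorical variable, the kind of
# reasoning QUITE's scenarios test. All numbers are invented.

prior = {"flu": 0.10, "cold": 0.30, "healthy": 0.60}       # P(condition)
likelihood = {"flu": 0.90, "cold": 0.40, "healthy": 0.05}  # P(fever | condition)

# Posterior P(condition | fever) by Bayes' rule.
unnormalized = {c: prior[c] * likelihood[c] for c in prior}
evidence = sum(unnormalized.values())                      # P(fever) = 0.24
posterior = {c: p / evidence for c, p in unnormalized.items()}

print(posterior)   # {'flu': 0.375, 'cold': 0.5, 'healthy': 0.125}
```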
- Explaining Explanations in Probabilistic Logic Programming [0.0]
In most approaches, the system is considered a black box, making it difficult to generate appropriate explanations.
We consider a setting where models are transparent: probabilistic logic programming (PLP), a paradigm that combines logic programming for knowledge representation and probability to model uncertainty.
We present an approach to explaining explanations based on a new query-driven inference mechanism for PLP, in which proofs are labeled with "choice expressions", a compact and easy-to-manipulate representation of sets of choices.
arXiv Detail & Related papers (2024-01-30T14:27:37Z)
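For readers unfamiliar with PLP, here is a brute-force sketch of its distribution semantics, which the choice expressions above represent compactly: a query's probability is the total weight of the worlds in which it is provable. The program and probabilities are invented.

```python
# Brute-force illustration of PLP's distribution semantics: a query's
# probability is the total weight of the worlds (truth assignments to
# probabilistic facts) in which it is provable. "Choice expressions"
# compactly represent such sets of choices; this sketch enumerates them.
from itertools import product

facts = {"burglary": 0.1, "earthquake": 0.2}   # probabilistic facts

def alarm(world):
    # Deterministic rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

names = list(facts)
query_prob = 0.0
for values in product([True, False], repeat=len(names)):
    world = dict(zip(names, values))
    weight = 1.0
    for n in names:
        weight *= facts[n] if world[n] else 1.0 - facts[n]
    if alarm(world):
        query_prob += weight

print(query_prob)   # P(alarm) = 1 - 0.9 * 0.8 = 0.28
```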
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
Here we propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop credal sentential decision diagrams, a generalisation of their probabilistic counterpart in which the local probabilities are replaced with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z)
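To illustrate the idea, here is a toy interval propagation: in such circuits the inputs of disjunctive gates are mutually exclusive (probabilities add) and conjunctive gates combine independent parts (probabilities multiply), so interval bounds are easy to push through. This naive interval arithmetic yields valid but possibly loose bounds; exact credal inference optimises over consistent parameter choices, which is what dedicated algorithms address. The circuit and numbers are invented.

```python
# Toy interval-probability propagation, in the spirit of credal
# sentential decision diagrams: point parameters of a probabilistic
# circuit are replaced with intervals (a simple kind of credal set).

def and_gate(a, b):
    """Interval product of two independent events."""
    return (a[0] * b[0], a[1] * b[1])

def or_gate(a, b):
    """Interval sum of two mutually exclusive events."""
    return (min(1.0, a[0] + b[0]), min(1.0, a[1] + b[1]))

# Leaf intervals: the agent only bounds each local probability.
x = (0.6, 0.7)   # P(X) in [0.6, 0.7]
y = (0.2, 0.4)   # P(Y) in [0.2, 0.4]

# P((X and Y) or (not X and not Y))
not_x = (1.0 - x[1], 1.0 - x[0])
not_y = (1.0 - y[1], 1.0 - y[0])
bounds = or_gate(and_gate(x, y), and_gate(not_x, not_y))
print(bounds)    # (0.3, 0.6): lower/upper probability of the output
```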
- Foundations of Reasoning with Uncertainty via Real-valued Logics [70.43924776071616]
We give a sound and strongly complete axiomatization that can be parametrized to cover essentially every real-valued logic.
Our class of sentences is very rich: each sentence describes a set of possible real values for a collection of formulas of the real-valued logic.
arXiv Detail & Related papers (2020-08-06T02:13:11Z)
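As one concrete member of the class of real-valued logics, here are the standard connectives of Łukasiewicz logic on [0, 1]; this is a textbook example, not code from the paper.

```python
# Standard connectives of Lukasiewicz logic, one of the real-valued
# logics covered by such an axiomatization. Truth values live in [0, 1].

def neg(x):          # negation
    return 1.0 - x

def conj(x, y):      # strong conjunction: max(0, x + y - 1)
    return max(0.0, x + y - 1.0)

def disj(x, y):      # strong disjunction: min(1, x + y)
    return min(1.0, x + y)

def implies(x, y):   # residuated implication: min(1, 1 - x + y)
    return min(1.0, 1.0 - x + y)

# With two-valued inputs these reduce to classical logic:
assert conj(1.0, 1.0) == 1.0 and conj(1.0, 0.0) == 0.0
print(implies(0.8, 0.6))  # 0.8: partial truth degrades gracefully
```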
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
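As a hedged sketch of how a neuron can carry logical meaning, here is one common weighted generalisation of the Łukasiewicz conjunction used as an activation; the paper's exact activation functions and training scheme differ.

```python
# Sketch of a "logical neuron" in the spirit of Logical Neural Networks:
# a weighted, clamped generalisation of Lukasiewicz conjunction. This is
# a simplification; the paper's exact activation and training differ.

def and_neuron(inputs, weights, beta=1.0):
    """Weighted real-valued AND: clamp(beta - sum_i w_i * (1 - x_i))."""
    z = beta - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return max(0.0, min(1.0, z))

# With unit weights and beta = 1 this is exactly Lukasiewicz conjunction,
# so the neuron is readable as the formula (x1 AND x2):
print(and_neuron([1.0, 1.0], [1.0, 1.0]))   # 1.0
print(and_neuron([0.9, 0.8], [1.0, 1.0]))   # 0.7 = max(0, 1 - 0.1 - 0.2)
# Down-weighting an input makes the conjunction more tolerant of it:
print(and_neuron([0.9, 0.2], [1.0, 0.25]))  # 0.7: x2 barely matters
```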