Signature-Based Abduction with Fresh Individuals and Complex Concepts for Description Logics (Extended Version)
- URL: http://arxiv.org/abs/2105.00274v1
- Date: Sat, 1 May 2021 14:55:46 GMT
- Title: Signature-Based Abduction with Fresh Individuals and Complex Concepts for Description Logics (Extended Version)
- Authors: Patrick Koopmann
- Abstract summary: ABox abduction aims at computing a hypothesis that, when added to the knowledge base, is sufficient to entail the observation.
In signature-based ABox abduction, the hypothesis is further required to use only names from a given set.
It is possible that hypotheses for a given observation only exist if we admit the use of fresh individuals and/or complex concepts built from the given signature.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a knowledge base and an observation as a set of facts, ABox abduction
aims at computing a hypothesis that, when added to the knowledge base, is
sufficient to entail the observation. In signature-based ABox abduction, the
hypothesis is further required to use only names from a given set. This form of
abduction has applications such as diagnosis, KB repair, or explaining missing
entailments. It is possible that hypotheses for a given observation only exist
if we admit the use of fresh individuals and/or complex concepts built from the
given signature, something most approaches for ABox abduction so far do not
support or only support with restrictions. In this paper, we investigate the
computational complexity of this form of abduction -- allowing either fresh
individuals, complex concepts, or both -- for various description logics, and
give size bounds on the hypotheses if they exist.
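To make the abstract's core notion concrete, here is a minimal sketch of signature-based abduction in a propositional Horn-rule analogue. This is purely illustrative and much simpler than the description-logic setting of the paper (no individuals, roles, or complex concepts); all rule and atom names are invented for the example. A hypothesis is a set of atoms drawn only from the allowed signature that, added to the knowledge base, entails the observation.

```python
# Toy propositional analogue of signature-based abduction (illustrative only;
# the paper works in description logics, which are far more expressive).
from itertools import combinations

def entails(rules, facts, goal):
    """Forward-chain Horn rules (body_atoms, head) from `facts`;
    return True if `goal` is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return goal in derived

def abduce(rules, signature, observation):
    """Return all subset-minimal hypotheses built from `signature`
    that, together with the rules, entail the observation."""
    hypotheses = []
    for size in range(len(signature) + 1):
        for hyp in combinations(sorted(signature), size):
            if entails(rules, hyp, observation) and \
               not any(set(h) <= set(hyp) for h in hypotheses):
                hypotheses.append(hyp)
    return hypotheses

# Hypothetical knowledge base: two causes of wet grass.
rules = [
    (("rain",), "wet_grass"),
    (("sprinkler_on",), "wet_grass"),
]
print(abduce(rules, {"rain", "sprinkler_on"}, "wet_grass"))
# → [('rain',), ('sprinkler_on',)]
```

Restricting `signature` (say, to `{"rain"}`) prunes the hypothesis space, which is the point of the signature-based variant; in the DL setting the paper studies, a hypothesis over the signature may only exist once fresh individuals or complex concepts are admitted.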
Related papers
- Advancing Abductive Reasoning in Knowledge Graphs through Complex Logical Hypothesis Generation [43.26412690886471]
This paper introduces the task of complex logical hypothesis generation, as an initial step towards abductive logical reasoning with Knowledge Graphs.
We find that the supervised trained generative model can generate logical hypotheses that are structurally closer to the reference hypothesis.
We introduce the Reinforcement Learning from Knowledge Graph (RLF-KG) method, which minimizes differences between observations and conclusions drawn from generated hypotheses according to the KG.
arXiv Detail & Related papers (2023-12-25T08:06:20Z)
- Log-linear Guardedness and its Implications [116.87322784046926]
Methods for erasing human-interpretable concepts from neural representations that assume linearity have been found to be tractable and useful.
This work formally defines the notion of log-linear guardedness as the inability of an adversary to predict the concept directly from the representation.
We show that, in the binary case, under certain assumptions, a downstream log-linear model cannot recover the erased concept.
arXiv Detail & Related papers (2022-10-18T17:30:02Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Connection-minimal Abduction in EL via Translation to FOL -- Technical Report [12.90382979353427]
We show how to compute a class of connection-minimal hypotheses in a sound and complete way.
Our technique is based on a translation to first-order logic, and constructs hypotheses based on prime implicates.
arXiv Detail & Related papers (2022-05-17T15:50:27Z)
- Quantification and Aggregation over Concepts of the Ontology [0.0]
We argue that in some KR applications, we want to quantify over sets of concepts formally represented by symbols in the vocabulary.
We present an extension of first-order logic to support such abstractions, and show that it allows writing expressions of knowledge that are elaboration tolerant.
arXiv Detail & Related papers (2022-02-02T07:49:23Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- BoxE: A Box Embedding Model for Knowledge Base Completion [53.57588201197374]
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models are subject to at least one of the following limitations.
BoxE embeds entities as points, and relations as a set of hyper-rectangles (or boxes).
arXiv Detail & Related papers (2020-07-13T09:40:49Z)
- Signature-Based Abduction for Expressive Description Logics -- Technical Report [20.882083414450882]
We present the first complete method solving signature-based abduction for observations expressed in the expressive description logic ALC.
The method is guaranteed to compute a finite and complete set of hypotheses, and is evaluated on a set of realistic knowledge bases.
arXiv Detail & Related papers (2020-07-01T21:06:24Z)
- Boosting Simple Learners [45.09968166110557]
We focus on two main questions: (i) Complexity: How many weak hypotheses are needed to produce an accurate hypothesis?
We design a novel boosting algorithm which circumvents a classical lower bound by Freund and Schapire ('95, '12)
We provide an affirmative answer to the second question for well-studied concept classes, including half-spaces and decision stumps.
arXiv Detail & Related papers (2020-01-31T08:34:56Z)