Connecting Proof Theory and Knowledge Representation: Sequent Calculi
and the Chase with Existential Rules
- URL: http://arxiv.org/abs/2306.02521v1
- Date: Mon, 5 Jun 2023 01:10:23 GMT
- Title: Connecting Proof Theory and Knowledge Representation: Sequent Calculi
and the Chase with Existential Rules
- Authors: Tim S. Lyon and Piotr Ostropolski-Nalewaja
- Abstract summary: We show that the chase mechanism in the context of existential rules is in essence the same as proof-search in an extension of Gentzen's sequent calculus for first-order logic.
We formally connect a central tool for establishing decidability proof-theoretically with a central decidability tool in the context of knowledge representation.
- Score: 1.8275108630751844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chase algorithms are indispensable in the domain of knowledge base querying,
which enable the extraction of implicit knowledge from a given database via
applications of rules from a given ontology. Such algorithms have proved
beneficial in identifying logical languages which admit decidable query
entailment. Within the discipline of proof theory, sequent calculi have been
used to write and design proof-search algorithms to identify decidable classes
of logics. In this paper, we show that the chase mechanism in the context of
existential rules is in essence the same as proof-search in an extension of
Gentzen's sequent calculus for first-order logic. Moreover, we show that
proof-search generates universal models of knowledge bases, a feature also
exhibited by the chase. Thus, we formally connect a central tool for
establishing decidability proof-theoretically with a central decidability tool
in the context of knowledge representation.
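The abstract describes the chase only informally. As a rough illustration (not the paper's own formalism), the sketch below runs a bounded, semi-oblivious chase over existential rules in Python; every name here (Rule, chase, fresh_null) and the encoding of facts as (predicate, arguments) tuples are invented for this example.
```python
# Rough illustration of the chase over existential rules: a bounded,
# semi-oblivious variant, not the formulation used in the paper.
# All names (Rule, chase, fresh_null) and the fact encoding are invented here.
from itertools import count, product

_nulls = count()

def fresh_null():
    """Invent a fresh labelled null for an existential variable."""
    return f"_N{next(_nulls)}"

class Rule:
    """An existential rule body -> head, e.g. person(x) -> hasParent(x, y), person(y)."""
    def __init__(self, body, head, frontier, existential):
        self.body = body                # list of (predicate, argument-tuple) atoms
        self.head = head
        self.frontier = frontier        # variables shared between body and head
        self.existential = existential  # head variables bound existentially

def homomorphisms(body, facts):
    """All substitutions mapping the body atoms onto facts in the database."""
    subs = []
    for choice in product(facts, repeat=len(body)):
        sub, ok = {}, True
        for (pred, args), (fpred, fargs) in zip(body, choice):
            if pred != fpred or len(args) != len(fargs):
                ok = False
                break
            for var, const in zip(args, fargs):
                if sub.setdefault(var, const) != const:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            subs.append(sub)
    return subs

def chase(facts, rules, max_rounds=3):
    """Apply each (rule, body match) at most once, inventing fresh nulls for
    existential variables. The chase need not terminate, hence the round bound."""
    facts, fired = set(facts), set()
    for _ in range(max_rounds):
        new = set()
        for i, rule in enumerate(rules):
            for sub in homomorphisms(rule.body, facts):
                trigger = (i, tuple(sorted((v, sub[v]) for v in rule.frontier)))
                if trigger in fired:
                    continue
                fired.add(trigger)
                ext = {v: fresh_null() for v in rule.existential}
                for pred, args in rule.head:
                    new.add((pred, tuple(ext.get(a, sub.get(a, a)) for a in args)))
        if new <= facts:
            break
        facts |= new
    return facts

# Every person has a parent who is also a person: the chase unfolds an
# ever-growing ancestor chain, i.e. a prefix of a universal model.
ancestry = Rule(body=[("person", ("x",))],
                head=[("hasParent", ("x", "y")), ("person", ("y",))],
                frontier=["x"], existential=["y"])
print(chase({("person", ("alice",))}, [ancestry]))
```
On the example rule the bounded run derives facts such as hasParent(alice, _N0) and hasParent(_N0, _N1), illustrating how the chase (and, per the paper, proof-search in the extended sequent calculus) unfolds a universal model of the knowledge base.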
Related papers
- TabVer: Tabular Fact Verification with Natural Logic [11.002475880349452]
We propose a set-theoretic interpretation of numerals and arithmetic functions in the context of natural logic.
We leverage large language models to generate arithmetic expressions by generating questions about salient parts of a claim which are answered by executing functions on tables.
In a few-shot setting on FEVEROUS, we achieve an accuracy of 71.4, outperforming both fully neural and symbolic reasoning models by 3.4 points.
arXiv Detail & Related papers (2024-11-02T00:36:34Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Implementing Dynamic Programming in Computability Logic Web [0.0]
We present a novel definition of an algorithm and its corresponding algorithm language called CoLweb.
CoLweb enforces a high-level, proof-carrying, distributed-style approach to algorithm design, for both non-distributed and distributed computing.
We refine Horn clause definitions into two kinds: blind-universally-quantified (BUQ) ones and parallel-universally-quantified (PUQ) ones.
arXiv Detail & Related papers (2023-04-04T05:33:43Z)
- Adversarial Learning to Reason in an Arbitrary Logic [5.076419064097733]
Existing approaches to learning to prove theorems focus on particular logics and datasets.
We propose Monte-Carlo simulations guided by reinforcement learning that can work in an arbitrarily specified logic.
arXiv Detail & Related papers (2022-04-06T11:25:19Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- Refining Labelled Systems for Modal and Constructive Logics with Applications [0.0]
This thesis serves as a means of transforming the semantics of a modal and/or constructive logic into an 'economical' proof system.
The refinement method connects two proof-theoretic paradigms: labelled and nested sequent calculi.
The introduced refined labelled calculi will be used to provide the first proof-search algorithms for deontic STIT logics.
arXiv Detail & Related papers (2021-07-30T08:27:15Z)
- Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Extending Automated Deduction for Commonsense Reasoning [0.0]
The paper argues that the methods and algorithms used by automated reasoners for classical first-order logic can be extended towards commonsense reasoning.
The proposed extensions mostly rely on ordinary proof trees and are devised to handle commonsense knowledge bases containing inconsistencies and default rules, taking into account topics, relevance, confidence, and similarity measures.
We claim that machine learning is best suited for the construction of commonsense knowledge bases while the extended logic-based methods would be well-suited for actually answering queries from these knowledge bases.
arXiv Detail & Related papers (2020-03-29T23:17:16Z)