Reasoning on $\textit{DL-Lite}_{\cal R}$ with Defeasibility in ASP
- URL: http://arxiv.org/abs/2106.14801v2
- Date: Wed, 30 Jun 2021 13:51:29 GMT
- Title: Reasoning on $\textit{DL-Lite}_{\cal R}$ with Defeasibility in ASP
- Authors: Loris Bozzato, Thomas Eiter, Luciano Serafini
- Abstract summary: We provide a definition for $\textit{DL-Lite}_{\cal R}$ knowledge bases with defeasible axioms and study their semantic and computational properties.
The limited form of $\textit{DL-Lite}_{\cal R}$ axioms allows us to formulate a simpler ASP encoding, where reasoning on negative information is managed by direct rules.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning on defeasible knowledge is a topic of interest in the area of
description logics, as it is related to the need to represent exceptional
instances in knowledge bases. In this direction, in our previous works we
presented a framework for representing (contextualized) OWL RL knowledge bases
with a notion of justified exceptions on defeasible axioms: reasoning in such
framework is realized by a translation into ASP programs. The resulting
reasoning process for OWL RL, however, introduces a complex encoding in order
to capture reasoning on the negative information needed for reasoning on
exceptions. In this paper, we apply the justified exception approach to
knowledge bases in $\textit{DL-Lite}_{\cal R}$, i.e., the language underlying
OWL QL. We provide a definition for $\textit{DL-Lite}_{\cal R}$ knowledge bases
with defeasible axioms and study their semantic and computational properties.
In particular, we study the effects of exceptions over unnamed individuals. The
limited form of $\textit{DL-Lite}_{\cal R}$ axioms allows us to formulate a
simpler ASP encoding, where reasoning on negative information is managed by
direct rules. The resulting materialization method gives rise to a complete
reasoning procedure for instance checking in $\textit{DL-Lite}_{\cal R}$ with
defeasible axioms. Under consideration in Theory and Practice of Logic
Programming (TPLP).
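The abstract's key idea, defeasible axioms whose exceptions are handled by direct ASP rules, can be illustrated with a minimal clingo-style sketch. This is an illustrative example using default negation, not the paper's actual encoding; the predicates (`bird`, `flies`, `neg_flies`, `exception`) and the toy axiom are hypothetical:

```prolog
% Strict axiom: Penguin ⊑ Bird.
bird(X) :- penguin(X).

% Defeasible axiom D(Bird ⊑ Flies): a bird flies
% unless an exception is justified for it.
flies(X) :- bird(X), not exception(X).

% An exception is justified when applying the defeasible
% axiom would clash with asserted negative information.
exception(X) :- bird(X), neg_flies(X).

% Facts: tweety is an exceptional instance, robin is not.
penguin(tweety).  neg_flies(tweety).
bird(robin).
```

Under the answer set semantics, `flies(robin)` is derived while `flies(tweety)` is blocked by the justified exception; the point of the paper's $\textit{DL-Lite}_{\cal R}$ setting is that such negative information can be propagated by direct rules rather than the more complex encoding needed for OWL RL.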
Related papers
- General Reasoning Requires Learning to Reason from the Get-go [19.90997698310839]
Large Language Models (LLMs) have demonstrated impressive real-world utility.
But their ability to reason adaptively and robustly remains fragile.
We propose disentangling knowledge and reasoning through three key directions.
arXiv Detail & Related papers (2025-02-26T18:51:12Z) - Pearce's Characterisation in an Epistemic Domain [0.0]
Equilibrium logic (EL) is a general-purpose nonmonotonic reasoning formalism.
Epistemic specifications (ES) are extensions of ASP-programs with subjective literals.
ES-programs are interpreted by world-views, which are essentially collections of answer-sets.
arXiv Detail & Related papers (2025-02-13T11:50:36Z) - Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus [13.276829763453433]
Large language models (LLMs) are capable of solving a wide range of tasks, yet they have struggled with reasoning.
We propose $\textbf{Additional Logic Training (ALT)}$, which aims to enhance LLMs' reasoning capabilities with program-generated logical reasoning samples.
arXiv Detail & Related papers (2024-11-19T13:31:53Z) - Generating $SROI^-$ Ontologies via Knowledge Graph Query Embedding Learning [26.905431434167536]
We propose a novel query embedding method, AConE, which explains the knowledge learned from the graph in the form of $SROI^-$ description logic axioms.
AConE achieves superior results over previous baselines with fewer parameters.
We provide comprehensive analyses showing that the capability to represent axioms positively impacts the results of query answering.
arXiv Detail & Related papers (2024-07-12T12:20:39Z) - Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference [20.057611113206324]
We study how to subvert large language models (LLMs) from following prompt-specified rules.
We prove that although LLMs can faithfully follow such rules, maliciously crafted prompts can mislead even idealized, theoretically constructed models.
arXiv Detail & Related papers (2024-06-21T19:18:16Z) - LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But, can they really "reason" over the natural language?
This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z) - Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z) - LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - LINC: A Neurosymbolic Approach for Logical Reasoning by Combining
Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z) - Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic [14.503982715625902]
We study a synthetic corpus based approach for language models (LMs) to acquire logical deductive reasoning ability.
We adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rules when combined in a multistep way.
We empirically verify that LMs trained on FLD corpora acquire more generalizable reasoning ability.
arXiv Detail & Related papers (2023-08-11T13:15:35Z) - Lattice-preserving $\mathcal{ALC}$ ontology embeddings with saturation [50.05281461410368]
An order-preserving embedding method is proposed to generate embeddings of OWL representations.
We show that our method outperforms state-of-the-art embedding methods in several knowledge base completion tasks.
arXiv Detail & Related papers (2023-05-11T22:27:51Z) - APOLLO: A Simple Approach for Adaptive Pretraining of Language Models
for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z) - Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z) - Reasoning on Multi-Relational Contextual Hierarchies via Answer Set
Programming with Algebraic Measures [13.245718532835864]
The Contextualized Knowledge Repository (CKR) framework is rooted in description logics but, on the reasoning side, links strongly to logic programs.
We present a generalization of CKR hierarchies to multiple contextual relations, along with their interpretation of defeasible axioms and preference.
We show that for a relevant fragment of CKR hierarchies with multiple contextual relations, query answering can be realized with the popular asprin framework.
arXiv Detail & Related papers (2021-08-06T13:06:45Z) - Logic-Driven Context Extension and Data Augmentation for Logical
Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves the state-of-the-art performance, and both logic-driven context extension framework and data augmentation algorithm can help improve the accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.