Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
- URL: http://arxiv.org/abs/2306.09138v4
- Date: Tue, 21 Jan 2025 09:56:29 GMT
- Title: Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
- Authors: Riccardo Zese, Evelina Lamma, Fabrizio Riguzzi
- Abstract summary: We exploit an existing probabilistic semantics called DISPONTE to overcome this problem.
We implement our approach in the reasoners TRILL and BUNDLE and empirically test the validity of our proposal.
- Score: 0.3277163122167433
- Abstract: The necessity to manage inconsistency in Description Logics Knowledge Bases (KBs) has come to the fore with the increasing importance gained by the Semantic Web, where information comes from different sources that constantly change their content and may contain contradictory descriptions when considered either alone or together. Classical reasoning algorithms do not handle inconsistent KBs, forcing the debugging of the KB in order to remove the inconsistency. In this paper, we exploit an existing probabilistic semantics called DISPONTE to overcome this problem and allow queries even over inconsistent KBs. We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal. Moreover, we formally compare the presented approach with the repair semantics, one of the most established semantics for DL reasoning tasks over inconsistent KBs.
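To make the semantics concrete, here is a minimal sketch of how a DISPONTE-style query probability can be computed by enumerating worlds: each probabilistic axiom is kept or dropped independently with its annotated probability, and the query's probability is the total mass of the worlds that entail it. The sketch uses a propositional stand-in for DL entailment; `query_probability` and the `entails` callback are illustrative names, not the actual TRILL or BUNDLE APIs.

```python
from itertools import product

def query_probability(prob_axioms, entails):
    """prob_axioms: list of (axiom, p) pairs; entails: set(axioms) -> bool."""
    total = 0.0
    # Each world keeps or drops every probabilistic axiom independently.
    for choices in product([True, False], repeat=len(prob_axioms)):
        world = {ax for (ax, _), keep in zip(prob_axioms, choices) if keep}
        weight = 1.0
        for (_, p), keep in zip(prob_axioms, choices):
            weight *= p if keep else 1.0 - p
        if entails(world):  # stand-in for a call to a DL reasoner
            total += weight
    return total

# Toy KB: the query follows exactly from axiom "a" (annotated with 0.7).
axioms = [("a", 0.7), ("b", 0.4)]
print(query_probability(axioms, lambda world: "a" in world))  # 0.7
```

Real reasoners avoid this exponential enumeration (e.g., via knowledge compilation), but the enumeration makes explicit why a query can still receive a meaningful probability even when the KB as a whole is inconsistent: inconsistent combinations of axioms simply occupy some of the worlds.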
Related papers
- Subjective Logic Encodings [1.930852251165745]
Data perspectivism seeks to leverage inter-annotator disagreement when learning models.
Subjective Logic Encodings (SLEs) are a framework for constructing classification targets that explicitly encode annotations as opinions of the annotators.
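As a rough illustration of the underlying idea (not the paper's exact SLE construction), a binomial subjective-logic opinion can be derived from annotator votes using Jøsang's standard mapping, with the conventional prior weight of 2:

```python
def opinion_from_votes(positive, negative, prior_weight=2.0):
    """Map raw annotator votes to a (belief, disbelief, uncertainty) opinion."""
    total = positive + negative + prior_weight
    belief = positive / total
    disbelief = negative / total
    uncertainty = prior_weight / total  # shrinks as more annotators weigh in
    return belief, disbelief, uncertainty

# Three annotators label an example "toxic", one labels it "not toxic":
print(opinion_from_votes(3, 1))  # (0.5, 0.166..., 0.333...)
```

The point of such targets is that disagreement is preserved as explicit uncertainty rather than being collapsed into a single majority label.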
arXiv Detail & Related papers (2025-02-17T15:14:10Z)
- Dialogue-based Explanations for Logical Reasoning using Structured Argumentation [0.06138671548064355]
We identify structural weaknesses of the state-of-the-art and propose a generic argumentation-based approach to address these problems.
Our work provides dialogue models as dialectical proof procedures to compute and explain a query answer.
This allows us to construct dialectical proof trees as explanations, which are more expressive and arguably more intuitive than existing explanation formalisms.
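A hedged sketch of what a dialectical proof procedure can look like over an abstract attack relation; this simplified game (proponent defends a claim, opponent moves attackers) only illustrates the general shape, not the paper's specific dialogue models:

```python
def proponent_wins(arg, attacks, depth=10):
    """True if every attack the opponent can move is countered in turn."""
    if depth == 0:
        return False  # depth cut-off keeps the sketch terminating on cycles
    attackers = [b for (b, target) in attacks if target == arg]
    return all(
        any(proponent_wins(counter, attacks, depth - 1)
            for (counter, target) in attacks if target == b)
        for b in attackers
    )

# b attacks a, c attacks b: the proponent defends a by moving c against b.
print(proponent_wins("a", [("b", "a"), ("c", "b")]))  # True
```

The trace of such a game is exactly the kind of dialectical proof tree the summary refers to: each branch records an attack and the counter-move that answers it.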
arXiv Detail & Related papers (2025-02-16T22:26:18Z)
- Few-shot Policy (de)composition in Conversational Question Answering [54.259440408606515]
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting.
We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies.
We apply this approach to ShARC, a popular policy compliance detection (PCD) and conversational machine reading benchmark, and show competitive performance with no task-specific finetuning.
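The following toy sketch illustrates the general decompose-then-evaluate pattern the summary describes; the sub-question truth values are hard-coded here, whereas the paper obtains them from an LLM in a few-shot setting, and all names are illustrative:

```python
def check_compliance(sub_answers, policy):
    """Evaluate a policy's logic statement over answered sub-questions."""
    return policy(sub_answers)

# Sub-questions extracted from the policy, with truth values assigned
# from the conversational context (an LLM's job in the actual approach).
answers = {"resident": True, "over_65": False, "disabled": True}

# Hypothetical policy: eligible if resident AND (over 65 OR disabled).
policy = lambda a: a["resident"] and (a["over_65"] or a["disabled"])
print(check_compliance(answers, policy))  # True
```

Keeping the final decision in an explicit logic statement is what makes the compliance judgment auditable, in contrast to an end-to-end classifier.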
arXiv Detail & Related papers (2025-01-20T08:40:15Z)
- New Rules for Causal Identification with Background Knowledge [59.733125324672656]
We propose two novel rules for incorporating background knowledge (BK), which offer a new perspective on this open problem.
We show that these rules are applicable in some typical causality tasks, such as determining the set of possible causal effects with observational data.
arXiv Detail & Related papers (2024-07-21T20:21:21Z)
- Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization [56.94741578760294]
We propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary.
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact.
arXiv Detail & Related papers (2023-05-23T22:11:47Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
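A minimal sketch of this consistency check under toy assumptions; `model`, `edit`, and `faithful` are illustrative stand-ins rather than the paper's API:

```python
def faithful(model, premise, hypothesis, edit, implied_label):
    """Does the model's prediction on the edited (counterfactual) hypothesis
    match the label implied by the explanation's logic?"""
    counterfactual = edit(hypothesis)  # e.g. negate a stated predicate
    return model(premise, counterfactual) == implied_label

# Toy NLI "model": entailment iff the hypothesis appears in the premise.
model = lambda p, h: "entailment" if h in p else "contradiction"
print(faithful(model,
               "A man is sleeping",
               "A man is sleeping",
               lambda h: h.replace("sleeping", "not sleeping"),
               "contradiction"))  # True
```

If the model's prediction on the counterfactual disagrees with what the explanation's logic implies, the explanation is flagged as unfaithful.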
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z)
- Revisiting the Prepositional-Phrase Attachment Problem Using Explicit Commonsense Knowledge [1.0312968200748118]
We argue that explicit commonsense knowledge bases can provide an essential ingredient for making good attachment decisions.
Our results suggest that the commonsense knowledge-based approach can provide the best of both worlds, integrating rule-based and statistical techniques.
arXiv Detail & Related papers (2021-02-01T15:48:36Z)
- Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations [0.9023847175654603]
Braid is a novel logical reasoner that supports probabilistic rules.
We describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework.
We evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results.
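As a rough illustration of reasoning with probabilistic rules (one common scheme, not Braid's actual algorithms), rule confidences can be propagated through forward chaining; all predicate names here are hypothetical:

```python
def forward_chain(facts, rules, rounds=3):
    """facts: {atom: prob}; rules: [(body_atoms, head_atom, confidence)]."""
    for _ in range(rounds):
        for body, head, conf in rules:
            if all(atom in facts for atom in body):
                p = conf
                for atom in body:
                    p *= facts[atom]  # noisy conjunction of body beliefs
                facts[head] = max(facts.get(head, 0.0), p)
    return facts

# Hypothetical rule: went_to_bed(X) -> tired(X) with confidence 0.8.
facts = {"went_to_bed(john)": 0.9}
rules = [(["went_to_bed(john)"], "tired(john)", 0.8)]
print(forward_chain(facts, rules)["tired(john)"])  # ~0.72
```

Attaching strengths to rules is what lets such a reasoner draw the soft, commonsense inferences that tasks like Story Cloze require while still producing a logical derivation as the explanation.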
arXiv Detail & Related papers (2020-11-26T15:36:06Z)
- Faithful Embeddings for Knowledge Base Queries [97.5904298152163]
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer.
In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
arXiv Detail & Related papers (2020-04-07T19:25:16Z)
- Correcting Knowledge Base Assertions [26.420502742339053]
The usefulness and usability of knowledge bases (KBs) is often limited by quality issues.
One common issue is the presence of erroneous assertions, often caused by lexical or semantic confusion.
We study the problem of correcting such assertions, and present a general correction framework which combines lexical matching, semantic embedding, soft constraint mining and semantic consistency checking.
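A hedged sketch of that overall pipeline, with illustrative stand-ins for the semantic-embedding and consistency-checking components; only the lexical-matching step uses a real library call (`difflib.SequenceMatcher`):

```python
from difflib import SequenceMatcher

def correct_assertion(bad_value, candidates, semantic_sim, is_consistent,
                      w_lex=0.5, w_sem=0.5):
    """Rank candidate replacements, keep the best one that stays consistent."""
    scored = sorted(
        candidates,
        key=lambda c: (w_lex * SequenceMatcher(None, bad_value, c).ratio()
                       + w_sem * semantic_sim(bad_value, c)),
        reverse=True,
    )
    return next((c for c in scored if is_consistent(c)), None)

# Toy usage: repair a misspelled object in an assertion.
print(correct_assertion("Lodnon", ["London", "Paris"],
                        semantic_sim=lambda a, b: 0.0,   # embedding stand-in
                        is_consistent=lambda c: True))   # checker stand-in
# -> London
```

The consistency check at the end plays the same gatekeeping role as in the inconsistency-handling work above: a correction is only accepted if it does not introduce a new contradiction into the KB.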
arXiv Detail & Related papers (2020-01-19T23:03:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.