Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
- URL: http://arxiv.org/abs/2306.09138v3
- Date: Tue, 10 Sep 2024 15:58:07 GMT
- Title: Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
- Authors: Riccardo Zese, Evelina Lamma, Fabrizio Riguzzi
- Abstract summary: We exploit an existing probabilistic semantics called DISPONTE to overcome this problem.
We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The necessity to manage inconsistency in Description Logics Knowledge Bases (KBs) has come to the fore with the increasing importance gained by the Semantic Web, where information comes from different sources that constantly change their content and may contain contradictory descriptions when considered either alone or together. Classical reasoning algorithms do not handle inconsistent KBs, forcing the debugging of the KB in order to remove the inconsistency. In this paper, we exploit an existing probabilistic semantics called DISPONTE to overcome this problem and allow queries even over inconsistent KBs. We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal. Moreover, we formally compare the presented approach to the repair semantics, one of the most established semantics for DL reasoning tasks.
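To make the abstract's idea concrete, here is a minimal, purely illustrative sketch of a DISPONTE-style computation: each probabilistic axiom carries an independent probability, a "world" is a subset of the axioms, and the probability of a query is the sum of the probabilities of the worlds that entail it. The propositional facts and the membership-based entailment check below are hypothetical stand-ins for a real DL reasoner such as TRILL or BUNDLE; this is not the paper's actual implementation.

```python
from itertools import product

def query_probability(prob_axioms, entails, query):
    """Sum P(world) over all worlds whose axiom set entails the query.

    prob_axioms: dict mapping each axiom to its independent probability.
    entails: function (set_of_axioms, query) -> bool, standing in for
             full DL entailment checking.
    """
    axioms = list(prob_axioms.items())
    total = 0.0
    # Enumerate every world: each axiom is either kept or dropped.
    for choices in product([True, False], repeat=len(axioms)):
        world = {ax for (ax, _p), keep in zip(axioms, choices) if keep}
        # P(world) is the product of p_i for kept axioms and (1 - p_i)
        # for dropped ones, by axiom independence.
        p = 1.0
        for (_ax, prob), keep in zip(axioms, choices):
            p *= prob if keep else 1.0 - prob
        if entails(world, query):
            total += p
    return total

# Hypothetical toy KB: two probabilistic assertions about "tweety".
prob_axioms = {"bird(tweety)": 0.9, "penguin(tweety)": 0.4}
entails = lambda world, q: q in world  # trivial stand-in for DL entailment
print(round(query_probability(prob_axioms, entails, "bird(tweety)"), 6))  # 0.9
```

Note the exponential world enumeration: real DISPONTE reasoners avoid it with knowledge compilation, but the toy version suffices to show why inconsistency stops being fatal, as a query's probability is accumulated only over the worlds that entail it.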
Related papers
- New Rules for Causal Identification with Background Knowledge
We propose two novel rules for incorporating BK, which offer a new perspective to the open problem.
We show that these rules are applicable in some typical causality tasks, such as determining the set of possible causal effects with observational data.
arXiv Detail & Related papers (2024-07-21T20:21:21Z) - From Chaos to Clarity: Claim Normalization to Empower Fact-Checking
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
arXiv Detail & Related papers (2023-10-22T16:07:06Z) - Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems
We introduce a methodology resembling AFT that can utilize previously computed upper bounds to capture semantics more precisely.
We demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and negation as failure) knowledge bases by extending the state-of-the-art approximator.
arXiv Detail & Related papers (2023-07-21T01:08:15Z) - Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization
We propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary.
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact.
arXiv Detail & Related papers (2023-05-23T22:11:47Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the logic expressed in the explanation.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
arXiv Detail & Related papers (2022-05-23T07:51:15Z) - Revisiting the Prepositional-Phrase Attachment Problem Using Explicit Commonsense Knowledge
We argue that explicit commonsense knowledge bases can provide an essential ingredient for making good attachment decisions.
Our results suggest that the commonsense knowledge-based approach can provide the best of both worlds, integrating rule-based and statistical techniques.
arXiv Detail & Related papers (2021-02-01T15:48:36Z) - Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical Explanations
Braid is a novel logical reasoner that supports probabilistic rules.
We describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework.
We evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results.
arXiv Detail & Related papers (2020-11-26T15:36:06Z) - Reasoning with Contextual Knowledge and Influence Diagrams
Influence diagrams (IDs) are well-known formalisms extending Bayesian networks to model decision situations under uncertainty.
We complement IDs with the light-weight description logic (DL) EL to overcome such limitations.
arXiv Detail & Related papers (2020-07-01T15:57:48Z) - Faithful Embeddings for Knowledge Base Queries
The deductive closure of an ideal knowledge base (KB) contains exactly the logical queries that the KB can answer. In practice, KBs are both incomplete and over-specified, failing to answer some queries that have real-world answers.
We show that inserting this new QE module into a neural question-answering system leads to substantial improvements over the state-of-the-art.
arXiv Detail & Related papers (2020-04-07T19:25:16Z) - Correcting Knowledge Base Assertions
The usefulness and usability of knowledge bases (KBs) is often limited by quality issues.
One common issue is the presence of erroneous assertions, often caused by lexical or semantic confusion.
We study the problem of correcting such assertions, and present a general correction framework which combines lexical matching, semantic embedding, soft constraint mining and semantic consistency checking.
arXiv Detail & Related papers (2020-01-19T23:03:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.