On Expansion and Contraction of DL-Lite Knowledge Bases
- URL: http://arxiv.org/abs/2001.09365v1
- Date: Sat, 25 Jan 2020 21:58:32 GMT
- Title: On Expansion and Contraction of DL-Lite Knowledge Bases
- Authors: Dmitriy Zheleznyakov, Evgeny Kharlamov, Werner Nutt, Diego Calvanese
- Abstract summary: We investigate knowledge expansion and contraction for knowledge bases expressed in DL-Lite.
We show that well-known formula-based approaches are not appropriate for DL-Lite expansion and contraction.
We propose a novel formula-based approach that respects our principles and for which evolution is expressible in DL-Lite.
- Score: 9.168045898881292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge bases (KBs) are not static entities: new information constantly
appears and some of the previous knowledge becomes obsolete. In order to
reflect this evolution of knowledge, KBs should be expanded with the new
knowledge and contracted from the obsolete one. This problem is well-studied
for propositional but much less for first-order KBs. In this work we
investigate knowledge expansion and contraction for KBs expressed in DL-Lite, a
family of description logics (DLs) that underlie the tractable fragment OWL 2
QL of the Web Ontology Language OWL 2. We start with a novel knowledge
evolution framework and natural postulates that evolution should respect, and
compare our postulates to the well-established AGM postulates. We then review
well-known model-based and formula-based approaches to expansion and contraction for
propositional theories and show how they can be adapted to the case of DL-Lite.
In particular, we show intrinsic limitations of model-based approaches: besides
the fact that some of them do not respect the postulates we have established,
they ignore the structural properties of KBs. This leads to undesired
properties of evolution results: evolution of DL-Lite KBs cannot be captured in
DL-Lite. Moreover, we show that well-known formula-based approaches are also
not appropriate for DL-Lite expansion and contraction: they either have high
computational complexity, or they produce logical theories that cannot be
expressed in DL-Lite. Thus, we propose a novel formula-based approach that
respects our principles and for which evolution is expressible in DL-Lite. For
this approach we also propose polynomial-time deterministic algorithms to
compute the evolution of DL-Lite KBs when evolution affects only factual data.
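As a rough illustration of what evolution over factual data involves, the following minimal Python sketch expands a toy ABox with new assertions and retracts exactly the old assertions that contradict them under the TBox. This is a sketch under strong assumptions, not the paper's algorithm: the TBox is restricted to positive inclusions and disjointness axioms over atomic concepts, and all names in the example are invented.
```python
# A minimal sketch of ABox expansion for a DL-Lite-style KB. Illustration only:
# this is NOT the paper's algorithm. Assumptions: the TBox contains positive
# inclusions A ⊑ B and disjointness axioms A ⊑ ¬B over atomic concepts only,
# and ABox assertions are (concept, individual) pairs.

def closure(assertions, inclusions):
    """Saturate assertions under positive inclusions A ⊑ B."""
    facts = set(assertions)
    changed = True
    while changed:
        changed = False
        for (a, b) in inclusions:
            for (c, x) in list(facts):
                if c == a and (b, x) not in facts:
                    facts.add((b, x))
                    changed = True
    return facts

def conflicts(fact, old_facts, inclusions, disjoint):
    """Old assertions whose entailments contradict `fact` via disjointness."""
    new_cl = closure({fact}, inclusions)
    bad = set()
    for old in old_facts:
        for (a, x) in closure({old}, inclusions):
            for (b, y) in new_cl:
                if x == y and ((a, b) in disjoint or (b, a) in disjoint):
                    bad.add(old)
    return bad

def expand(abox, new_facts, inclusions, disjoint):
    """Keep the new facts; drop exactly the old assertions that contradict
    them. Deterministic and polynomial in the size of the KB."""
    surviving = set(abox)
    for fact in new_facts:
        surviving -= conflicts(fact, surviving, inclusions, disjoint)
    return surviving | set(new_facts)

# Toy example: PhDStudent ⊑ Student, and Professor is disjoint with Student.
inclusions = {("PhDStudent", "Student")}
disjoint = {("Professor", "Student")}
abox = {("PhDStudent", "mary")}
print(expand(abox, {("Professor", "mary")}, inclusions, disjoint))
# -> {('Professor', 'mary')}: PhDStudent(mary) entails Student(mary), which
#    contradicts the new fact Professor(mary), so it is retracted.
```
The deterministic, polynomial-time flavor matches the abstract's claim only in spirit; the actual algorithms handle the full DL-Lite syntax.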
Related papers
- Large Language Models as Reliable Knowledge Bases? [60.25969380388974]
Large Language Models (LLMs) can be viewed as potential knowledge bases (KBs).
This study defines criteria that a reliable LLM-as-KB should meet, focusing on factuality and consistency.
Strategies like in-context learning (ICL) and fine-tuning are unsuccessful at making LLMs better KBs.
arXiv Detail & Related papers (2024-07-18T15:20:18Z)
- Knowledge Verification to Nip Hallucination in the Bud [69.79051730580014]
We demonstrate the feasibility of mitigating hallucinations by verifying and minimizing the inconsistency between external knowledge present in the alignment data and the intrinsic knowledge embedded within foundation LLMs.
We propose a novel approach called Knowledge Consistent Alignment (KCA), which employs a well-aligned LLM to automatically formulate assessments based on external knowledge.
We demonstrate the superior efficacy of KCA in reducing hallucinations across six benchmarks, utilizing foundation LLMs of varying backbones and scales.
arXiv Detail & Related papers (2024-01-19T15:39:49Z)
- Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding [24.315130086787374]
We propose a knowledge-prompting-based PLM framework, KP-PLM.
This framework can be flexibly combined with existing mainstream PLMs.
To further leverage the factual knowledge from these prompts, we propose two novel knowledge-aware self-supervised tasks.
arXiv Detail & Related papers (2022-10-16T13:36:57Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- A Review on Language Models as Knowledge Bases [55.035030134703995]
Recently, there has been a surge of interest in the NLP community in the use of pretrained Language Models (LMs) as Knowledge Bases (KBs).
arXiv Detail & Related papers (2022-04-12T18:35:23Z)
- EvoLearner: Learning Description Logics with Evolutionary Algorithms [2.0096667731426976]
Classifying nodes in knowledge graphs is an important task, e.g., predicting missing types of entities, predicting which molecules cause cancer, or predicting which drugs are promising treatment candidates.
We propose EvoLearner - an evolutionary approach to learning description logic concepts from positive and negative examples (a toy sketch in this spirit appears after this entry).
arXiv Detail & Related papers (2021-11-08T23:47:39Z)
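As a side note on the EvoLearner entry above, here is a toy sketch of evolutionary concept learning. It follows the entry only in spirit: the actual system evolves OWL class expressions with specialized initialization and operators, while this sketch evolves plain conjunctions of atomic concepts over invented data.
```python
# A toy sketch of evolutionary concept learning, only in the spirit of
# EvoLearner. Here a "concept" is a conjunction of atomic concepts,
# encoded as a Python set; all data below is made up.
import random

individuals = {            # hypothetical instance data
    "a": {"Person", "Smoker"}, "b": {"Person"},
    "c": {"Person", "Smoker", "Senior"}, "d": {"Dog"},
}
positives, negatives = {"a", "c"}, {"b", "d"}
atoms = ["Person", "Smoker", "Senior", "Dog"]

def covers(concept, ind):
    """An individual satisfies a conjunction if it has every conjunct."""
    return concept <= individuals[ind]

def fitness(concept):
    """Accuracy on the labeled examples (EvoLearner uses richer measures)."""
    tp = sum(covers(concept, i) for i in positives)
    tn = sum(not covers(concept, i) for i in negatives)
    return (tp + tn) / (len(positives) + len(negatives))

def mutate(concept):
    """Toggle one randomly chosen conjunct."""
    return concept ^ {random.choice(atoms)}

random.seed(0)
population = [set() for _ in range(8)]
for _ in range(30):                    # mutate, then keep the fittest half
    population += [mutate(c) for c in population]
    population.sort(key=fitness, reverse=True)
    population = population[:8]
print(population[0], fitness(population[0]))  # e.g. {'Smoker'}, accuracy 1.0
```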
- Language Models As or For Knowledge Bases [30.089955948497405]
We identify strengths and limitations of pre-trained language models (LMs) and explicit knowledge bases (KBs).
We argue that latent LMs are not suitable as a substitute for explicit KBs, but could play a major role for augmenting and curating KBs.
arXiv Detail & Related papers (2021-10-10T20:00:09Z)
- Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference [22.648536283569747]
We propose models that leverage structured knowledge in different components of pre-trained models.
Our results show that the proposed models perform better than previous BERT-based state-of-the-art models.
arXiv Detail & Related papers (2021-09-08T21:28:12Z)
- Reasoning Over Virtual Knowledge Bases With Open Predicate Relations [85.19305347984515]
We present the Open Predicate Query Language (OPQL), a method for constructing a virtual Knowledge Base (VKB) trained entirely from text.
We demonstrate that OPQL outperforms prior VKB methods on two different KB reasoning tasks.
arXiv Detail & Related papers (2021-02-14T01:29:54Z)
- BoxE: A Box Embedding Model for Knowledge Base Completion [53.57588201197374]
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models are subject to various limitations in this setting.
BoxE embeds entities as points and relations as sets of hyper-rectangles (or boxes); a simplified scoring sketch appears after this entry.
arXiv Detail & Related papers (2020-07-13T09:40:49Z)
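As a side note on the BoxE entry above, a minimal sketch of box-based scoring for a binary fact: a relation defines one axis-aligned box per argument position, and a fact is plausible when each entity's point falls inside its box. This simplifies the published model, which additionally gives every entity a translational "bump"; all vectors below are made-up toy values.
```python
# A simplified BoxE-style scorer for a binary fact r(h, t). Illustration only:
# the published model also uses per-entity translational "bumps" and a more
# refined distance function.
import numpy as np

def dist_to_box(point, lower, upper):
    """L1 distance from a point to the axis-aligned box [lower, upper];
    zero iff the point lies inside the box."""
    below = np.maximum(lower - point, 0.0)
    above = np.maximum(point - upper, 0.0)
    return float(np.sum(below + above))

def score(head, tail, rel_boxes):
    """A fact holds to the degree that each argument's point falls inside the
    box the relation defines for that position (lower score = better)."""
    (h_lo, h_hi), (t_lo, t_hi) = rel_boxes
    return dist_to_box(head, h_lo, h_hi) + dist_to_box(tail, t_lo, t_hi)

# Hypothetical 2-dimensional embeddings:
alice = np.array([0.2, 0.3])
acme  = np.array([0.8, 0.7])
works_for = ((np.array([0.0, 0.0]), np.array([0.5, 0.5])),  # box for heads
             (np.array([0.5, 0.5]), np.array([1.0, 1.0])))  # box for tails
print(score(alice, acme, works_for))  # 0.0: both points lie inside their boxes
```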
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.