Frontiers and Exact Learning of ELI Queries under DL-Lite Ontologies
- URL: http://arxiv.org/abs/2204.14172v1
- Date: Fri, 29 Apr 2022 15:56:45 GMT
- Title: Frontiers and Exact Learning of ELI Queries under DL-Lite Ontologies
- Authors: Maurice Funk, Jean Christoph Jung and Carsten Lutz
- Abstract summary: We study ELI queries (ELIQs) in the presence of ontologies formulated in the description logic DL-Lite.
For the dialect DL-LiteH, we show that ELIQs have a frontier (set of least general generalizations) that is of polynomial size and can be computed in polynomial time.
In the dialect DL-LiteF, in contrast, frontiers may be infinite.
- Score: 21.18670404741191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study ELI queries (ELIQs) in the presence of ontologies formulated in the
description logic DL-Lite. For the dialect DL-LiteH, we show that ELIQs have a
frontier (set of least general generalizations) that is of polynomial size and
can be computed in polynomial time. In the dialect DL-LiteF, in contrast,
frontiers may be infinite. We identify a natural syntactic restriction that
enables the same positive results as for DL-LiteH. We use our results on
frontiers to show that ELIQs are learnable in polynomial time in the presence
of a DL-LiteH / restricted DL-LiteF ontology in Angluin's framework of exact
learning with only membership queries.
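The parenthetical gloss of a frontier can be made precise. As a reconstruction from the abstract's wording (the containment notation below is assumed, not quoted from the paper), write q ⊆_O q' when q' is at least as general as q under the ontology O; a frontier then collects the least general strict generalizations of q:

```latex
% Reconstruction of "frontier (set of least general generalizations)";
% the notation \subseteq_{\mathcal{O}} is an assumption, not quoted.
F \text{ is a frontier of } q \text{ w.r.t.\ } \mathcal{O} \iff
  \big(\forall q_F \in F:\; q \subsetneq_{\mathcal{O}} q_F\big) \wedge
  \big(\forall q' \text{ with } q \subsetneq_{\mathcal{O}} q'\;
       \exists q_F \in F:\; q_F \subseteq_{\mathcal{O}} q'\big)
```

This property is what connects frontiers to learning: while the hypothesis is strictly less general than the hidden target, some frontier element of the hypothesis must still be generalized by the target, and a membership query can detect which one. Below is a minimal sketch of that generalize-along-the-frontier loop, assuming abstract `frontier` and `member` oracles; all identifiers are hypothetical illustrations, not the paper's algorithm.

```python
# A minimal sketch of a frontier-driven exact-learning loop with membership
# queries only; the oracle names and modeling are assumptions, not the paper's.
from typing import Callable, Iterable, TypeVar

Q = TypeVar("Q")

def learn_by_generalization(
    start: Q,
    frontier: Callable[[Q], Iterable[Q]],
    member: Callable[[Q], bool],
) -> Q:
    """Climb from `start` (assumed to be generalized by the hidden target)
    up to the target itself.

    frontier(q) yields the least general strict generalizations of q;
    member(q) is a membership query: is the hidden target still at least
    as general as q? Once no frontier element passes the test, the frontier
    property forces the hypothesis to be equivalent to the target.
    """
    hypothesis = start
    changed = True
    while changed:
        changed = False
        for candidate in frontier(hypothesis):
            if member(candidate):
                # The target generalizes this strictly more general
                # candidate, so the hypothesis can safely move up to it.
                hypothesis = candidate
                changed = True
                break
    return hypothesis

# Toy illustration without an ontology: a Boolean conjunctive query is a
# frozenset of atoms, dropping one atom strictly generalizes it, and the
# one-atom drops form a frontier in the subset order.
target = frozenset({"A(x)", "r(x,y)"})
start = frozenset({"A(x)", "B(x)", "r(x,y)", "s(y,z)"})
learned = learn_by_generalization(
    start,
    frontier=lambda q: (q - {atom} for atom in q),
    member=lambda q: target <= q,  # fewer atoms = more general
)
assert learned == target
```

The toy `member` oracle stands in for Angluin-style membership queries to the hidden target; in the paper's setting such a query would be answered on a data example derived from the candidate, and the polynomial bound on frontier size for DL-LiteH is one ingredient in keeping the overall loop polynomial.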
Related papers
- Understanding and Mitigating Language Confusion in LLMs [76.96033035093204]
We evaluate 15 typologically diverse languages with existing and newly-created English and multilingual prompts.
We find that Llama Instruct and Mistral models exhibit high degrees of language confusion.
We find that language confusion can be partially mitigated via few-shot prompting, multilingual SFT and preference tuning.
arXiv Detail & Related papers (2024-06-28T17:03:51Z)
- Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals [53.273592543786705]
Large language models (LLMs) have achieved great success, but their occasional content fabrication, or hallucination, limits their practical application.
We propose CoKE, which first probes LLMs' knowledge boundary via internal confidence given a set of questions, and then leverages the probing results to elicit the expression of the knowledge boundary.
arXiv Detail & Related papers (2024-06-16T10:07:20Z)
- Prompting Large Language Models with Knowledge Graphs for Question Answering Involving Long-tail Facts [50.06633829833144]
Large Language Models (LLMs) are effective in performing various NLP tasks, but struggle to handle tasks that require extensive, real-world knowledge.
We propose a benchmark that requires knowledge of long-tail facts for answering the involved questions.
Our experiments show that LLMs alone struggle with answering these questions, especially when the long-tail level is high or rich knowledge is required.
arXiv Detail & Related papers (2024-05-10T15:10:20Z)
- How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z)
- AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations [52.43593893122206]
AlignedCoT is an in-context learning technique for prompting Large Language Models.
It achieves consistent and correct step-wise prompts in zero-shot scenarios.
We conduct experiments on mathematical reasoning and commonsense reasoning.
arXiv Detail & Related papers (2023-11-22T17:24:21Z)
- Do Large Language Models Know about Facts? [60.501902866946]
Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks.
We aim to evaluate the extent and scope of factual knowledge within LLMs by designing the benchmark Pinocchio.
Pinocchio contains 20K diverse factual questions that span different sources, timelines, domains, regions, and languages.
arXiv Detail & Related papers (2023-10-08T14:26:55Z)
- Querying Circumscribed Description Logic Knowledge Bases [9.526604375441073]
Circumscription is one of the main approaches for defining non-monotonic description logics.
We prove decidability of (U)CQ evaluation on circumscribed DL KBs.
We also study the much simpler atomic queries (AQs).
arXiv Detail & Related papers (2023-06-07T15:50:15Z)
- Expressivity of Planning with Horn Description Logic Ontologies (Technical Report) [12.448670165713652]
We address open-world state constraints formalized by planning over a description logic (DL) ontology.
We propose a novel compilation scheme into standard PDDL with derived predicates.
We show that our approach can outperform previous work on existing benchmarks for planning with DL.
arXiv Detail & Related papers (2022-03-17T14:50:06Z)
- Actively Learning Concepts and Conjunctive Queries under ELr-Ontologies [22.218000867486726]
We show that EL-concepts are not query learnable in the presence of ELI-ontologies.
arXiv Detail & Related papers (2021-05-18T07:45:37Z)
- When is Ontology-Mediated Querying Efficient? [10.971122842236024]
We study the evaluation of ontology-mediated queries over relational databases.
We provide a characterization of the classes of OMQs that are tractable in combined complexity.
We also study the complexity of deciding whether a given OMQ is equivalent to an OMQ of bounded tree width.
arXiv Detail & Related papers (2020-03-17T16:32:00Z)