Actively Learning Concepts and Conjunctive Queries under ELr-Ontologies
- URL: http://arxiv.org/abs/2105.08326v2
- Date: Wed, 19 May 2021 11:36:06 GMT
- Title: Actively Learning Concepts and Conjunctive Queries under ELr-Ontologies
- Authors: Maurice Funk, Jean Christoph Jung, Carsten Lutz
- Abstract summary: We show that EL-concepts are not polynomial query learnable in the presence of ELI-ontologies.
- Score: 22.218000867486726
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of learning a concept or a query in the presence of an
ontology formulated in the description logic ELr, in Angluin's framework of
active learning that allows the learning algorithm to interactively query an
oracle (such as a domain expert). We show that the following can be learned in
polynomial time: (1) EL-concepts, (2) symmetry-free ELI-concepts, and (3)
conjunctive queries (CQs) that are chordal, symmetry-free, and of bounded
arity. In all cases, the learner can pose to the oracle membership queries
based on ABoxes and equivalence queries that ask whether a given concept/query
from the considered class is equivalent to the target. The restriction to
bounded arity in (3) can be removed when we admit unrestricted CQs in
equivalence queries. We also show that EL-concepts are not polynomial query
learnable in the presence of ELI-ontologies.
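The oracle interaction described in the abstract follows Angluin's exact-learning protocol: the learner alternates membership queries and equivalence queries until the oracle confirms the hypothesis. The following toy sketch illustrates only that protocol on a plain finite set; it is not the paper's algorithm for EL-concepts under ELr-ontologies, and the oracle functions are hypothetical stand-ins for a domain expert.

```python
# Toy illustration of Angluin's exact-learning protocol with membership
# and equivalence queries. This is NOT the paper's algorithm; it learns
# an unknown finite set, just to show the oracle interaction pattern.

def learn_with_oracles(membership, equivalence):
    """Learn an unknown target set using the two oracle types.

    membership(x)  -> bool: is x in the target?
    equivalence(h) -> None if h equals the target, otherwise a
                      counterexample element on which they differ.
    """
    hypothesis = set()  # start with the empty hypothesis
    while True:
        counterexample = equivalence(hypothesis)
        if counterexample is None:
            return hypothesis  # hypothesis is exactly the target
        # Use a membership query to decide how to repair the hypothesis.
        if membership(counterexample):
            hypothesis.add(counterexample)      # positive counterexample
        else:
            hypothesis.discard(counterexample)  # negative counterexample


# Usage: simulate a domain expert for the (hidden) target {2, 3, 5, 7}.
target = {2, 3, 5, 7}

def membership(x):
    return x in target

def equivalence(h):
    diff = h ^ target  # symmetric difference: where h and target disagree
    return min(diff) if diff else None

learned = learn_with_oracles(membership, equivalence)
print(sorted(learned))  # [2, 3, 5, 7]
```

Each round costs one equivalence query and at most one membership query, and each round fixes one element, so the query count is bounded by the size of the target's symmetric difference with the initial hypothesis; "polynomial query learnability" in the abstract bounds this interaction count polynomially in the size of the target.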
Related papers
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs)
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z) - Probabilistic Tree-of-thought Reasoning for Answering
Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree)
arXiv Detail & Related papers (2023-11-23T12:52:37Z) - Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm of KB-VQA, Graph pATH rankER (GATHER for brevity)
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
arXiv Detail & Related papers (2023-10-12T09:12:50Z) - Querying Circumscribed Description Logic Knowledge Bases [9.526604375441073]
Circumscription is one of the main approaches for defining non-monotonic description logics.
We prove decidability of (U)CQ evaluation on circumscribed DL KBs.
We also study the much simpler atomic queries (AQs)
arXiv Detail & Related papers (2023-06-07T15:50:15Z) - Complex Query Answering on Eventuality Knowledge Graph with Implicit
Logical Constraints [48.831178420807646]
We propose a new framework to leverage neural methods to answer complex logical queries based on an EVentuality-centric KG.
Complex Eventuality Query Answering (CEQA) considers the implicit logical constraints governing the temporal order and occurrence of eventualities.
We also propose a Memory-Enhanced Query Encoding (MEQE) approach to significantly improve the performance of state-of-the-art neural query encoders on the CEQA task.
arXiv Detail & Related papers (2023-05-30T14:29:24Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question
Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - On the non-efficient PAC learnability of conjunctive queries [18.851061569487616]
This note provides a self-contained exposition of the fact that conjunctive queries are not efficiently learnable.
We also establish a strong negative PAC learnability result that applies to many restricted classes of conjunctive queries (CQs)
We show that CQs (and UCQs) are efficiently learnable with membership queries.
arXiv Detail & Related papers (2022-08-22T12:29:02Z) - Frontiers and Exact Learning of ELI Queries under DL-Lite Ontologies [21.18670404741191]
We study ELI queries (ELIQs) in the presence of ontologies formulated in the description logic DL-Lite.
For the dialect DL-Lite, we show that ELIQs have a frontier (set of least general generalizations) that is of polynomial size and can be computed in polynomial time.
In the dialect DL-LiteF, in contrast, frontiers may be infinite.
arXiv Detail & Related papers (2022-04-29T15:56:45Z) - CQE in Description Logics Through Instance Indistinguishability
(extended version) [0.0]
We study privacy-preserving query answering in Description Logics (DLs)
We derive data complexity results for query answering over DL-Lite ontologies.
We identify a semantically well-founded notion of approximated confidentiality-preserving answering for CQE.
arXiv Detail & Related papers (2020-04-24T17:28:24Z) - When is Ontology-Mediated Querying Efficient? [10.971122842236024]
We study the evaluation of ontology-mediated queries over relational databases.
We provide a characterization of the classes of OMQs that are tractable in combined complexity.
We also study the complexity of deciding whether a given OMQ is equivalent to an OMQ of bounded tree width.
arXiv Detail & Related papers (2020-03-17T16:32:00Z) - VQA-LOL: Visual Question Answering under the Lens of Logic [58.30291671877342]
We investigate whether visual question answering systems trained to answer a question about an image, are able to answer the logical composition of multiple such questions.
We construct an augmentation of the VQA dataset as a benchmark, with questions containing logical compositions and linguistic transformations.
We propose our Lens of Logic (LOL) model which uses question-attention and logic-attention to understand logical connectives in the question, and a novel Fréchet-Compatibility Loss.
arXiv Detail & Related papers (2020-02-19T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.