Benchmarking the Combinatorial Generalizability of Complex Query
Answering on Knowledge Graphs
- URL: http://arxiv.org/abs/2109.08925v1
- Date: Sat, 18 Sep 2021 12:58:55 GMT
- Title: Benchmarking the Combinatorial Generalizability of Complex Query
Answering on Knowledge Graphs
- Authors: Zihao Wang, Hang Yin, Yangqiu Song
- Abstract summary: EFO-1-QA is a new dataset to benchmark the generalizability of CQA models.
Our work provides, for the first time, a benchmark to evaluate and analyze the impact of different operators.
- Score: 43.002468461711715
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Complex Query Answering (CQA) is an important reasoning task on knowledge
graphs. Current CQA learning models have been shown to be able to generalize
from atomic operators to more complex formulas, which can be regarded as the
combinatorial generalizability. In this paper, we present EFO-1-QA, a new
dataset to benchmark the combinatorial generalizability of CQA models by
including 301 different query types, 20 times more than in existing
datasets. Moreover, our work provides, for the first time, a benchmark to
evaluate and analyze the impact of different operators and normal forms,
using (a) 7 choices of operator systems and (b) 9 forms of complex queries.
Specifically, we provide a detailed study of the combinatorial
generalizability of two commonly used operators, i.e., projection and
intersection, and justify the impact of the forms of queries given the
canonical choice of operators. Our code and data can provide an effective
pipeline to benchmark CQA models.
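To make the combinatorial aspect concrete, the sketch below composes one complex query from the two operators the paper analyzes, projection and intersection, over a toy symbolic knowledge graph. This is a minimal illustration under assumed toy entities and relation names, not the released EFO-1-QA pipeline; neural CQA models replace these exact set operations with learned operators over embeddings, and the benchmark enumerates the many ways such operators can be composed into query types.

```python
from collections import defaultdict

# Toy knowledge graph, stored as (head, relation) -> set of tail entities.
# The facts and relation names below are illustrative assumptions only.
kg = defaultdict(set)
triples = [
    ("turing_award", "won_by", "hinton"),
    ("turing_award", "won_by", "bengio"),
    ("turing_award", "won_by", "lecun"),
    ("canada", "has_citizen", "hinton"),
    ("canada", "has_citizen", "bengio"),
    ("france", "has_citizen", "lecun"),
]
for head, rel, tail in triples:
    kg[(head, rel)].add(tail)

def project(entity_set, relation):
    """Projection operator: map every entity in the set along `relation`."""
    answers = set()
    for e in entity_set:
        answers |= kg[(e, relation)]
    return answers

def intersect(*entity_sets):
    """Intersection operator: keep entities present in every operand set."""
    return set.intersection(*entity_sets)

# One EFO-1-style query with a single free variable x:
#   won_by(turing_award, x) AND has_citizen(canada, x)
# i.e. "Who won the Turing Award and is a citizen of Canada?"
answers = intersect(
    project({"turing_award"}, "won_by"),
    project({"canada"}, "has_citizen"),
)
print(sorted(answers))  # ['bengio', 'hinton']
```

In this symbolic view, a query type is just a particular nesting of projection and intersection calls; the combinatorial generalizability question is whether a model trained on some of these compositions answers the others correctly.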
Related papers
- Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity [59.57065228857247]
Retrieval-augmented Large Language Models (LLMs) have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
We propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs based on query complexity.
We validate our model on a set of open-domain QA datasets, covering multiple query complexities, and show that ours enhances the overall efficiency and accuracy of QA systems.
arXiv Detail & Related papers (2024-03-21T13:52:30Z) - Meta Operator for Complex Query Answering on Knowledge Graphs [58.340159346749964]
We argue that different logical operator types, rather than the different complex query types, are the key to improving generalizability.
We propose a meta-learning algorithm to learn the meta-operators with limited data and adapt them to different instances of operators under various complex queries.
Empirical results show that learning meta-operators is more effective than learning original CQA or meta-CQA models.
arXiv Detail & Related papers (2024-03-15T08:54:25Z) - Type-based Neural Link Prediction Adapter for Complex Query Answering [2.1098688291287475]
We propose TypE-based Neural Link Prediction Adapter (TENLPA), a novel model that constructs type-based entity-relation graphs.
In order to effectively combine type information with complex logical queries, an adaptive learning mechanism is introduced.
Experiments on 3 standard datasets show that the TENLPA model achieves state-of-the-art performance on complex query answering.
arXiv Detail & Related papers (2024-01-29T10:54:28Z) - Query2Triple: Unified Query Encoding for Answering Diverse Complex
Queries over Knowledge Graphs [29.863085746761556]
We propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries.
Our proposed Q2T is not only efficient to train, but also modular, thus easily adaptable to various neural link predictors.
arXiv Detail & Related papers (2023-10-17T13:13:30Z) - An Empirical Comparison of LM-based Question and Answer Generation
Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z) - Logical Message Passing Networks with One-hop Inference on Atomic
Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z) - NQE: N-ary Query Embedding for Complex Query Answering over
Hyper-Relational Knowledge Graphs [1.415350927301928]
Complex query answering is an essential task for logical reasoning on knowledge graphs.
We propose a novel N-ary Query Embedding (NQE) model for CQA over hyper-relational knowledge graphs (HKGs).
NQE utilizes a dual-heterogeneous Transformer encoder and fuzzy logic theory to satisfy all n-ary FOL queries.
We generate a new CQA dataset WD50K-NFOL, including diverse n-ary FOL queries over WD50K.
arXiv Detail & Related papers (2022-11-24T08:26:18Z) - Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z)