Sequential Query Encoding For Complex Query Answering on Knowledge Graphs
- URL: http://arxiv.org/abs/2302.13114v3
- Date: Sun, 25 Jun 2023 21:38:42 GMT
- Title: Sequential Query Encoding For Complex Query Answering on Knowledge Graphs
- Authors: Jiaxin Bai, Tianshi Zheng, Yangqiu Song
- Abstract summary: We propose sequential query encoding (SQE) as an alternative way to encode queries for knowledge graph (KG) reasoning.
SQE first uses a search-based algorithm to linearize the computational graph into a sequence of tokens and then uses a sequence encoder to compute its vector representation.
Despite its simplicity, SQE demonstrates state-of-the-art neural query encoding performance on FB15k, FB15k-237, and NELL.
- Score: 31.40820604209387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex Query Answering (CQA) is an important and fundamental task for
knowledge graph (KG) reasoning. Query encoding (QE) is proposed as a fast and
robust solution to CQA. In the encoding process, most existing QE methods first
parse the logical query into an executable computational directed acyclic graph
(DAG), then use neural networks to parameterize the operators, and finally
execute these neuralized operators recursively. However, the
parameterization-and-execution paradigm may be over-complicated, as it can be
structurally simplified into a single neural network encoder. Meanwhile,
sequence encoders such as LSTMs and Transformers have proved effective for
encoding semantic graphs in related tasks. Motivated by this, we propose
sequential query encoding (SQE) as an alternative way to encode queries for CQA.
Instead of parameterizing and executing the computational graph, SQE first uses
a search-based algorithm to linearize the computational graph into a sequence of
tokens and then uses a sequence encoder to compute its vector representation.
This vector representation is then used as a query embedding to retrieve
answers from the embedding space according to similarity scores. Despite its
simplicity, SQE demonstrates state-of-the-art neural query encoding performance
on FB15k, FB15k-237, and NELL on an extended benchmark including twenty-nine
types of in-distribution queries. Further experiments show that SQE also
demonstrates comparable knowledge inference capability on out-of-distribution
queries, whose query types are not observed during the training process.
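To make the pipeline concrete, here is a minimal sketch of the three steps the abstract describes: depth-first linearization of the computational graph into tokens, sequence encoding, and similarity-based answer retrieval. The query grammar, token vocabulary, tiny Transformer, and entity count are illustrative assumptions rather than the paper's exact implementation.
```python
# A sketch of sequential query encoding (SQE); all names are assumptions.
import torch
import torch.nn as nn

# Toy computational graph (DAG) for "entities reached from anchor e1 via r1
# AND from anchor e2 via r2", written as a nested tuple.
query = ("AND", ("PROJ", "r1", ("ENT", "e1")), ("PROJ", "r2", ("ENT", "e2")))

def linearize(node):
    """Depth-first search that flattens the computational graph to tokens."""
    op, *args = node
    tokens = ["(", op]
    for arg in args:
        tokens += linearize(arg) if isinstance(arg, tuple) else [arg]
    return tokens + [")"]

tokens = linearize(query)
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = torch.tensor([[vocab[tok] for tok in tokens]])        # (1, seq_len)

dim = 64
embed = nn.Embedding(len(vocab), dim)
encoder = nn.TransformerEncoder(                            # any sequence
    nn.TransformerEncoderLayer(d_model=dim, nhead=4,        # encoder works,
                               batch_first=True),           # e.g. an LSTM
    num_layers=2,
)
query_emb = encoder(embed(ids)).mean(dim=1)                 # (1, dim)

# Retrieve answers by similarity against all entity embeddings.
entity_emb = nn.Embedding(1000, dim)                        # toy KG entities
scores = query_emb @ entity_emb.weight.T                    # (1, 1000)
top_answers = scores.topk(5).indices
```
In training, the entity table would be tied to the KG's entities and the encoder optimized so that answer entities score highest, e.g. with a cross-entropy or contrastive objective.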
Related papers
- Effective Instruction Parsing Plugin for Complex Logical Query Answering on Knowledge Graphs [51.33342412699939]
Knowledge Graph Query Embedding (KGQE) aims to embed First-Order Logic (FOL) queries in a low-dimensional KG space for complex reasoning over incomplete KGs.
Recent studies integrate various external information (such as entity types and relation context) to better capture the logical semantics of FOL queries.
We propose an effective Query Instruction Parsing Plugin (QIPP) that captures latent query patterns from code-like query instructions; a hypothetical illustration of such an instruction follows this entry.
arXiv Detail & Related papers (2024-10-27T03:18:52Z)
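For intuition only, below is a hypothetical rendering of a "code-like query instruction" for a two-hop query; QIPP's actual instruction syntax is not reproduced here, and the pseudo-function names are invented.
```python
# Hypothetical code-like instruction for the FOL query
# q = V? . exists V : r1(e1, V) AND r2(V, V?); not QIPP's real format.
instruction = [
    "v0 = project(anchor='e1', relation='r1')",
    "ans = project(start=v0, relation='r2')",
]
```
What such instructions make explicit is the execution order and argument structure of the query, which a flat entity/relation token sequence leaves implicit.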
- Conditional Logical Message Passing Transformer for Complex Query Answering [22.485655410582375]
We propose the Conditional Logical Message Passing Transformer (CLMPT), a neural model for CQA.
We empirically verified that this approach can reduce computational costs without affecting performance.
Experimental results show that CLMPT is a new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2024-02-20T12:17:01Z)
- Query2Triple: Unified Query Encoding for Answering Diverse Complex Queries over Knowledge Graphs [29.863085746761556]
We propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries.
Our proposed Q2T is not only efficient to train but also modular, and thus easily adaptable to various neural link predictors; a sketch of the decoupling idea follows this entry.
arXiv Detail & Related papers (2023-10-17T13:13:30Z)
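A minimal sketch of the decoupling idea summarized above, assuming a DistMult-style link predictor and a GRU query encoder; the pseudo-(head, relation) construction and all names here are illustrative, not Q2T's exact design.
```python
# Stage 1 trains a link predictor on simple queries; stage 2 freezes it and
# trains only a query encoder that emits a pseudo (head, relation) pair.
import torch
import torch.nn as nn

dim, num_entities = 64, 1000

class LinkPredictor(nn.Module):
    """Stand-in for a pretrained neural link predictor."""
    def __init__(self):
        super().__init__()
        self.entities = nn.Embedding(num_entities, dim)
    def score(self, head, rel):
        # DistMult-style scoring of every entity as a candidate tail.
        return (head * rel) @ self.entities.weight.T

predictor = LinkPredictor().requires_grad_(False)    # frozen after stage 1
query_encoder = nn.GRU(dim, dim, batch_first=True)   # trainable in stage 2
to_pair = nn.Linear(dim, 2 * dim)

query_tokens = torch.randn(1, 7, dim)                # toy linearized query
_, h = query_encoder(query_tokens)                   # h: (layers, 1, dim)
pseudo_head, pseudo_rel = to_pair(h[-1]).chunk(2, dim=-1)
answer_scores = predictor.score(pseudo_head, pseudo_rel)  # (1, num_entities)
```
Because only the encoder is trained on complex queries, swapping in a different pretrained link predictor leaves the rest of the pipeline unchanged, which is the modularity the summary refers to.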
- HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution [92.69684305578957]
We propose a framework for question parsing and execution in textual QA.
The proposed framework can be viewed as top-down question parsing followed by bottom-up answer backtracking; a toy parse-and-execute sketch follows this entry.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z)
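A toy sketch of "top-down parsing followed by bottom-up backtracking", assuming a fixed nested program and string-valued primitives; HPE's actual parser is learned, and its primitives mix symbolic and neural execution over text.
```python
# Top-down: the question is parsed into a nested program (here hard-coded).
def parse(question: str):
    # e.g. "Who directed the film that won Best Picture in 2020?"
    return ("DIRECTOR_OF", ("WINNER_OF", "Best Picture", "2020"))

# Hybrid primitives: in HPE some would be symbolic lookups, others neural
# readers over text; these stubs just return placeholder strings.
PRIMITIVES = {
    "WINNER_OF": lambda award, year: f"<film winning {award} in {year}>",
    "DIRECTOR_OF": lambda film: f"<director of {film}>",
}

def execute(node):
    """Bottom-up backtracking: evaluate arguments first, then the operator."""
    if not isinstance(node, tuple):
        return node
    op, *args = node
    return PRIMITIVES[op](*(execute(arg) for arg in args))

print(execute(parse("Who directed the film that won Best Picture in 2020?")))
```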
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples Knowledge Graph embeddings from neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields a new state-of-the-art neural CQA model; a minimal message-passing sketch follows this entry.
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
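A minimal sketch of logical message passing over a query graph in the spirit of the summary above, assuming a TransE-style one-hop operator and mean aggregation; LMPNN's actual operators and update functions are learned.
```python
# Each edge sends a one-hop inference from pretrained KG embeddings; node
# states aggregate incoming messages over a few rounds.
import torch

dim = 64
rel_emb = {"r1": torch.randn(dim), "r2": torch.randn(dim)}   # pretrained KGE
node_state = {"e1": torch.randn(dim),                        # anchor nodes
              "e2": torch.randn(dim),
              "ans": torch.zeros(dim)}                       # free variable
edges = [("e1", "r1", "ans"), ("e2", "r2", "ans")]           # query graph

def one_hop(head_state, rel):
    # TransE-style one-hop inference on an atomic formula: tail ~ head + rel.
    return head_state + rel_emb[rel]

for _ in range(2):                                           # passing rounds
    incoming = {n: [] for n in node_state}
    for h, r, t in edges:
        incoming[t].append(one_hop(node_state[h], r))
    for n, msgs in incoming.items():
        if msgs:                     # mean aggregation stands in for LMPNN's
            node_state[n] = torch.stack(msgs).mean(0)        # learned update

query_embedding = node_state["ans"]  # scored against entity embeddings
```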
- NQE: N-ary Query Embedding for Complex Query Answering over Hyper-Relational Knowledge Graphs [1.415350927301928]
Complex query answering is an essential task for logical reasoning on knowledge graphs.
We propose a novel N-ary Query Embedding (NQE) model for CQA over hyper-relational knowledge graphs (HKGs).
NQE utilizes a dual-heterogeneous Transformer encoder and fuzzy logic theory to satisfy all n-ary FOL queries; a toy sketch of the fuzzy-logic operators follows this entry.
We also generate a new CQA dataset, WD50K-NFOL, containing diverse n-ary FOL queries over WD50K.
arXiv Detail & Related papers (2022-11-24T08:26:18Z)
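For the fuzzy-logic side only, a toy sketch using the product t-norm and its dual t-conorm over per-entity truth values; NQE's dual-heterogeneous Transformer encoder, which would produce these values, is not reproduced.
```python
import torch

# Fuzzy truth values in [0, 1] over five candidate entities, one vector per
# sub-query (as an encoder such as NQE's might produce them).
p = torch.tensor([0.9, 0.2, 0.7, 0.1, 0.5])
q = torch.tensor([0.8, 0.9, 0.6, 0.3, 0.4])

conj = p * q              # AND: product t-norm
disj = p + q - p * q      # OR: probabilistic sum (dual t-conorm)
neg = 1.0 - p             # NOT: standard fuzzy negation
```
Because these operators are defined pointwise over truth vectors, they extend to n-ary conjunctions and disjunctions by folding, which is what lets a single model cover all n-ary FOL query shapes.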
- Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z)
- Question Answering Infused Pre-training of General-Purpose Contextualized Representations [70.62967781515127]
We propose a pre-training objective based on question answering (QA) for learning general-purpose contextual representations.
We accomplish this goal by training a bi-encoder QA model, which independently encodes passages and questions, to match the predictions of a more accurate cross-encoder model; a minimal distillation sketch follows this entry.
We show large improvements over both RoBERTa-large and previous state-of-the-art results on zero-shot and few-shot paraphrase detection.
arXiv Detail & Related papers (2021-06-15T14:45:15Z)
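A minimal sketch of the bi-encoder-matches-cross-encoder recipe summarized above, with toy linear encoders and a mocked teacher distribution standing in for a real cross-encoder.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, n_passages = 64, 8
q_enc = nn.Linear(dim, dim)          # toy question encoder (student side)
p_enc = nn.Linear(dim, dim)          # toy passage encoder (student side)

question = torch.randn(1, dim)
passages = torch.randn(n_passages, dim)

# Bi-encoder student: encode independently, score by dot product.
student_logits = q_enc(question) @ p_enc(passages).T     # (1, n_passages)

# The cross-encoder teacher jointly encodes each (question, passage) pair;
# its output distribution is mocked here as fixed logits.
with torch.no_grad():
    teacher_logits = torch.randn(1, n_passages)

# Train the student to match the teacher's distribution (distillation).
loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                F.softmax(teacher_logits, dim=-1),
                reduction="batchmean")
loss.backward()
```
The independently encoded passages can be cached, so the student keeps the retrieval-time efficiency of a bi-encoder while inheriting accuracy from the cross-encoder.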