UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question
Answering Over Knowledge Graph
- URL: http://arxiv.org/abs/2212.00959v1
- Date: Fri, 2 Dec 2022 04:08:09 GMT
- Title: UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question
Answering Over Knowledge Graph
- Authors: Jinhao Jiang, Kun Zhou, Wayne Xin Zhao and Ji-Rong Wen
- Abstract summary: Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for the multi-hop KGQA task, unifying retrieval and reasoning in both model architecture and parameter learning.
- Score: 89.98762327725112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the
answer entities that are multiple hops away from the topic entities mentioned
in a natural language question on a large-scale Knowledge Graph (KG). To cope
with the vast search space, existing work usually adopts a two-stage approach:
it first retrieves a relatively small subgraph related to the question and
then performs the reasoning on the subgraph to accurately find the answer
entities. Although these two stages are highly related, previous work employs
very different technical solutions for developing the retrieval and reasoning
models, neglecting their relatedness in task essence. In this paper, we propose
UniKGQA, a novel approach for the multi-hop KGQA task, by unifying retrieval and
reasoning in both model architecture and parameter learning. For model
architecture, UniKGQA consists of a semantic matching module based on a
pre-trained language model (PLM) for question-relation semantic matching, and a
matching information propagation module to propagate the matching information
along the edges on KGs. For parameter learning, we design a shared pre-training
task based on question-relation matching for both retrieval and reasoning
models, and then propose retrieval- and reasoning-oriented fine-tuning
strategies. Compared with previous studies, our approach is more unified,
tightly relating the retrieval and reasoning stages. Extensive experiments on
three benchmark datasets have demonstrated the effectiveness of our method on
the multi-hop KGQA task. Our codes and data are publicly available at
https://github.com/RUCAIBox/UniKGQA.
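The two modules described in the abstract can be caricatured in a few lines: score each KG relation against the question, then propagate those matching scores along edges for a fixed number of hops. This is a minimal toy sketch, not the paper's implementation; the PLM-based matcher is replaced by a bag-of-words cosine similarity, and all names (`match_score`, `propagate`, the example edges) are invented for illustration.

```python
import math
from collections import Counter

def match_score(question: str, relation: str) -> float:
    """Toy stand-in for PLM-based question-relation semantic matching
    (cosine similarity over bag-of-words instead of a language model)."""
    q = Counter(question.lower().split())
    r = Counter(relation.lower().replace("_", " ").split())
    dot = sum(q[w] * r[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in r.values()))
    return dot / norm if norm else 0.0

def propagate(edges, seed_scores, question, hops=2):
    """Propagate matching information along KG edges: at each hop, a
    tail entity receives (head score x question-relation match score)."""
    scores = dict(seed_scores)
    for _ in range(hops):
        nxt = dict(scores)
        for head, rel, tail in edges:
            s = scores.get(head, 0.0) * match_score(question, rel)
            nxt[tail] = max(nxt.get(tail, 0.0), s)
        scores = nxt
    return scores

# Tiny 2-hop example: start from the topic entity "Alice".
edges = [
    ("Alice", "starring", "FilmX"),
    ("FilmX", "directed", "Bob"),
    ("Alice", "born", "CityY"),
]
question = "who directed the film starring alice"
scores = propagate(edges, {"Alice": 1.0}, question)
```

After two hops, the entity reached via relations that match the question ("starring", then "directed") accumulates a higher score than entities reached via unrelated relations, which is the intuition behind reading the answer off the propagated scores.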
Related papers
- FedCQA: Answering Complex Queries on Multi-Source Knowledge Graphs via
Federated Learning [55.02512821257247]
Complex logical query answering is a challenging task in knowledge graphs (KGs).
Recent approaches represent KG entities as embedding vectors and find answers to logical queries over the KGs.
It remains unknown how to answer queries on multi-source KGs.
arXiv Detail & Related papers (2024-02-22T14:57:44Z) - ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained
Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism that imitates the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters using 20,000 subgraphs paired with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
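One plausible reading of "subgraph-aware self-attention" is an attention mask that restricts entity/relation tokens to attend only to themselves, the question, and their subgraph neighbours. The sketch below is a hypothetical illustration under that assumption; ReasoningLM's actual masking scheme may differ, and `build_mask` and its arguments are invented names.

```python
def build_mask(n_question, nodes, edges):
    """Return an N x N boolean attention mask (True = attention allowed):
    question tokens get full attention, graph nodes see only themselves,
    the question, and their neighbours in the subgraph."""
    n = n_question + len(nodes)
    idx = {v: n_question + i for i, v in enumerate(nodes)}
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        mask[i][i] = True                  # self-attention always allowed
    for i in range(n_question):            # question tokens attend everywhere
        for j in range(n):
            mask[i][j] = True
            mask[j][i] = True              # every node can see the question
    for h, t in edges:                     # subgraph edges (treated as undirected)
        mask[idx[h]][idx[t]] = True
        mask[idx[t]][idx[h]] = True
    return mask

# 3 question tokens, 3 graph nodes, one edge e1--e2.
mask = build_mask(3, ["e1", "e2", "e3"], [("e1", "e2")])
```

In a real model this mask would be added (as large negative values for `False` entries) to the attention logits before the softmax, so that disconnected nodes exchange no information directly.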
arXiv Detail & Related papers (2023-12-30T07:18:54Z) - Retrieval-Generation Synergy Augmented Large Language Models [30.53260173572783]
We propose an iterative retrieval-generation collaborative framework.
We conduct experiments on four question answering datasets, including single-hop QA and multi-hop QA tasks.
arXiv Detail & Related papers (2023-10-08T12:50:57Z) - Semantic Parsing for Conversational Question Answering over Knowledge
Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to their execution results.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z) - Enhancing Multi-modal and Multi-hop Question Answering via Structured
Knowledge and Unified Retrieval-Generation [33.56304858796142]
Multi-modal multi-hop question answering involves answering a question by reasoning over multiple input sources from different modalities.
Existing methods often retrieve evidence separately and then use a language model to generate an answer based on the retrieved evidence.
We propose a Structured Knowledge and Unified Retrieval-Generation (RG) approach to address these issues.
arXiv Detail & Related papers (2022-12-16T18:12:04Z) - Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z) - Improving Embedded Knowledge Graph Multi-hop Question Answering by
introducing Relational Chain Reasoning [8.05076085499457]
Knowledge Base Question Answering (KBQA) aims to answer user questions from a knowledge base (KB) by identifying the reasoning chain between the topic entity and the answer.
As a complex branch task of KBQA, multi-hop KGQA requires reasoning over multi-hop relational chains preserved in a structured KG.
arXiv Detail & Related papers (2021-10-25T06:53:02Z) - Query Embedding on Hyper-relational Knowledge Graphs [0.4779196219827507]
Multi-hop logical reasoning is an established problem in the field of representation learning on knowledge graphs.
We extend the multi-hop reasoning problem to hyper-relational KGs, allowing us to tackle this new type of complex query.
arXiv Detail & Related papers (2021-06-15T14:08:50Z) - Tradeoffs in Sentence Selection Techniques for Open-Domain Question
Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
arXiv Detail & Related papers (2020-09-18T23:39:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.