A Method of Query Graph Reranking for Knowledge Base Question Answering
- URL: http://arxiv.org/abs/2204.12808v1
- Date: Wed, 27 Apr 2022 09:57:54 GMT
- Title: A Method of Query Graph Reranking for Knowledge Base Question Answering
- Authors: Yonghui Jia, Wenliang Chen
- Abstract summary: This paper presents a novel reranking method to better choose the optimal query graph, a sub-graph of the knowledge graph, for retrieving the answer to an input question in Knowledge Base Question Answering (KBQA).
Existing methods suffer from a significant gap between top-1 performance and the oracle score of the top-n results.
- Score: 2.367061689316429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel reranking method to better choose the optimal query graph, a sub-graph of the knowledge graph, for retrieving the answer to an input question in Knowledge Base Question Answering (KBQA). Existing methods suffer from a significant gap between top-1 performance and the oracle score of the top-n results. To address this problem, our method divides the selection procedure into two steps: query graph ranking and query graph reranking. In the first step, we produce the top-n query graphs for each question. We then rerank these top-n query graphs by incorporating answer type information. Experimental results on two widely used datasets show that our proposed method achieves the best results on the WebQuestions dataset and the second best on the ComplexQuestions dataset.
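The two-step procedure described in the abstract can be pictured with a minimal sketch. The helper names (expected_type, type_score), the QueryGraph fields, and the weighted-sum combination with weight alpha are illustrative assumptions, not the paper's exact model; only the ranking-then-reranking structure and the use of answer type information come from the abstract.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QueryGraph:
    graph_id: str
    rank_score: float          # score assigned by the first-step ranker
    answer_types: List[str]    # KB types of the answers this graph retrieves

def rerank(question: str,
           candidates: List[QueryGraph],
           expected_type: Callable[[str], str],
           type_score: Callable[[str, List[str]], float],
           n: int = 10,
           alpha: float = 0.5) -> List[QueryGraph]:
    # Step 1 (ranking): keep the top-n candidates by the base score.
    top_n = sorted(candidates, key=lambda c: c.rank_score, reverse=True)[:n]
    # Step 2 (reranking): combine the base score with an answer-type
    # compatibility score; the weighted sum and alpha are assumptions.
    etype = expected_type(question)   # e.g. "person", "date", ...
    return sorted(
        top_n,
        key=lambda c: alpha * c.rank_score
                      + (1 - alpha) * type_score(etype, c.answer_types),
        reverse=True,
    )
```

In the paper the reranking model is presumably learned rather than a hand-set weighted sum; the sketch only shows where the answer type signal enters the selection.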
Related papers
- One Model, Any Conjunctive Query: Graph Neural Networks for Answering Complex Queries over Knowledge Graphs [7.34044245579928]
We propose AnyCQ, a graph neural network model that can classify answers to any conjunctive query on any knowledge graph.
We show that AnyCQ can generalize to large queries of arbitrary structure, reliably classifying and retrieving answers to samples where existing approaches fail.
arXiv Detail & Related papers (2024-09-21T00:30:44Z) - G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z) - Careful Selection and Thoughtful Discarding: Graph Explicit Pooling Utilizing Discarded Nodes [53.08068729187698]
We introduce a novel Graph Explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and final representation vectors crucial for classification.
We conduct comprehensive experiments across 12 widely used datasets to validate our proposed method's effectiveness.
arXiv Detail & Related papers (2023-11-21T14:44:51Z) - Better Query Graph Selection for Knowledge Base Question Answering [2.367061689316429]
This paper presents a novel approach based on semantic parsing to improve the performance of Knowledge Base Question Answering (KBQA).
Specifically, we focus on how to select an optimal query graph from a candidate set so as to retrieve the answer from the knowledge base (KB).
arXiv Detail & Related papers (2022-04-27T01:53:06Z) - Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the AS2 task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z) - Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching [58.39970828272366]
Subgraph matching algorithms enumerate all the embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, for the first time, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques to generate high-quality matching orders for subgraph matching algorithms.
arXiv Detail & Related papers (2022-01-25T00:10:03Z) - Learning Query Expansion over the Nearest Neighbor Graph [94.80212602202518]
Graph Query Expansion (GQE) is presented, which is learned in a supervised manner and performs aggregation over an extended neighborhood of the query.
The technique achieves state-of-the-art results over known benchmarks.
arXiv Detail & Related papers (2021-12-05T19:48:42Z) - Graph-augmented Learning to Rank for Querying Large-scale Knowledge Graph [34.774049199809426]
Knowledge graph question answering (i.e., KGQA) based on information retrieval aims to answer a question by retrieving the answer from a large-scale knowledge graph.
We first propose to partition the retrieved KSG into several smaller sub-KSGs via a new subgraph partition algorithm.
We then present a graph-augmented learning to rank model to select the top-ranked sub-KSGs from them.
arXiv Detail & Related papers (2021-11-20T08:27:37Z) - Outlining and Filling: Hierarchical Query Graph Generation for Answering Complex Questions over Knowledge Graph [16.26384829957165]
We propose a new two-stage approach to build query graphs.
In the first stage, the top-$k$ related instances are collected by simple strategies.
In the second stage, a graph generation model performs hierarchical generation.
arXiv Detail & Related papers (2021-11-01T07:08:46Z) - Approximate Knowledge Graph Query Answering: From Ranking to Binary Classification [0.20999222360659608]
Structured querying on incomplete graphs will result in incomplete sets of answers.
Several algorithms for approximate structured query answering have been proposed.
We argue that performing a ranking-based evaluation is not sufficient to assess methods for complex query answering.
arXiv Detail & Related papers (2021-02-22T22:28:08Z) - Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)