ComQA: Compositional Question Answering via Hierarchical Graph Neural
Networks
- URL: http://arxiv.org/abs/2101.06400v1
- Date: Sat, 16 Jan 2021 08:23:27 GMT
- Title: ComQA: Compositional Question Answering via Hierarchical Graph Neural
Networks
- Authors: Bingning Wang, Ting Yao, Weipeng Chen, Jingfang Xu and Xiaochuan Wang
- Abstract summary: We present a large-scale compositional question answering dataset containing more than 120k human-labeled questions.
To tackle the ComQA problem, we propose a hierarchical graph neural network, which represents the document from the low-level word to the high-level sentence.
Our proposed model achieves a significant improvement over previous machine reading comprehension methods and pre-training methods.
- Score: 47.12013005600986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of deep learning techniques and large-scale datasets,
question answering (QA) systems have improved rapidly, providing more accurate
and satisfying answers. However, current QA systems focus either on
sentence-level answers, i.e., answer selection, or on phrase-level answers,
i.e., machine reading comprehension. How to produce compositional answers has
not been thoroughly investigated. In compositional question answering, the
system should assemble several pieces of supporting evidence from the document
to generate the final answer, which is more difficult than sentence-level or
phrase-level QA. In this paper, we present a large-scale compositional question
answering dataset containing more than 120k human-labeled questions. The answer
in this dataset is composed of discontiguous sentences in the corresponding
document. To tackle the ComQA problem, we propose a hierarchical graph neural
network, which represents the document from the low-level word to the
high-level sentence. We also devise a question selection and node selection
task for pre-training. Our proposed model achieves a significant improvement
over previous machine reading comprehension and pre-training methods.
Code and dataset can be found at \url{https://github.com/benywon/ComQA}.
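The bottom-up encoding the abstract describes can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: the chain word graph, mean-aggregation message passing, and the function names are all assumptions made for the sketch; the paper's actual model (and its pre-training tasks) is in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(node_feats, adj):
    """One round of mean-aggregation message passing over a graph."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8  # avoid division by zero
    return (adj @ node_feats) / deg

def hierarchical_encode(word_embs, sent_spans, sent_adj):
    """Encode a document bottom-up: words -> sentences -> sentence graph.

    word_embs  : (n_words, d) array of word vectors
    sent_spans : list of (start, end) word-index ranges, one per sentence
    sent_adj   : (n_sents, n_sents) sentence-level adjacency matrix
    """
    # Word level: message passing over a chain graph linking adjacent words.
    n = word_embs.shape[0]
    word_adj = np.zeros((n, n))
    for i in range(n - 1):
        word_adj[i, i + 1] = word_adj[i + 1, i] = 1.0
    words = propagate(word_embs, word_adj)

    # Sentence level: pool each sentence's words into a sentence node, then
    # propagate over the sentence graph so evidence can flow between
    # discontiguous sentences -- the setting ComQA targets.
    sents = np.stack([words[s:e].mean(axis=0) for s, e in sent_spans])
    return propagate(sents, sent_adj)

# Toy document: 6 words in 3 sentences; all sentence pairs are linked.
word_embs = rng.normal(size=(6, 4))
spans = [(0, 2), (2, 4), (4, 6)]
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
sent_reprs = hierarchical_encode(word_embs, spans, adj)
print(sent_reprs.shape)  # (3, 4)
```

A real model would use learned aggregation weights and nonlinearities rather than plain averaging; the point here is only the two-level structure, one representation per sentence informed by its words and by the other sentences.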
Related papers
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: the FA-model, which simultaneously selects key phrases and generates full answers, and the Q-model, which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z) - Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate a corresponding answer to a given question based on a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z) - Multifaceted Improvements for Conversational Open-Domain Question
Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements to be selected for the reader, under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z) - UNIQORN: Unified Question Answering over RDF Knowledge Graphs and Natural Language Text [20.1784368017206]
Question answering over RDF data like knowledge graphs has been greatly advanced.
IR and NLP communities have addressed QA over text, but such systems barely utilize semantic data and knowledge.
This paper presents a method for complex questions that can seamlessly operate over a mixture of RDF datasets and text corpora.
arXiv Detail & Related papers (2021-08-19T10:50:52Z) - PeCoQ: A Dataset for Persian Complex Question Answering over Knowledge
Graph [0.0]
This paper introduces PeCoQ, a dataset for Persian question answering.
This dataset contains 10,000 complex questions and answers extracted from the Persian knowledge graph, FarsBase.
There are different types of complexities in the dataset, such as multi-relation, multi-entity, ordinal, and temporal constraints.
arXiv Detail & Related papers (2021-06-27T08:21:23Z) - TSQA: Tabular Scenario Based Question Answering [14.92495213480887]
Scenario-based question answering (SQA) has attracted increasing research interest.
To support the study of this task, we construct GeoTSQA.
We extend state-of-the-art MRC methods with TTGen, a novel table-to-text generator.
arXiv Detail & Related papers (2021-01-14T02:00:33Z) - XTQA: Span-Level Explanations of the Textbook Question Answering [32.67922842489546]
Textbook Question Answering (TQA) is a task in which one answers a diagram or non-diagram question given a large multi-modal context.
We propose a novel architecture for span-level eXplanations of TQA based on our proposed coarse-to-fine-grained algorithm.
Experimental results show that XTQA significantly improves the state-of-the-art performance compared with baselines.
arXiv Detail & Related papers (2020-11-25T11:44:12Z) - Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z) - ClarQ: A large-scale and diverse dataset for Clarification Question
Generation [67.1162903046619]
We devise a novel bootstrapping framework that assists in the creation of a diverse, large-scale dataset of clarification questions based on post comments extracted from StackExchange.
We quantitatively demonstrate the utility of the newly created dataset by applying it to the downstream task of question-answering.
We release this dataset in order to foster research into the field of clarification question generation with the larger goal of enhancing dialog and question answering systems.
arXiv Detail & Related papers (2020-06-10T17:56:50Z) - Semantic Graphs for Generating Deep Questions [98.5161888878238]
We propose a novel framework which first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN).
On the HotpotQA deep-question centric dataset, our model greatly improves performance over questions requiring reasoning over multiple facts, leading to state-of-the-art performance.
arXiv Detail & Related papers (2020-04-27T10:52:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.