TSQA: Tabular Scenario Based Question Answering
- URL: http://arxiv.org/abs/2101.11429v1
- Date: Thu, 14 Jan 2021 02:00:33 GMT
- Title: TSQA: Tabular Scenario Based Question Answering
- Authors: Xiao Li, Yawei Sun, Gong Cheng
- Abstract summary: Scenario-based question answering (SQA) has attracted increasing research interest.
To support the study of this task, we construct GeoTSQA.
We extend state-of-the-art MRC methods with TTGen, a novel table-to-text generator.
- Score: 14.92495213480887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scenario-based question answering (SQA) has attracted an increasing research
interest. Compared with the well-studied machine reading comprehension (MRC),
SQA is a more challenging task: a scenario may contain not only a textual
passage to read but also structured data like tables, i.e., tabular scenario
based question answering (TSQA). AI applications of TSQA such as answering
multiple-choice questions in high-school exams require synthesizing data in
multiple cells and combining tables with texts and domain knowledge to infer
answers. To support the study of this task, we construct GeoTSQA. This dataset
contains 1k real questions contextualized by tabular scenarios in the geography
domain. To solve the task, we extend state-of-the-art MRC methods with TTGen, a
novel table-to-text generator. It generates sentences from variously
synthesized tabular data and feeds the downstream MRC method with the most
useful sentences. Its sentence ranking model fuses the information in the
scenario, question, and domain knowledge. Our approach outperforms a variety of
strong baseline methods on GeoTSQA.
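The abstract describes a generate-rank-read pipeline: synthesize sentences from tabular data, rank them by fusing the scenario and question, and feed the top-ranked sentences to an MRC reader. The sketch below illustrates that pipeline shape only; the paper's TTGen uses a neural ranking model that also fuses domain knowledge, whereas the lexical-overlap scorer and all names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a TTGen-style pipeline: synthesize candidate sentences
# from table cells, rank them against the scenario and question, and keep
# the top-k sentences for a downstream MRC reader. The overlap scorer is a
# toy stand-in for the paper's neural sentence-ranking model.
import re
from collections import Counter

def synthesize_sentences(table):
    """Turn table cells into candidate sentences, including one simple
    multi-cell synthesis (the column holding a row's largest value)."""
    sentences = []
    for row, cells in table.items():
        for col, value in cells.items():
            sentences.append(f"The {col} of {row} is {value}.")
        numeric = {c: v for c, v in cells.items() if isinstance(v, (int, float))}
        if numeric:
            best = max(numeric, key=numeric.get)
            sentences.append(f"{row} has its largest value in {best}.")
    return sentences

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def overlap(sentence, context):
    """Toy relevance score: multiset token overlap with scenario + question."""
    return sum((Counter(tokens(sentence)) & Counter(tokens(context))).values())

def rank_sentences(sentences, scenario, question, k=2):
    context = scenario + " " + question
    return sorted(sentences, key=lambda s: overlap(s, context), reverse=True)[:k]

table = {
    "January": {"temperature": 5, "rainfall": 30},
    "July": {"temperature": 28, "rainfall": 120},
}
scenario = "The table records a city's monthly temperature and rainfall."
question = "In which month is rainfall highest?"

top = rank_sentences(synthesize_sentences(table), scenario, question)
# The selected sentences would be prepended to the scenario passage and fed,
# together with the question, to an off-the-shelf MRC reader.
```

In the actual approach, sentence generation covers multiple kinds of cross-cell synthesis (not just a row maximum), and ranking is learned rather than lexical; the point of the sketch is the division of labor between the table-to-text generator and the unchanged MRC backbone.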
Related papers
- PACIFIC: Towards Proactive Conversational Question Answering over Tabular and Textual Data in Finance [96.06505049126345]
We present a new dataset, named PACIFIC. Compared with existing CQA datasets, PACIFIC exhibits three key features: (i) proactivity, (ii) numerical reasoning, and (iii) hybrid context of tables and text.
A new task is defined accordingly to study Proactive Conversational Question Answering (PCQA), which combines clarification question generation and CQA.
UniPCQA performs multi-task learning over all sub-tasks in PCQA and incorporates a simple ensemble strategy to alleviate error propagation in the multi-task learning by cross-validating top-$k$ sampled Seq2Seq outputs.
(arXiv: 2022-10-17)
- Activity report analysis with automatic single- or multi-span answer extraction [0.21485350418225244]
We create a new smart-home environment dataset comprising questions paired with single-span or multi-span answers, depending on the question and the context queried.
Our experiments show that the proposed model outperforms state-of-the-art QA models on our dataset.
(arXiv: 2022-09-09)
- Towards Complex Document Understanding By Discrete Reasoning [77.91722463958743]
Document Visual Question Answering (VQA) aims to understand visually-rich documents to answer questions in natural language.
We introduce a new Document VQA dataset, named TAT-DQA, which consists of 3,067 document pages and 16,558 question-answer pairs.
We develop a novel model named MHST that takes into account the information in multi-modalities, including text, layout and visual image, to intelligently address different types of questions.
(arXiv: 2022-07-25)
- TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and Textual Content in Finance [71.76018597965378]
We build a new large-scale Question Answering dataset containing both Tabular And Textual data, named TAT-QA.
We propose a novel QA model termed TAGOP, which is capable of reasoning over both tables and text.
(arXiv: 2021-05-17)
- FeTaQA: Free-form Table Question Answering [33.018256483762386]
We introduce FeTaQA, a new dataset of 10K Wikipedia-based {table, question, free-form answer, supporting table cells} pairs.
FeTaQA yields a more challenging table question answering setting because it requires generating free-form text answers after retrieval, inference, and integration of multiple discontinuous facts from a structured knowledge source.
(arXiv: 2021-04-01)
- ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks [47.12013005600986]
We present a large-scale compositional question answering dataset containing more than 120k human-labeled questions.
To tackle the ComQA problem, we propose a hierarchical graph neural network, which represents the document from low-level words up to high-level sentences.
Our proposed model achieves a significant improvement over previous machine reading comprehension methods and pre-training methods.
(arXiv: 2021-01-16)
- XTQA: Span-Level Explanations of the Textbook Question Answering [32.67922842489546]
Textbook Question Answering (TQA) is a task in which one must answer a diagram or non-diagram question given a large multimodal context.
We propose a novel architecture for span-level eXplanations of TQA, based on a coarse-to-fine-grained algorithm.
Experimental results show that XTQA significantly improves the state-of-the-art performance compared with baselines.
(arXiv: 2020-11-25)
- Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
(arXiv: 2020-10-20)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
(arXiv: 2020-10-04)
This list is automatically generated from the titles and abstracts of the papers on this site.