XTQA: Span-Level Explanations of the Textbook Question Answering
- URL: http://arxiv.org/abs/2011.12662v4
- Date: Mon, 24 Jul 2023 13:22:58 GMT
- Title: XTQA: Span-Level Explanations of the Textbook Question Answering
- Authors: Jie Ma, Qi Chai, Jun Liu, Qingyu Yin, Pinghui Wang, Qinghua Zheng
- Abstract summary: Textbook Question Answering (TQA) is the task of answering a diagram or non-diagram question given a large multi-modal context.
We propose XTQA, a novel architecture for span-level eXplanations of TQA, based on a coarse-to-fine grained algorithm.
Experimental results show that XTQA significantly outperforms state-of-the-art baselines.
- Score: 32.67922842489546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Textbook Question Answering (TQA) is the task of answering a
diagram/non-diagram question given a large multi-modal context consisting of
abundant essays and diagrams. We argue that the explainability of this task
should treat students as a key consideration. To address this issue, we devise
a novel architecture towards span-level eXplanations of the TQA (XTQA) based on
our proposed coarse-to-fine grained algorithm, which provides students with not
only the answers but also the span-level evidence for choosing them. The
algorithm first coarsely selects the top $M$ paragraphs relevant to the
question using TF-IDF, and then finely selects the top $K$ evidence spans from
all candidate spans within these paragraphs by computing the information gain
of each span with respect to the question. Experimental results show that XTQA
significantly outperforms state-of-the-art baselines. The source code is
available at
https://github.com/keep-smile-001/opentqa
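Below is a minimal sketch, in Python, of the coarse-to-fine evidence selection described in the abstract. The TF-IDF coarse step follows the description above (using scikit-learn's TfidfVectorizer); the sentence-level candidate spans and the word-overlap stand-in for the information-gain score are illustrative assumptions rather than the paper's exact formulation, which is defined in the released code.

# Sketch of XTQA-style coarse-to-fine evidence selection (assumptions noted above).
from typing import Callable, List, Tuple

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def coarse_select(question: str, paragraphs: List[str], top_m: int) -> List[str]:
    """Coarse step: keep the top-M paragraphs ranked by TF-IDF similarity to the question."""
    vectorizer = TfidfVectorizer()
    para_vecs = vectorizer.fit_transform(paragraphs)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, para_vecs).ravel()
    return [paragraphs[i] for i in scores.argsort()[::-1][:top_m]]


def fine_select(
    question: str,
    paragraphs: List[str],
    top_k: int,
    info_gain: Callable[[str, str], float],
) -> List[Tuple[str, float]]:
    """Fine step: score every candidate span by its information gain with respect to
    the question and keep the top-K. Candidate spans are naively taken to be
    sentences here, and the `info_gain` scorer is supplied by the caller."""
    candidates = [s.strip() for p in paragraphs for s in p.split(".") if s.strip()]
    scored = sorted(((s, info_gain(question, s)) for s in candidates),
                    key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]


if __name__ == "__main__":
    context = [
        "Photosynthesis converts light energy into chemical energy. It occurs in chloroplasts.",
        "Cellular respiration releases energy stored in glucose.",
    ]
    question = "What does photosynthesis convert?"
    coarse = coarse_select(question, context, top_m=1)
    # Toy stand-in for the information-gain score: word overlap with the question.
    overlap = lambda q, s: float(len(set(q.lower().split()) & set(s.lower().split())))
    print(fine_select(question, coarse, top_k=1, info_gain=overlap))

Keeping the fine step's scorer as a pluggable callable mirrors the two-stage description: a proper information-gain scorer (for example, the entropy reduction of an answer model when a span is added to the question) can be dropped in without changing the TF-IDF retrieval.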
Related papers
- Harnessing the Power of Prompt-based Techniques for Generating
School-Level Questions using Large Language Models [0.5459032912385802]
We propose a novel approach that utilizes prompt-based techniques to generate descriptive and reasoning-based questions.
We curate a new QG dataset called EduProbe for school-level subjects, by leveraging the rich content of NCERT textbooks.
We investigate several prompt-based QG methods by fine-tuning transformer-based large language models.
arXiv Detail & Related papers (2023-12-02T05:13:28Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- Answer Candidate Type Selection: Text-to-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs [62.20354845651949]
We present a novel approach which works on top of the pre-trained Text-to-Text QA system to address this issue.
Our simple yet effective method performs filtering and re-ranking of generated candidates based on their types, derived from the Wikidata "instance_of" property.
arXiv Detail & Related papers (2023-10-10T20:49:43Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements to be selected for the reader, under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- UNIQORN: Unified Question Answering over RDF Knowledge Graphs and Natural Language Text [20.1784368017206]
Question answering over RDF data like knowledge graphs has been greatly advanced.
The IR and NLP communities have addressed QA over text, but such systems barely utilize semantic data and knowledge.
This paper presents a method for complex questions that can seamlessly operate over a mixture of RDF datasets and text corpora.
arXiv Detail & Related papers (2021-08-19T10:50:52Z)
- ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks [47.12013005600986]
We present a large-scale compositional question answering dataset containing more than 120k human-labeled questions.
To tackle the ComQA problem, we propose a hierarchical graph neural network, which represents the document from low-level words to high-level sentences.
Our proposed model achieves a significant improvement over previous machine reading comprehension methods and pre-training methods.
arXiv Detail & Related papers (2021-01-16T08:23:27Z)
- TSQA: Tabular Scenario Based Question Answering [14.92495213480887]
Scenario-based question answering (SQA) has attracted increasing research interest.
To support the study of this task, we construct GeoTSQA.
We extend state-of-the-art MRC methods with TTGen, a novel table-to-text generator.
arXiv Detail & Related papers (2021-01-14T02:00:33Z)
- Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)
- Semantic Graphs for Generating Deep Questions [98.5161888878238]
We propose a novel framework which first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN).
On the HotpotQA deep-question centric dataset, our model greatly improves performance over questions requiring reasoning over multiple facts, leading to state-of-the-art performance.
arXiv Detail & Related papers (2020-04-27T10:52:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.