Analyzing Multiple-Choice Reading and Listening Comprehension Tests
- URL: http://arxiv.org/abs/2307.01076v1
- Date: Mon, 3 Jul 2023 14:55:02 GMT
- Title: Analyzing Multiple-Choice Reading and Listening Comprehension Tests
- Authors: Vatsal Raina, Adian Liusie, Mark Gales
- Abstract summary: This work investigates how much of a contextual passage needs to be read in multiple-choice reading comprehension tests, and in listening comprehension tests based on conversation transcriptions, in order to work out the correct answer.
We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple-choice reading and listening comprehension tests are an important
part of language assessment. Content creators for standard educational tests
need to carefully curate questions that assess the comprehension abilities of
candidates taking the tests. However, recent work has shown that a large number
of questions in general multiple-choice reading comprehension datasets can be
answered without comprehension, by leveraging world knowledge instead. This
work investigates how much of a contextual passage needs to be read in
multiple-choice reading comprehension tests, and in listening comprehension
tests based on conversation transcriptions, to work out the correct answer. We find that
automated reading comprehension systems can perform significantly better than
random with partial or even no access to the context passage. These findings
offer an approach for content creators to automatically capture the trade-off
between comprehension and world knowledge required for their proposed
questions.
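As a rough illustration of this kind of probing (this is not the authors' released code), the sketch below scores each answer option with an off-the-shelf multiple-choice reading comprehension model while progressively truncating the passage; the checkpoint name, helper functions, and toy example are assumptions for demonstration only.

```python
# Minimal sketch: probe how much of the context passage a multiple-choice
# reader needs by truncating the passage before scoring the options.
# The checkpoint name and toy data below are illustrative placeholders.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

MODEL_NAME = "LIAMF-USP/roberta-large-finetuned-race"  # assumed RACE-style checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_NAME)
model.eval()


def predict(context: str, question: str, options: list[str]) -> int:
    """Return the index of the option the model scores highest."""
    # One (context, question + option) pair per answer option.
    enc = tokenizer(
        [context] * len(options),
        [f"{question} {opt}" for opt in options],
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    # Reshape to (1, num_options, seq_len) as expected by *ForMultipleChoice models.
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**enc).logits  # shape (1, num_options)
    return int(logits.argmax(dim=-1))


def truncate(context: str, fraction: float) -> str:
    """Keep only the first `fraction` of the passage's words."""
    words = context.split()
    return " ".join(words[: int(len(words) * fraction)])


# Toy example: compare predictions with the full passage, a partial passage,
# and no passage at all. Above-chance accuracy with little or no context
# suggests the question can be answered from world knowledge alone.
context = ("The ferry to the island leaves every hour on the hour and "
           "the crossing takes twenty minutes.")
question = "How long does the crossing take?"
options = ["Five minutes", "Twenty minutes", "One hour", "Two hours"]

for frac in (1.0, 0.5, 0.0):
    pred = predict(truncate(context, frac), question, options)
    print(f"context fraction {frac:.1f} -> predicted option: {options[pred]}")
```

Run over a whole test set, the gap between full-context and no-context accuracy gives a simple estimate of the comprehension versus world-knowledge trade-off described above.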
Related papers
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Assessing Distractors in Multiple-Choice Tests [10.179963650540056]
We propose metrics for the quality of distractors in multiple-choice reading comprehension tests.
Specifically, we define quality in terms of the incorrectness, plausibility and diversity of the distractor options.
arXiv Detail & Related papers (2023-11-08T09:37:09Z)
- ChatPRCS: A Personalized Support System for English Reading Comprehension based on ChatGPT [3.847982502219679]
This paper presents a novel personalized support system for reading comprehension, referred to as ChatPRCS.
ChatPRCS employs methods including reading comprehension proficiency prediction, question generation, and automatic evaluation.
arXiv Detail & Related papers (2023-09-22T11:46:44Z)
- Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask [3.470121495099]
We study Question Generation (QG) for reading comprehension where inferential questions are critical.
We propose a two-step model (HTA-WTA) that takes advantage of previous datasets.
We show that the HTA-WTA model tests for strong SCRS by asking deep inferential questions.
arXiv Detail & Related papers (2022-04-06T15:52:24Z)
- Multi Document Reading Comprehension [0.0]
Reading Comprehension (RC) is the task of answering a question from a given passage or a set of passages.
Recent work in Natural Language Processing (NLP) has shown that machines can be given the ability to process the text in a passage.
arXiv Detail & Related papers (2022-01-05T16:54:48Z)
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
- STARC: Structured Annotations for Reading Comprehension [23.153841344989143]
We present STARC, a new annotation framework for assessing reading comprehension with multiple choice questions.
The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English.
arXiv Detail & Related papers (2020-04-30T14:08:50Z)
- Knowledgeable Dialogue Reading Comprehension on Key Turns [84.1784903043884]
Multi-choice machine reading comprehension (MRC) requires models to choose the correct answer from candidate options given a passage and a question.
Our research focuses on dialogue-based MRC, where the passages are multi-turn dialogues.
This setting suffers from two challenges: the answer selection decision is made without the support of latently helpful commonsense, and the multi-turn context may contain considerable irrelevant information.
arXiv Detail & Related papers (2020-04-29T07:04:43Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)