ChatPRCS: A Personalized Support System for English Reading
Comprehension based on ChatGPT
- URL: http://arxiv.org/abs/2309.12808v2
- Date: Mon, 25 Sep 2023 11:01:16 GMT
- Title: ChatPRCS: A Personalized Support System for English Reading
Comprehension based on ChatGPT
- Authors: Xizhe Wang, Yihua Zhong, Changqin Huang, and Xiaodi Huang
- Abstract summary: This paper presents a novel personalized support system for reading comprehension, referred to as ChatPRCS.
ChatPRCS employs methods including reading comprehension proficiency prediction, question generation, and automatic evaluation.
- Score: 3.847982502219679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a common approach to learning English, reading comprehension primarily
entails reading articles and answering related questions. However, the
complexity of designing effective exercises means that students typically encounter
standardized questions, which are difficult to align with individual
learners' reading comprehension abilities. By leveraging the advanced
capabilities offered by large language models, exemplified by ChatGPT, this
paper presents a novel personalized support system for reading comprehension,
referred to as ChatPRCS, based on the Zone of Proximal Development theory.
ChatPRCS employs methods including reading comprehension proficiency
prediction, question generation, and automatic evaluation, among others, to
enhance reading comprehension instruction. First, we develop a new algorithm
that can predict learners' reading comprehension abilities using their
historical data as the foundation for generating questions at an appropriate
level of difficulty. Second, a series of new ChatGPT prompt patterns is
proposed to address two key aspects of reading comprehension objectives:
question generation and automated evaluation. These patterns further improve
the quality of generated questions. Finally, by integrating personalized
ability and reading comprehension prompt patterns, ChatPRCS is systematically
validated through experiments. Empirical results demonstrate that it provides
learners with high-quality reading comprehension questions that are broadly
aligned with expert-crafted questions at a statistical level.
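
The abstract names a proficiency-prediction algorithm but does not describe it here. A minimal sketch of the general idea, estimating a learner's current ability from historical exercise results and targeting questions one level above it in the spirit of the Zone of Proximal Development, is shown below; the exponentially weighted average, the 1-5 difficulty scale, and the function names are illustrative assumptions, not the authors' implementation.

```python
from typing import List

MAX_DIFFICULTY = 5  # hypothetical difficulty scale: 1 (easiest) to 5 (hardest)

def predict_proficiency(historical_scores: List[float], alpha: float = 0.4) -> float:
    """Estimate current proficiency in [0, 1] from past exercise scores.

    An exponentially weighted moving average weights recent exercises more
    heavily; this is an assumed stand-in for the paper's prediction algorithm.
    """
    if not historical_scores:
        return 0.5  # no history: assume a mid-level learner
    estimate = historical_scores[0]
    for score in historical_scores[1:]:
        estimate = alpha * score + (1 - alpha) * estimate
    return estimate

def target_difficulty(proficiency: float) -> int:
    """Map proficiency to a difficulty one step above the learner's current
    level, echoing the Zone of Proximal Development."""
    current_level = round(proficiency * (MAX_DIFFICULTY - 1)) + 1
    return min(current_level + 1, MAX_DIFFICULTY)

# Example: a learner whose recent scores are improving.
scores = [0.45, 0.55, 0.60, 0.72]          # fraction answered correctly per session
proficiency = predict_proficiency(scores)  # roughly 0.61
print(target_difficulty(proficiency))      # -> 4: ask slightly harder questions next
```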
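
The abstract likewise only names the prompt patterns for question generation and automated evaluation. One way such patterns could be wired to the ChatGPT API is sketched below; the prompt wording, the difficulty parameter, and the 0-10 scoring rubric are assumptions for illustration, not the patterns proposed in the paper.

```python
from openai import OpenAI  # requires the openai package and an API key in the environment

client = OpenAI()

QUESTION_PROMPT = (
    "You are an English reading comprehension tutor.\n"
    "Article:\n{article}\n\n"
    "Write one multiple-choice comprehension question at difficulty level "
    "{difficulty} (1-5) with four options, and mark the correct answer."
)

EVALUATION_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Student answer: {student_answer}\n\n"
    "Score the student answer from 0 to 10 and give one sentence of feedback."
)

def chat(prompt: str) -> str:
    """Send a single-turn prompt to ChatGPT and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def generate_question(article: str, difficulty: int) -> str:
    return chat(QUESTION_PROMPT.format(article=article, difficulty=difficulty))

def evaluate_answer(question: str, reference: str, student_answer: str) -> str:
    return chat(EVALUATION_PROMPT.format(
        question=question, reference=reference, student_answer=student_answer))
```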
Related papers
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implications of such questions for reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Analyzing Multiple-Choice Reading and Listening Comprehension Tests [0.0]
This work investigates how much of a contextual passage needs to be read in multiple-choice reading comprehension tests, and in listening comprehension tests based on conversation transcriptions, in order to work out the correct answer.
We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage.
arXiv Detail & Related papers (2023-07-03T14:55:02Z)
- Improving Reading Comprehension Question Generation with Data Augmentation and Overgenerate-and-rank [3.854023945160742]
Automated answer-aware reading comprehension question generation has significant potential to scale up learner support in educational activities.
One key technical challenge in this setting is that there can be multiple questions, sometimes very different from each other, with the same answer.
We propose 1) a data augmentation method that enriches the training dataset with diverse questions given the same context and answer and 2) an overgenerate-and-rank method to select the best question from a pool of candidates (a generic sketch of this recipe appears after this list).
arXiv Detail & Related papers (2023-06-15T04:23:25Z)
- Elaborative Simplification as Implicit Questions Under Discussion [51.17933943734872]
This paper proposes to view elaborative simplification through the lens of the Question Under Discussion (QUD) framework.
We show that explicitly modeling QUD provides essential understanding of elaborative simplification and how the elaborations connect with the rest of the discourse.
arXiv Detail & Related papers (2023-05-17T17:26:16Z)
- SkillQG: Learning to Generate Question for Reading Comprehension Assessment [54.48031346496593]
We present a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
We first frame the comprehension type of questions based on a hierarchical skill-based schema, then formulate SkillQG as a skill-conditioned question generator.
Empirical results demonstrate that SkillQG outperforms baselines in terms of quality, relevance, and skill-controllability.
arXiv Detail & Related papers (2023-05-08T14:40:48Z)
- ChatGPT in the Classroom: An Analysis of Its Strengths and Weaknesses for Solving Undergraduate Computer Science Questions [5.962828109329824]
ChatGPT is an AI language model developed by OpenAI that can understand and generate human-like text.
There is concern that students may leverage ChatGPT to complete take-home assignments and exams and obtain favorable grades without genuinely acquiring knowledge.
arXiv Detail & Related papers (2023-04-28T17:26:32Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask [3.470121495099]
We study Question Generation (QG) for reading comprehension where inferential questions are critical.
We propose a two-step model (HTA-WTA) that takes advantage of previous datasets.
We show that the HTA-WTA model tests for strong SCRS by asking deep inferential questions.
arXiv Detail & Related papers (2022-04-06T15:52:24Z)
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
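
The overgenerate-and-rank entry above follows a generic recipe: generate many candidate questions for the same context and answer, score each candidate, and keep the best one. A minimal sketch is shown below; the candidate generator and the scoring function are placeholders, since the cited paper's actual ranking model is not described in this summary.

```python
from typing import Callable, List

def overgenerate_and_rank(
    context: str,
    answer: str,
    generate: Callable[[str, str], str],      # placeholder question generator
    score: Callable[[str, str, str], float],  # placeholder ranker; higher is better
    n_candidates: int = 10,
) -> str:
    """Generate several candidate questions and return the highest-scoring one."""
    candidates: List[str] = [generate(context, answer) for _ in range(n_candidates)]
    return max(candidates, key=lambda q: score(q, context, answer))
```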
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.