One-Topic-Doesn't-Fit-All: Transcreating Reading Comprehension Test for Personalized Learning
- URL: http://arxiv.org/abs/2511.09135v1
- Date: Thu, 13 Nov 2025 01:34:44 GMT
- Title: One-Topic-Doesn't-Fit-All: Transcreating Reading Comprehension Test for Personalized Learning
- Authors: Jieun Han, Daniel Lee, Haneul Yoo, Jinsung Yoon, Junyeong Park, Suin Kim, So-Yeon Ahn, Alice Oh
- Abstract summary: We propose a novel approach to generating personalized English reading comprehension tests tailored to students' interests. We generate new passages and multiple-choice reading comprehension questions that are linguistically similar to the original passages but semantically aligned with individual learners' interests. Our results show students learning with personalized reading passages demonstrate improved comprehension and motivation retention compared to those learning with non-personalized materials.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized learning has gained attention in English as a Foreign Language (EFL) education, where engagement and motivation play crucial roles in reading comprehension. We propose a novel approach to generating personalized English reading comprehension tests tailored to students' interests. We develop a structured content transcreation pipeline using OpenAI's gpt-4o, where we start with the RACE-C dataset, and generate new passages and multiple-choice reading comprehension questions that are linguistically similar to the original passages but semantically aligned with individual learners' interests. Our methodology integrates topic extraction, question classification based on Bloom's taxonomy, linguistic feature analysis, and content transcreation to enhance student engagement. We conduct a controlled experiment with EFL learners in South Korea to examine the impact of interest-aligned reading materials on comprehension and motivation. Our results show students learning with personalized reading passages demonstrate improved comprehension and motivation retention compared to those learning with non-personalized materials.
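The transcreation step described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names, the prompt wording, and the chosen linguistic features (average sentence length, Flesch-Kincaid grade) are hypothetical and not taken from the authors' actual pipeline, which uses OpenAI's gpt-4o on the RACE-C dataset.

```python
# Hypothetical sketch of the content-transcreation step: a prompt is built
# that asks an LLM to swap the passage's topic for the learner's interest
# while holding measured linguistic features roughly constant.

def build_transcreation_prompt(passage: str, topic: str, interest: str,
                               features: dict) -> str:
    """Compose an instruction for an LLM (e.g. gpt-4o) to rewrite a
    passage about `topic` into one about the learner's `interest`,
    preserving the listed linguistic properties."""
    feature_spec = ", ".join(f"{name} close to {value}"
                             for name, value in features.items())
    return (
        f"Transcreate the passage below, changing its topic from "
        f"'{topic}' to '{interest}'.\n"
        f"Preserve these linguistic properties: {feature_spec}.\n"
        "Then write multiple-choice comprehension questions at the same "
        "Bloom's-taxonomy levels as the original questions.\n\n"
        f"Passage:\n{passage}"
    )

# Example features one might extract from the original passage
features = {"avg_sentence_length": 18.2, "flesch_kincaid_grade": 9.1}
prompt = build_transcreation_prompt(
    passage="The history of jazz began in New Orleans...",
    topic="jazz history",
    interest="e-sports",
    features=features,
)
```

The resulting prompt would then be sent to the LLM; the point of the sketch is only that the learner's interest and the passage-level linguistic constraints are encoded together in one instruction.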
Related papers
- IntrEx: A Dataset for Modeling Engagement in Educational Conversations [7.526860155587907]
IntrEx is the first large dataset annotated for interestingness and expected interestingness in teacher-student interactions. We employ a rigorous annotation process with over 100 second-language learners. We investigate whether large language models (LLMs) can predict human interestingness judgments.
arXiv Detail & Related papers (2025-09-08T13:07:35Z)
- Question Generation for Assessing Early Literacy Reading Comprehension [7.209603871896803]
We propose a novel approach for generating comprehension questions geared to K-2 English learners. Our method ensures complete coverage of the underlying material and adaptation to the learner's specific proficiencies.
arXiv Detail & Related papers (2025-07-30T06:27:02Z)
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Decomposed Prompting: Probing Multilingual Linguistic Structure Knowledge in Large Language Models [54.58989938395976]
We introduce a decomposed prompting approach for sequence labeling tasks. We test our method on the Universal Dependencies part-of-speech tagging dataset for 38 languages.
arXiv Detail & Related papers (2024-02-28T15:15:39Z)
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [66.79173000135717]
We apply this work to teaching two Indian languages, Kannada and Marathi, which do not have well-developed resources for second language learning.
We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary).
We enlist language educators from schools in North America to perform a manual evaluation; they find the materials have potential for use in their lesson preparation and learner evaluation.
arXiv Detail & Related papers (2023-10-27T18:17:29Z)
- Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability [11.50011780498048]
This paper presents a novel gaze-driven sentence simplification system designed to enhance reading comprehension.
Our system incorporates machine learning models tailored to individual learners, combining eye gaze features and linguistic features to assess sentence comprehension.
arXiv Detail & Related papers (2023-09-30T12:18:31Z)
- Large Language Models for Difficulty Estimation of Foreign Language Content with Application to Language Learning [1.4392208044851977]
We use large language models to help learners enhance their proficiency in a foreign language.
Our work centers on French content, but our approach is readily transferable to other languages.
arXiv Detail & Related papers (2023-09-10T21:23:09Z)
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [91.49622922938681]
We present a framework that automatically discovers and visualizes descriptions of different aspects of grammar.
Specifically, we extract descriptions from a natural text corpus that answer questions about morphosyntax and semantics.
We apply this method to teaching the Indian languages Kannada and Marathi, which, unlike English, do not have well-developed pedagogical resources.
arXiv Detail & Related papers (2022-06-10T14:52:22Z)
- A Framework for Learning Assessment through Multimodal Analysis of Reading Behaviour and Language Comprehension [0.0]
This dissertation shows how different skills could be measured and scored automatically.
We also demonstrate, using example experiments on multiple forms of learners' responses, how frequent reading practice could affect multimodal skill variables.
arXiv Detail & Related papers (2021-10-22T17:48:03Z)
- Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this being integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.