Digital Comprehensibility Assessment of Simplified Texts among Persons with Intellectual Disabilities
- URL: http://arxiv.org/abs/2402.13094v1
- Date: Tue, 20 Feb 2024 15:37:08 GMT
- Title: Digital Comprehensibility Assessment of Simplified Texts among Persons with Intellectual Disabilities
- Authors: Andreas Säuberli, Franz Holzknecht, Patrick Haller, Silvana Deilen, Laura Schiffl, Silvia Hansen-Schirra, Sarah Ebling
- Abstract summary: We conducted an evaluation study of text comprehensibility including participants with and without intellectual disabilities reading German texts on a tablet computer.
We explored four different approaches to measuring comprehensibility: multiple-choice comprehension questions, perceived difficulty ratings, response time, and reading speed.
For the target group of persons with intellectual disabilities, comprehension questions emerged as the most reliable measure, while analyzing reading speed provided valuable insights into participants' reading behavior.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text simplification refers to the process of increasing the comprehensibility
of texts. Automatic text simplification models are most commonly evaluated by
experts or crowdworkers instead of the primary target groups of simplified
texts, such as persons with intellectual disabilities. We conducted an
evaluation study of text comprehensibility including participants with and
without intellectual disabilities reading unsimplified, automatically and
manually simplified German texts on a tablet computer. We explored four
different approaches to measuring comprehensibility: multiple-choice
comprehension questions, perceived difficulty ratings, response time, and
reading speed. The results revealed significant variations in these
measurements, depending on the reader group and whether the text had undergone
automatic or manual simplification. For the target group of persons with
intellectual disabilities, comprehension questions emerged as the most reliable
measure, while analyzing reading speed provided valuable insights into
participants' reading behavior.
Related papers
- Evaluating LLMs for Targeted Concept Simplification for Domain-Specific Texts [53.421616210871704]
Lack of context and unfamiliarity with difficult concepts are major reasons why adult readers struggle with domain-specific text.
We introduce "targeted concept simplification," a simplification task for rewriting text to help readers comprehend text containing unfamiliar concepts.
We benchmark the performance of open-source and commercial LLMs and a simple dictionary baseline on this task.
arXiv Detail & Related papers (2024-10-28T05:56:51Z)
- Exploring Large Language Models to generate Easy to Read content [1.474723404975345]
Easy to Read and Plain Language guidelines aim to simplify complex texts.
Standardizing these guidelines remains challenging and often involves manual processes.
This work presents an exploratory investigation into leveraging Artificial Intelligence (AI) and Natural Language Processing (NLP) approaches to systematically simplify Spanish texts into Easy to Read formats.
arXiv Detail & Related papers (2024-07-29T14:30:39Z)
- Automating Easy Read Text Segmentation [2.7309692684728617]
Easy Read text is one of the main forms of access to information for people with reading difficulties.
One of the key characteristics of this type of text is the requirement to split sentences into smaller grammatical segments.
We study novel methods for the task, leveraging masked and generative language models, along with constituent parsing.
arXiv Detail & Related papers (2024-06-17T12:25:25Z)
- InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification [60.10193972862099]
This work proposes a framework to characterize and recover simplification-induced information loss in the form of question-and-answer pairs.
QA pairs are designed to help readers deepen their knowledge of a text.
arXiv Detail & Related papers (2024-01-29T19:00:01Z)
- Generating Summaries with Controllable Readability Levels [67.34087272813821]
Several factors affect the readability level, such as the complexity of the text, its subject matter, and the reader's background knowledge.
Current text generation approaches lack refined control, resulting in texts that are not customized to readers' proficiency levels.
We develop three text generation techniques for controlling readability: instruction-based readability control, reinforcement learning to minimize the gap between requested and observed readability, and a decoding approach that uses look-ahead to estimate the readability of upcoming decoding steps.
arXiv Detail & Related papers (2023-10-16T17:46:26Z)
- LC-Score: Reference-less estimation of Text Comprehension Difficulty [0.0]
We present LC-Score, a simple approach for training a reference-free text comprehension metric for any French text.
Our objective is to quantitatively capture the extent to which a text conforms to the Langage Clair (LC, Clear Language) guidelines.
We explore two approaches: (i) using linguistically motivated indicators used to train statistical models, and (ii) neural learning directly from text leveraging pre-trained language models.
arXiv Detail & Related papers (2023-10-04T11:49:37Z)
- Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability [11.50011780498048]
This paper presents a novel gaze-driven sentence simplification system designed to enhance reading comprehension.
Our system incorporates machine learning models tailored to individual learners, combining eye gaze features and linguistic features to assess sentence comprehension.
arXiv Detail & Related papers (2023-09-30T12:18:31Z)
- ChatPRCS: A Personalized Support System for English Reading Comprehension based on ChatGPT [3.847982502219679]
This paper presents a novel personalized support system for reading comprehension, referred to as ChatPRCS.
ChatPRCS employs methods including reading comprehension proficiency prediction, question generation, and automatic evaluation.
arXiv Detail & Related papers (2023-09-22T11:46:44Z)
- PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts [76.18347405302728]
This study applies a wide range of adversarial textual attacks targeting prompts at multiple levels: character, word, sentence, and semantic.
The adversarial prompts are then employed in diverse tasks including sentiment analysis, natural language inference, reading comprehension, machine translation, and math problem-solving.
Our findings demonstrate that contemporary Large Language Models are not robust to adversarial prompts.
arXiv Detail & Related papers (2023-06-07T15:37:00Z)
- Elaborative Simplification as Implicit Questions Under Discussion [51.17933943734872]
This paper proposes to view elaborative simplification through the lens of the Question Under Discussion (QUD) framework.
We show that explicitly modeling QUD provides an essential understanding of elaborative simplification and of how the elaborations connect with the rest of the discourse.
arXiv Detail & Related papers (2023-05-17T17:26:16Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.