Beyond Flesch-Kincaid: Prompt-based Metrics Improve Difficulty Classification of Educational Texts
- URL: http://arxiv.org/abs/2405.09482v2
- Date: Thu, 6 Jun 2024 07:40:58 GMT
- Title: Beyond Flesch-Kincaid: Prompt-based Metrics Improve Difficulty Classification of Educational Texts
- Authors: Donya Rooein, Paul Rottger, Anastassia Shaitarova, Dirk Hovy
- Abstract summary: Using large language models (LLMs) for educational applications like dialogue-based teaching is a hot topic.
Current static metrics for text difficulty, such as the Flesch-Kincaid Reading Ease score, are known to be crude and brittle.
We introduce and evaluate a new set of Prompt-based metrics for text difficulty.
- Score: 20.933548500888595
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Using large language models (LLMs) for educational applications like dialogue-based teaching is a hot topic. Effective teaching, however, requires teachers to adapt the difficulty of content and explanations to the education level of their students. Even the best LLMs today struggle to do this well. If we want to improve LLMs on this adaptation task, we need to be able to measure adaptation success reliably. However, current static metrics for text difficulty, like the Flesch-Kincaid Reading Ease score, are known to be crude and brittle. We therefore introduce and evaluate a new set of prompt-based metrics for text difficulty. Based on a user study, we create prompt-based metrics as inputs for LLMs. They leverage LLMs' general language understanding capabilities to capture more abstract and complex features than static metrics can. Regression experiments show that adding our prompt-based metrics significantly improves text difficulty classification over static metrics alone. Our results demonstrate the promise of using LLMs to evaluate text adaptation to different education levels.
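To make the contrast concrete, the sketch below implements the classic Flesch Reading Ease formula (206.835 − 1.015 × words-per-sentence − 84.6 × syllables-per-word) next to a hypothetical prompt-based difficulty query. The formula and its constants are standard; the syllable counter is a naive heuristic, and the prompt template is an illustrative assumption, not the actual prompts from the paper's user study.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups.
    # Real readability tools use pronunciation dictionaries instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_reading_ease(text: str) -> float:
    # Standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores mean easier text; scores depend only on surface statistics.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))


def difficulty_prompt(text: str,
                      levels=("elementary", "middle school",
                              "high school", "university")) -> str:
    # Hypothetical prompt-based metric: ask an LLM to judge difficulty directly,
    # capturing semantic and conceptual complexity that the formula above cannot.
    return ("Rate the reading difficulty of the following text as one of: "
            + ", ".join(levels) + ".\n\nText:\n" + text
            + "\n\nAnswer with exactly one level.")
```

A static score for "The cat sat. The dog ran." comes out near the top of the ease scale, yet the same formula would also rate a terse but conceptually dense sentence as "easy", which is the brittleness the prompt-based approach is meant to address.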
Related papers
- Soft Prompting for Unlearning in Large Language Models [11.504012974208466]
This work focuses on investigating machine unlearning for Large Language Models motivated by data protection regulations.
We propose a framework, Soft Prompting for Unlearning (SPUL), that learns prompt tokens that can be appended to an arbitrary query to induce unlearning.
arXiv Detail & Related papers (2024-06-17T19:11:40Z) - RepEval: Effective Text Evaluation with LLM Representation [54.07909112633993]
We introduce RepEval, the first metric leveraging the projection of LLM representations for evaluation.
RepEval requires minimal sample pairs for training, and through simple prompt modifications, it can easily transition to various tasks.
Results on ten datasets from three tasks demonstrate the high effectiveness of our method.
arXiv Detail & Related papers (2024-04-30T13:50:55Z) - Do LLMs Implicitly Determine the Suitable Text Difficulty for Users? [29.6000895693808]
We show that large language models can implicitly handle text difficulty between user input and generated text.
Some LLMs can surpass humans in handling text difficulty, and instruction-tuning is important for this ability.
arXiv Detail & Related papers (2024-02-22T11:16:23Z) - When LLMs Meet Cunning Texts: A Fallacy Understanding Benchmark for Large Language Models [59.84769254832941]
We propose a FaLlacy Understanding Benchmark (FLUB) containing cunning texts that are easy for humans to understand but difficult for models to grasp.
Specifically, the cunning texts that FLUB focuses on mainly consist of the tricky, humorous, and misleading texts collected from the real internet environment.
Based on FLUB, we investigate the performance of multiple representative and advanced LLMs.
arXiv Detail & Related papers (2024-02-16T22:12:53Z) - Think from Words (TFW): Initiating Human-Like Cognition in Large Language Models Through Think from Words for Japanese Text-level Classification [0.0]
"Think from Words" (TFW) initiates the comprehension process at the word level and then extends it to encompass the entire text.
"TFW with Extra word-level information" (TFW Extra) augments comprehension with additional word-level data.
Our findings shed light on the impact of various word-level information types on LLMs' text comprehension.
arXiv Detail & Related papers (2023-12-06T12:34:46Z) - AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations [52.43593893122206]
AlignedCoT is an in-context learning technique for invoking Large Language Models.
It achieves consistent and correct step-wise prompts in zero-shot scenarios.
We conduct experiments on mathematical reasoning and commonsense reasoning.
arXiv Detail & Related papers (2023-11-22T17:24:21Z) - Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs [67.56087611675606]
Large language models (LLMs) generate summaries that are factually inconsistent with original articles.
These hallucinations are challenging to detect through traditional methods.
We propose an adversarial DEcoupling method to disentangle the abilities of LLMs (DECENT).
arXiv Detail & Related papers (2023-10-30T08:40:16Z) - TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z) - Eva-KELLM: A New Benchmark for Evaluating Knowledge Editing of LLMs [54.22416829200613]
Eva-KELLM is a new benchmark for evaluating knowledge editing of large language models.
Experimental results indicate that the current methods for knowledge editing using raw documents are not effective in yielding satisfactory results.
arXiv Detail & Related papers (2023-08-19T09:17:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.