Assertion Enhanced Few-Shot Learning: Instructive Technique for Large
Language Models to Generate Educational Explanations
- URL: http://arxiv.org/abs/2312.03122v3
- Date: Sat, 20 Jan 2024 15:02:20 GMT
- Title: Assertion Enhanced Few-Shot Learning: Instructive Technique for Large
Language Models to Generate Educational Explanations
- Authors: Tasmia Shahriar, Kelly Ramos and Noboru Matsuda
- Abstract summary: Human educators possess an intrinsic ability to anticipate and seek educational explanations from students.
We aim to imbue Intelligent Tutoring Systems with this ability using the few-shot learning capability of Large Language Models.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Human educators possess an intrinsic ability to anticipate and seek
educational explanations from students, which drives them to pose
thought-provoking questions when students cannot articulate these explanations
independently. We aim to imbue Intelligent Tutoring Systems with this ability
using the few-shot learning capability of Large Language Models. Our work proposes
a novel prompting technique, Assertion Enhanced Few-Shot Learning, to
facilitate the generation of accurate, detail-oriented educational
explanations. Our central hypothesis is that, in the educational domain, few-shot
demonstrations are necessary but not a sufficient condition for quality
explanation generation. We conducted a study involving 12 in-service teachers,
comparing our approach to Traditional Few-Shot Learning. The results show that
Assertion Enhanced Few-Shot Learning improves explanation accuracy by 15% and
yields higher-quality explanations, as evaluated by teachers. We also conducted a
qualitative ablation study to isolate the impact of assertions and to provide
educator-friendly prompting guidelines for generating explanations in their
domain of interest.
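The core idea of the technique can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' released code: it assumes that "assertions" are explicit natural-language constraints the generated explanation must satisfy, stated before the usual few-shot demonstrations, and all function names and example content below are illustrative only.

```python
# Hypothetical sketch: Assertion Enhanced Few-Shot prompting vs.
# Traditional Few-Shot prompting. Prompt structure is an assumption
# based on the abstract, not the paper's exact format.

def build_traditional_prompt(demos, question):
    """Traditional few-shot: demonstrations followed by the new question."""
    parts = []
    for demo_question, demo_explanation in demos:
        parts.append(
            f"Student question: {demo_question}\n"
            f"Tutor explanation: {demo_explanation}"
        )
    parts.append(f"Student question: {question}\nTutor explanation:")
    return "\n\n".join(parts)

def build_assertion_enhanced_prompt(assertions, demos, question):
    """Assertion-enhanced: the same demonstrations, preceded by explicit
    assertions that the generated explanation must satisfy."""
    assertion_block = "Assertions the explanation must satisfy:\n" + "\n".join(
        f"- {a}" for a in assertions
    )
    return assertion_block + "\n\n" + build_traditional_prompt(demos, question)

# Illustrative demonstration and assertions for a math-tutoring domain.
demos = [
    ("Why do we divide both sides by 2 in 2x = 10?",
     "Dividing both sides by 2 keeps the equation balanced and isolates x, "
     "giving x = 5."),
]
assertions = [
    "Every step must preserve equality.",
    "The explanation must state the goal of the step (isolating the variable).",
]
prompt = build_assertion_enhanced_prompt(
    assertions, demos, "Why do we subtract 3 from both sides in x + 3 = 7?"
)
print(prompt)
```

The resulting string would then be sent to an LLM as the prompt; the hypothesis tested in the paper is that the assertion block, not just the demonstrations, drives explanation quality.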
Related papers
- Explainable Few-shot Knowledge Tracing (2024-05-23)
  We propose a cognition-guided framework that can track student knowledge from a few student records while providing natural language explanations.
  Experimental results on three widely used datasets show that LLMs can perform comparably or superior to competitive deep knowledge tracing methods.
- Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions (2024-04-04)
  Intelligent tutoring systems (ITSs) imitate human tutors and aim to provide customized instructions or feedback to learners.
  With the emergence of generative artificial intelligence, large language models (LLMs) enable these systems to hold complex and coherent conversational interactions.
  We investigate how pedagogical instructions facilitate scaffolding in ITSs by conducting a case study on guiding children to describe images for language learning.
- YODA: Teacher-Student Progressive Learning for Language Models (2024-01-28)
  This paper introduces YODA, a teacher-student progressive learning framework.
  It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
  Experiments show that training LLaMA2 with data from YODA improves SFT with significant performance gains.
- Metacognition-Enhanced Few-Shot Prompting With Positive Reinforcement (2023-12-14)
  We propose a novel metacognition-enhanced few-shot prompting technique, which guides large language models to reflect on their thought processes.
  We introduce positive reinforcement into our metacognition-enhanced few-shot prompting to promote the few-shot learning of large language models.
- Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models (2023-09-19)
  We present and evaluate a framework called "ILearner-LLM" to scaffold the task of automated explanation generation.
  The framework generates high-quality, student-aligned explanations by iteratively feeding the quality rating score from the evaluation model back into the instruction prompt.
  Our findings represent a promising path toward enriching the learnersourcing experience for students.
- Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization (2023-06-15)
  We show that teacher LLMs can indeed intervene on student reasoning to improve student performance.
  We also demonstrate that in multi-turn interactions, teacher explanations generalize and students learn from the explained data.
  We verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
- Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task (2023-04-11)
  Reinforcement learning could be a key tool to reduce the development cost and improve the effectiveness of intelligent tutoring software.
  We show that deep reinforcement learning can be used to provide adaptive pedagogical support to students learning about the concept of volume.
- Complementary Explanations for Effective In-Context Learning (2022-11-25)
  Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
  This work aims to better understand the mechanisms by which explanations are used for in-context learning.
- Explanations from Large Language Models Make Small Reasoners Better (2022-10-13)
  We show that our method can consistently and significantly outperform finetuning baselines across different settings.
  As a side benefit, human evaluation shows that our method can generate high-quality explanations to justify its predictions.
- Evaluating Explanations: How much do explanations from the teacher aid students? (2020-12-01)
  We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
  Unlike many prior proposals for evaluating explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
This list is automatically generated from the titles and abstracts of the papers in this site.