LearnLens: LLM-Enabled Personalised, Curriculum-Grounded Feedback with Educators in the Loop
- URL: http://arxiv.org/abs/2507.04295v3
- Date: Fri, 18 Jul 2025 11:37:12 GMT
- Title: LearnLens: LLM-Enabled Personalised, Curriculum-Grounded Feedback with Educators in the Loop
- Authors: Runcong Zhao, Artem Bobrov, Jiazheng Li, Yulan He
- Abstract summary: LearnLens is a modular, LLM-based system that generates personalised, curriculum-aligned feedback in science education. It addresses key challenges in existing systems, offering scalable, high-quality feedback that empowers both teachers and students.
- Score: 14.41622209349575
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Effective feedback is essential for student learning but is time-intensive for teachers. We present LearnLens, a modular, LLM-based system that generates personalised, curriculum-aligned feedback in science education. LearnLens comprises three components: (1) an error-aware assessment module that captures nuanced reasoning errors; (2) a curriculum-grounded generation module that uses a structured, topic-linked memory chain rather than traditional similarity-based retrieval, improving relevance and reducing noise; and (3) an educator-in-the-loop interface for customisation and oversight. LearnLens addresses key challenges in existing systems, offering scalable, high-quality feedback that empowers both teachers and students.
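To make component (2) of the pipeline above concrete, the following is a minimal, hypothetical sketch of a topic-linked memory chain: feedback context is gathered by following explicit curriculum links from the assessed topic rather than by embedding similarity, then combined with error tags from the assessment module into a generation prompt. All names (TopicNode, CurriculumMemoryChain, build_feedback_prompt) and the prompt format are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a curriculum-grounded "memory chain": context is retrieved
# by following explicit topic links in the curriculum instead of similarity search.
# Every class and function name here is illustrative, not from the LearnLens paper.
from dataclasses import dataclass, field


@dataclass
class TopicNode:
    """One curriculum topic with its teaching points and links to related topics."""
    topic_id: str
    key_points: list[str]
    linked_topics: list[str] = field(default_factory=list)


class CurriculumMemoryChain:
    """Stores topics keyed by ID and walks topic links to build a feedback context."""

    def __init__(self, nodes: list[TopicNode]):
        self.nodes = {n.topic_id: n for n in nodes}

    def context_for(self, topic_id: str, depth: int = 1) -> list[str]:
        """Collect key points from the assessed topic and its linked topics."""
        seen, frontier, points = set(), [topic_id], []
        for _ in range(depth + 1):
            next_frontier = []
            for tid in frontier:
                if tid in seen or tid not in self.nodes:
                    continue
                seen.add(tid)
                node = self.nodes[tid]
                points.extend(node.key_points)
                next_frontier.extend(node.linked_topics)
            frontier = next_frontier
        return points


def build_feedback_prompt(student_answer: str, error_tags: list[str],
                          context: list[str]) -> str:
    """Assemble an LLM prompt from detected errors and curriculum context."""
    return (
        "Curriculum points:\n- " + "\n- ".join(context) + "\n\n"
        f"Detected reasoning errors: {', '.join(error_tags)}\n"
        f"Student answer: {student_answer}\n"
        "Write personalised, curriculum-aligned feedback."
    )


if __name__ == "__main__":
    chain = CurriculumMemoryChain([
        TopicNode("photosynthesis",
                  ["Light energy is converted to chemical energy."],
                  linked_topics=["respiration"]),
        TopicNode("respiration", ["Glucose is oxidised to release energy."]),
    ])
    prompt = build_feedback_prompt(
        student_answer="Plants get their food from the soil.",
        error_tags=["misconception: soil as food source"],
        context=chain.context_for("photosynthesis"),
    )
    print(prompt)
```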
Related papers
- Automated Feedback on Student-Generated UML and ER Diagrams Using Large Language Models [39.58317527488534]
We introduce DUET (Diagrammatic & ER Tutor), a prototype of an LLM-based tool. It converts a reference diagram and a student-submitted diagram into textual representations and provides structured feedback based on the differences. It uses a multi-stage LLM pipeline to compare diagrams and generate reflective feedback. It enables analytical insights for educators, aiming to foster self-directed learning and inform instructional strategies.
arXiv Detail & Related papers (2025-07-31T11:49:01Z) - Partnering with AI: A Pedagogical Feedback System for LLM Integration into Programming Education [19.441958600393342]
This paper introduces a novel framework for large language model (LLM)-driven feedback generation. Our findings suggest that teachers believe LLMs, when aligned with the framework, can effectively support students. However, we found several limitations, such as an inability to adapt feedback to dynamic classroom contexts.
arXiv Detail & Related papers (2025-07-01T03:48:48Z) - Learning-Time Encoding Shapes Unlearning in LLMs [25.48677069802298]
We investigate how learning-time choices in knowledge encoding impact the effectiveness of unlearning factual knowledge. Our results suggest that learning-time knowledge encoding may play a central role in enabling reliable post-hoc unlearning.
arXiv Detail & Related papers (2025-06-18T02:42:02Z) - LLM-Driven Personalized Answer Generation and Evaluation [0.0]
Personalization, adapted to the needs of individual learners, is crucial for enhancing the learning experience. This paper explores the potential of Large Language Models (LLMs) to generate personalized answers to learners' questions.
arXiv Detail & Related papers (2025-06-12T15:46:15Z) - From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning [76.09281171131941]
Large language models (LLMs) can transform education, but their optimization for direct question-answering often undermines effective pedagogy. We propose an online reinforcement learning (RL)-based alignment framework that can quickly adapt LLMs into effective tutors.
arXiv Detail & Related papers (2025-05-21T15:00:07Z) - Can Large Language Models Match Tutoring System Adaptivity? A Benchmarking Study [0.0]
Large Language Models (LLMs) hold promise as dynamic instructional aids. Yet, it remains unclear whether LLMs can replicate the adaptivity of intelligent tutoring systems (ITS).
arXiv Detail & Related papers (2025-04-07T23:57:32Z) - Position: LLMs Can be Good Tutors in Foreign Language Education [87.88557755407815]
We argue that large language models (LLMs) have the potential to serve as effective tutors in foreign language education (FLE). Specifically, LLMs can play three critical roles: (1) as data enhancers, improving the creation of learning materials or serving as student simulations; (2) as task predictors, supporting learner assessment or optimizing learning pathways; and (3) as agents, enabling personalized and inclusive education.
arXiv Detail & Related papers (2025-02-08T06:48:49Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components. A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss. A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal. An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education [1.052788652996288]
This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education.
Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives.
arXiv Detail & Related papers (2024-05-01T20:43:06Z) - Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [46.667783153759636]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL). Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z) - YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning [63.63840740526497]
We investigate how instruction tuning adjusts pre-trained models with a focus on intrinsic changes.
The impact of instruction tuning is then studied by comparing the explanations derived from the pre-trained and instruction-tuned models.
Our findings reveal three significant impacts of instruction tuning.
arXiv Detail & Related papers (2023-09-30T21:16:05Z) - Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is structured into three inter-connected core processes: interaction, reflection, and reaction (a minimal illustrative sketch follows this list).
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
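As an illustration of the chaining pattern in the last entry (Empowering Private Tutoring by Chaining Large Language Models), here is a minimal, self-contained sketch in which three LLM-backed steps, interaction, reflection, and reaction, read and update a shared memory object. The call_llm stub and all class and function names are assumptions made for this sketch, not the paper's actual tool chain.

```python
# Hypothetical illustration of an interaction-reflection-reaction loop over a
# dynamically updated memory. All names and the call_llm stub are placeholders.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real client here."""
    return f"[LLM response to: {prompt[:60]}...]"


@dataclass
class TutoringMemory:
    """Dynamically updated record of the session and the learner's state."""
    dialogue: list[str] = field(default_factory=list)
    learner_profile: dict[str, str] = field(default_factory=dict)


def interaction(student_msg: str, memory: TutoringMemory) -> str:
    """Respond to the student using the accumulated dialogue history."""
    memory.dialogue.append(f"Student: {student_msg}")
    reply = call_llm("Tutor the student.\n" + "\n".join(memory.dialogue))
    memory.dialogue.append(f"Tutor: {reply}")
    return reply


def reflection(memory: TutoringMemory) -> None:
    """Summarise what the exchange reveals about the learner and store it."""
    summary = call_llm("Summarise the learner's misconceptions:\n"
                       + "\n".join(memory.dialogue))
    memory.learner_profile["misconceptions"] = summary


def reaction(memory: TutoringMemory) -> str:
    """Plan the next teaching move from the updated learner profile."""
    return call_llm(f"Plan the next exercise given: {memory.learner_profile}")


if __name__ == "__main__":
    mem = TutoringMemory()
    print(interaction("Why do heavier objects fall faster?", mem))
    reflection(mem)
    print(reaction(mem))
```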