A Survey of LLM-Based Applications in Programming Education: Balancing Automation and Human Oversight
- URL: http://arxiv.org/abs/2510.03719v1
- Date: Sat, 04 Oct 2025 07:46:20 GMT
- Title: A Survey of LLM-Based Applications in Programming Education: Balancing Automation and Human Oversight
- Authors: Griffin Pitts, Anurata Prabha Hridi, Arun-Balajiee Lekshmi-Narayanan,
- Abstract summary: This survey synthesizes recent work on large language model (LLM) applications in programming education across three focal areas: formative code feedback, assessment, and knowledge modeling. We identify recurring design patterns in how these tools are applied and find that interventions are most effective when educator expertise complements model output through human-in-the-loop oversight, scaffolding, and evaluation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Novice programmers benefit from timely, personalized support that addresses individual learning gaps, yet the availability of instructors and teaching assistants is inherently limited. Large language models (LLMs) present opportunities to scale such support, though their effectiveness depends on how well technical capabilities are aligned with pedagogical goals. This survey synthesizes recent work on LLM applications in programming education across three focal areas: formative code feedback, assessment, and knowledge modeling. We identify recurring design patterns in how these tools are applied and find that interventions are most effective when educator expertise complements model output through human-in-the-loop oversight, scaffolding, and evaluation. Fully automated approaches are often constrained in capturing the pedagogical nuances of programming education, although human-in-the-loop designs and course specific adaptation offer promising directions for future improvement. Future research should focus on improving transparency, strengthening alignment with pedagogy, and developing systems that flexibly adapt to the needs of varied learning contexts.
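The survey's central finding is that LLM feedback works best with human-in-the-loop oversight rather than full automation. A minimal sketch of that pattern is below; it is purely illustrative (not from the surveyed papers), with a stub in place of a real LLM call and all function names invented for this example.

```python
# Illustrative sketch of human-in-the-loop feedback: the model drafts
# formative feedback, and an educator approves or replaces the draft
# before it reaches the student. All names here are hypothetical.

def draft_feedback(student_code: str) -> str:
    """Stand-in for an LLM call that drafts formative code feedback."""
    if "range(len(" in student_code:
        return "Consider iterating over the list directly instead of by index."
    return "Looks reasonable; consider adding a test for the empty-list case."

def educator_review(draft: str, approve) -> str:
    """Oversight step: the educator approves the draft or supplies a revision."""
    verdict = approve(draft)
    return draft if verdict is True else verdict  # a string verdict replaces the draft

def feedback_pipeline(student_code: str, approve) -> str:
    return educator_review(draft_feedback(student_code), approve)

# Educator approves the automated draft as-is:
fb1 = feedback_pipeline("for i in range(len(xs)): print(xs[i])", lambda d: True)

# Educator overrides with course-specific wording:
fb2 = feedback_pipeline(
    "print(sum(xs))",
    lambda d: "Nice use of sum(); now handle non-numeric input.",
)
```

The design choice this sketch highlights is that the educator callback sits between model output and student delivery, which is where the surveyed human-in-the-loop systems place their oversight.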
Related papers
- Multi-Agent Learning Path Planning via LLMs [10.288666777827578]
This study proposes a novel Multi-Agent Learning Path Planning (MALPP) framework powered by large language models (LLMs). The framework includes three task-specific agents: a learner analytics agent, a path planning agent, and a reflection agent. Experiments conducted on the MOOCX dataset using seven LLMs show that MALPP significantly outperforms baseline models in path quality, knowledge sequence consistency, and cognitive load alignment.
arXiv Detail & Related papers (2026-01-24T07:13:08Z) - Using LLMs and Essence to Support Software Practice Adoption [0.3609538870261841]
This study explores the integration of Essence, a standard and thinking framework for managing software engineering practices, with large language models (LLMs). The proposed system consistently outperforms its baseline counterpart in domain-specific tasks.
arXiv Detail & Related papers (2025-08-22T14:59:35Z) - Partnering with AI: A Pedagogical Feedback System for LLM Integration into Programming Education [21.37197118335639]
This paper introduces a novel framework for large language model (LLM)-driven feedback generation. Our findings suggest that teachers consider that LLMs, when aligned with the framework, can effectively support students. However, we found several limitations, such as an inability to adapt feedback to dynamic classroom contexts.
arXiv Detail & Related papers (2025-07-01T03:48:48Z) - Design of AI-Powered Tool for Self-Regulation Support in Programming Education [4.171227316909729]
Large language model (LLM) tools have demonstrated their potential to deliver high-quality assistance. However, many of these tools operate independently from institutional Learning Management Systems. This isolation limits the ability to leverage learning materials and exercise context for generating tailored, context-aware feedback.
arXiv Detail & Related papers (2025-04-03T22:47:33Z) - LLM Post-Training: A Deep Dive into Reasoning Large Language Models [131.10969986056]
Large Language Models (LLMs) have transformed the natural language processing landscape and brought to life diverse applications. Post-training methods enable LLMs to refine their knowledge, improve reasoning, enhance factual accuracy, and align more effectively with user intents and ethical considerations.
arXiv Detail & Related papers (2025-02-28T18:59:54Z) - LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within intelligent tutoring systems (ITS). It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset. GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, a framework for better data construction and model tuning. For insufficient data usage, we incorporate strategies such as Chain-of-Thought prompting and anti-induction. For rigid behavior patterns, we design the tuning process and introduce automated DPO to enhance the specificity and dynamism of the models' personalities.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update [69.59482029810198]
CLOVA is a Closed-Loop Visual Assistant that operates within a framework encompassing inference, reflection, and learning phases.
Results demonstrate that CLOVA surpasses existing tool-usage methods by 5% in visual question answering and multiple-image reasoning, by 10% in knowledge tagging, and by 20% in image editing.
arXiv Detail & Related papers (2023-12-18T03:34:07Z) - A Large Language Model Approach to Educational Survey Feedback Analysis [0.0]
This paper assesses the potential for the large language models (LLMs) GPT-4 and GPT-3.5 to aid in deriving insight from education feedback surveys.
arXiv Detail & Related papers (2023-09-29T17:57:23Z) - Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is divided into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z) - Aligning Large Language Models with Human: A Survey [53.6014921995006]
Large Language Models (LLMs) trained on extensive textual corpora have emerged as leading solutions for a broad array of Natural Language Processing (NLP) tasks.
Despite their notable performance, these models are prone to limitations such as misunderstanding human instructions, generating potentially biased content, or producing factually incorrect information.
This survey presents a comprehensive overview of alignment technologies that address these limitations.
arXiv Detail & Related papers (2023-07-24T17:44:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.