Exploring the Role of Automated Feedback in Programming Education: A Systematic Literature Review
- URL: http://arxiv.org/abs/2602.00089v1
- Date: Fri, 23 Jan 2026 05:20:37 GMT
- Title: Exploring the Role of Automated Feedback in Programming Education: A Systematic Literature Review
- Authors: Yeonji Jung, Yunseo Lee, Jiyeong Bae, DoYong Kim, Heungsoo Choi, Minji Kang, Unggi Lee,
- Abstract summary: This systematic literature review synthesizes 61 empirical studies published by September 2024. Findings reveal that most systems are fully automated and embedded within online platforms. Few systems offer support for higher-order learning processes, interactive components, or learner agency.
- Score: 0.08376229126363229
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automated feedback systems have become increasingly integral to programming education, where learners engage in iterative cycles of code construction, testing, and refinement. Despite their wider integration into practice and technical advances in AI, research in this area remains fragmented, lacking synthesis across technological and instructional dimensions. This systematic literature review synthesizes 61 empirical studies published by September 2024, offering a conceptually grounded analysis of automated feedback systems across five dimensions: system architecture, pedagogical function, interaction mechanism, contextual deployment, and evaluation approach. Findings reveal that most systems are fully automated, embedded within online platforms, and primarily focused on error detection and code correctness. While recent developments incorporate adaptive features and large language models to enable more personalized and interactive feedback, few systems offer support for higher-order learning processes, interactive components, or learner agency. Moreover, evaluation practices tend to emphasize short-term performance gains, with limited attention to long-term outcomes or instructional integration. These findings call for a reimagining of automated feedback not as a technical add-on for error correction, but as a pedagogical scaffold that supports deeper, adaptive, and interactive learning.
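The feedback style the review finds dominant (error detection and code correctness checking) can be illustrated with a toy test runner. This is a minimal sketch, not any surveyed system; the function names, test cases, and message formats are all invented for the example.

```python
# Minimal sketch of test-based, correctness-focused automated feedback.
# All names here are illustrative, not taken from any surveyed system.
from typing import Callable, List, Tuple

def run_feedback(solution: Callable[[int], int],
                 cases: List[Tuple[int, int]]) -> List[str]:
    """Run a submission against test cases and collect feedback messages."""
    feedback = []
    for arg, expected in cases:
        try:
            got = solution(arg)
        except Exception as exc:  # error detection: surface runtime faults
            feedback.append(f"square({arg}) raised {type(exc).__name__}: {exc}")
            continue
        if got != expected:  # correctness check against the expected output
            feedback.append(f"square({arg}) returned {got}, expected {expected}")
    return feedback or ["All tests passed."]

# A buggy hypothetical student submission: uses + instead of *
def square(x: int) -> int:
    return x + x

print(run_feedback(square, [(2, 4), (3, 9)]))
```

Note that feedback of this kind only reports what is wrong; it does not scaffold how to reason about the fix, which is precisely the gap the review highlights.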
Related papers
- Advances and Frontiers of LLM-based Issue Resolution in Software Engineering: A Comprehensive Survey [59.3507264893654]
Issue resolution is a complex Software Engineering task integral to real-world development. Benchmarks like SWE-bench have revealed this task as profoundly difficult for large language models. This paper presents a systematic survey of this emerging domain.
arXiv Detail & Related papers (2026-01-15T18:55:03Z) - Automated Feedback Generation for Undergraduate Mathematics: Development and Evaluation of an AI Teaching Assistant [0.0]
We present a system that processes free-form natural language input, handles a wide range of edge cases, and comments on the technical correctness of submitted proofs. We show that, by the metrics we evaluate, the quality of the feedback generated is comparable to that produced by human experts. A version of our tool is deployed on the Imperial mathematics homework platform Lambda.
arXiv Detail & Related papers (2026-01-06T23:02:22Z) - A Survey on Feedback Types in Automated Programming Assessment Systems [3.9845307287664973]
This study investigates how different feedback mechanisms in APASs are perceived by students and how effective they are in supporting problem-solving. Results indicate that while students rate unit test feedback as the most helpful, AI-generated feedback leads to significantly better performance.
arXiv Detail & Related papers (2025-10-21T09:08:22Z) - AI-driven formative assessment and adaptive learning in data-science education: Evaluating an LLM-powered virtual teaching assistant [6.874351093155318]
VITA (Virtual Teaching Assistants) is an adaptive distributed learning platform that embeds a large language model (LLM)-powered bot (BotCaptain). The paper describes an end-to-end data pipeline that transforms chat logs into Experience API (xAPI) statements, and instructor dashboards that surface outliers for just-in-time intervention. Future work will refine the platform's adaptive intelligence and examine applicability across varied educational settings.
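The chat-log-to-xAPI step of such a pipeline might look roughly like the sketch below. The verb URI follows the public ADL verb vocabulary, but the helper name, course URL, and field values are invented for illustration and do not come from the VITA paper.

```python
# Hedged sketch of turning one chat-log turn into an Experience API (xAPI)
# statement. xAPI statements follow an actor / verb / object structure
# defined by the ADL specification; everything else here is hypothetical.
from datetime import datetime, timezone

def chat_turn_to_xapi(user_email: str, question: str, course_id: str) -> dict:
    return {
        "actor": {"mbox": f"mailto:{user_email}", "objectType": "Agent"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/asked",
            "display": {"en-US": "asked"},
        },
        "object": {
            "id": f"http://example.org/courses/{course_id}",  # hypothetical IRI
            "definition": {"description": {"en-US": question}},
            "objectType": "Activity",
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = chat_turn_to_xapi("student@example.org", "What is overfitting?", "ds101")
print(stmt["verb"]["display"]["en-US"])
```

Once chat turns are normalized into statements like this, dashboard queries (e.g. surfacing outliers) reduce to filtering and aggregating a uniform record format.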
arXiv Detail & Related papers (2025-09-17T11:27:45Z) - Enhancing tutoring systems by leveraging tailored promptings and domain knowledge with Large Language Models [2.5362697136900563]
AI-driven tools like ChatGPT and Intelligent Tutoring Systems (ITSs) have enhanced learning experiences through personalisation and flexibility. ITSs can adapt to individual learning needs and provide customised feedback based on a student's performance, cognitive state, and learning path. Our research aims to address these gaps by integrating skill-aligned feedback via Retrieval Augmented Generation (RAG) into prompt engineering for Large Language Models (LLMs).
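A skill-aligned RAG prompt in the spirit of this entry could be assembled as follows. The keyword-lookup retriever and the knowledge-base contents are stand-ins made up for the example; real systems typically use vector search over a curated domain corpus.

```python
# Illustrative sketch of skill-aligned retrieval-augmented prompting:
# retrieve domain knowledge for the skill a student is practicing, then
# fold it into the LLM prompt. All contents here are hypothetical.
KNOWLEDGE_BASE = {
    "loops": "A for-loop repeats a block once per element of a sequence.",
    "recursion": "A recursive function calls itself on a smaller input.",
}

def retrieve(skill: str) -> str:
    """Toy retriever: exact skill lookup instead of vector similarity."""
    return KNOWLEDGE_BASE.get(skill, "")

def build_prompt(skill: str, student_answer: str) -> str:
    context = retrieve(skill)
    return (
        f"Domain knowledge: {context}\n"
        f"Student answer: {student_answer}\n"
        f"Give feedback aligned to the skill '{skill}'."
    )

print(build_prompt("loops", "I used while True without a break."))
```

The design point is that retrieval is keyed on the *skill* being assessed rather than on the raw student text, which is how the feedback stays aligned with the learning objective.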
arXiv Detail & Related papers (2025-05-02T02:30:39Z) - A Survey on (M)LLM-Based GUI Agents [62.57899977018417]
Graphical User Interface (GUI) Agents have emerged as a transformative paradigm in human-computer interaction. Recent advances in large language models and multimodal learning have revolutionized GUI automation across desktop, mobile, and web platforms. This survey identifies key technical challenges, including accurate element localization, effective knowledge retrieval, long-horizon planning, and safety-aware execution control.
arXiv Detail & Related papers (2025-03-27T17:58:31Z) - LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models [50.259006481656094]
We present a novel interactive application aimed towards understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z) - IMTLab: An Open-Source Platform for Building, Evaluating, and Diagnosing Interactive Machine Translation Systems [94.39110258587887]
We present IMTLab, an open-source end-to-end interactive machine translation (IMT) system platform.
IMTLab treats the whole interactive translation process as a task-oriented dialogue with a human-in-the-loop setting.
arXiv Detail & Related papers (2023-10-17T11:29:04Z) - Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is organized into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z) - Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic Approach [40.06500618820166]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automatize the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z) - Applying Machine Learning in Self-Adaptive Systems: A Systematic Literature Review [15.953995937484176]
There is currently no systematic overview of the use of machine learning in self-adaptive systems.
We focus on self-adaptive systems that are based on a traditional Monitor-Analyze-Plan-Execute (MAPE) feedback loop.
The research questions are centred on the problems that motivate the use of machine learning in self-adaptive systems, the key engineering aspects of learning in self-adaptation, and open challenges.
arXiv Detail & Related papers (2021-03-06T13:45:59Z)
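The MAPE feedback loop named in the last entry can be sketched as a small class; the threshold test stands in for the learned component the survey discusses, and every name, field, and value below is illustrative rather than drawn from any surveyed system.

```python
# Minimal sketch of a Monitor-Analyze-Plan-Execute (MAPE) feedback loop.
# A trained classifier could replace the threshold check in analyze();
# the system dict and action names are hypothetical.
class MapeLoop:
    def __init__(self, threshold: float):
        self.threshold = threshold  # stand-in for a learned model

    def monitor(self, system: dict) -> float:
        """Observe the managed system's current load."""
        return system["load"]

    def analyze(self, load: float) -> bool:
        """Decide whether adaptation is needed (learnable step)."""
        return load > self.threshold

    def plan(self, overloaded: bool) -> str:
        """Choose an adaptation action."""
        return "scale_up" if overloaded else "no_op"

    def execute(self, system: dict, action: str) -> dict:
        """Apply the chosen action to the managed system."""
        if action == "scale_up":
            system["replicas"] += 1
        return system

    def step(self, system: dict) -> dict:
        return self.execute(system, self.plan(self.analyze(self.monitor(system))))

loop = MapeLoop(threshold=0.8)
print(loop.step({"load": 0.9, "replicas": 1}))
```

Framing the loop as four explicit stages is what makes the survey's question tractable: machine learning can be slotted into any one stage (here, Analyze) without redesigning the rest of the loop.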