Closing the Loop: An Instructor-in-the-Loop AI Assistance System for Supporting Student Help-Seeking in Programming Education
- URL: http://arxiv.org/abs/2510.14457v1
- Date: Thu, 16 Oct 2025 08:57:05 GMT
- Title: Closing the Loop: An Instructor-in-the-Loop AI Assistance System for Supporting Student Help-Seeking in Programming Education
- Authors: Tung Phung, Heeryung Choi, Mengyan Wu, Christopher Brooks, Sumit Gulwani, Adish Singla,
- Abstract summary: We present a hybrid help framework that integrates AI-generated hints with an escalation mechanism. We observed that out of the total 673 AI-generated hints, students rated 146 (22%) as unhelpful. This finding suggests that when AI support fails, even instructors with expertise may need to pay greater attention to avoid making mistakes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Timely and high-quality feedback is essential for effective learning in programming courses, yet providing such support at scale remains a challenge. While AI-based systems offer scalable and immediate help, their responses can occasionally be inaccurate or insufficient. Human instructors, in contrast, may bring more valuable expertise but are limited in time and availability. To address these limitations, we present a hybrid help framework that integrates AI-generated hints with an escalation mechanism, allowing students to request feedback from instructors when AI support falls short. This design leverages the strengths of AI for scale and responsiveness while reserving instructor effort for moments of greatest need. We deployed this tool in a data science programming course with 82 students. We observed that, of the 673 AI-generated hints, students rated 146 (22%) as unhelpful. Among those, only 16 (11%) were escalated to instructors. A qualitative investigation of instructor responses showed that those feedback instances were incorrect or insufficient roughly half of the time. This finding suggests that when AI support fails, even instructors with expertise may need to pay greater attention to avoid making mistakes. We will publicly release the tool for broader adoption and to enable further studies in other classrooms. Our work contributes a practical approach to scaling high-quality support and informs future efforts to effectively integrate AI and humans in education.
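As an illustrative sketch only (not the authors' released tool), the escalation workflow the abstract describes, where AI answers first and instructors are notified only when a student flags the AI hint as unhelpful, might look like this; all function and field names here are hypothetical:

```python
# Hypothetical sketch of the hybrid help workflow: AI hint first,
# escalation to an instructor only on a student-flagged failure.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class HelpRequest:
    student_id: str
    question: str
    hints: List[str] = field(default_factory=list)
    escalated: bool = False

def handle_request(
    req: HelpRequest,
    generate_ai_hint: Callable[[str], str],   # scalable, immediate AI support
    ask_student_rating: Callable[[str], str], # "helpful" or "unhelpful"
    notify_instructor: Callable[[HelpRequest], None],
) -> HelpRequest:
    """Reserve instructor effort for moments of greatest need."""
    hint = generate_ai_hint(req.question)
    req.hints.append(hint)
    if ask_student_rating(hint) == "unhelpful":
        # The student may escalate only after rating the AI hint unhelpful.
        req.escalated = True
        notify_instructor(req)
    return req
```

The callbacks stand in for the AI model, the rating UI, and the instructor queue, so the control flow can be tested independently of any particular LLM backend.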
Related papers
- An Experience Report on a Pedagogically Controlled, Curriculum-Constrained AI Tutor for SE Education [4.976713294177978]
This paper presents the design and pilot evaluation of RockStartIT Tutor, an AI-powered assistant developed for a digital programming and computational thinking course within the RockStartIT initiative. Powered by GPT-4 via OpenAI's Assistant API, the tutor employs a novel prompting strategy and a modular, semantically tagged knowledge base to deliver context-aware, personalized, and curriculum-constrained support for secondary school students.
arXiv Detail & Related papers (2025-12-08T12:54:37Z) - Training LLM Agents to Empower Humans [67.80021254324294]
We propose a new approach to tuning assistive language models based on maximizing the human's empowerment. Our empowerment-maximizing method, Empower, only requires offline text data. We show that agents trained with Empower increase the success rate of a simulated human programmer on challenging coding questions by an average of 192%.
arXiv Detail & Related papers (2025-10-15T16:09:33Z) - New Kid in the Classroom: Exploring Student Perceptions of AI Coding Assistants [0.0]
This study investigates how AI tools are shaping the experiences of novice programmers in an introductory programming course. Students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work unaided, pointing to potential overreliance and gaps in foundational knowledge transfer.
arXiv Detail & Related papers (2025-06-26T05:59:23Z) - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority. We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv Detail & Related papers (2025-06-09T18:37:14Z) - Sakshm AI: Advancing AI-Assisted Coding Education for Engineering Students in India Through Socratic Tutoring and Comprehensive Feedback [1.9841192743072902]
Existing AI tools for programming education struggle with key challenges, including the lack of Socratic guidance. This study examines 1170 registered participants, analyzing platform logs, engagement trends, and problem-solving behavior to assess Sakshm AI's impact.
arXiv Detail & Related papers (2025-03-16T12:13:29Z) - Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [176.39275404745098]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions. GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions. Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z) - Learning Task Decomposition to Assist Humans in Competitive Programming [90.4846613669734]
We introduce a novel objective for learning task decomposition, termed assistive value (AssistV). We collect a dataset of human repair experiences on different decomposed solutions. In a 177-hour human study, our method enables non-experts to solve 33.3% more problems, speeds them up by 3.3x, and empowers them to match unassisted experts.
arXiv Detail & Related papers (2024-06-07T03:27:51Z) - Desirable Characteristics for AI Teaching Assistants in Programming Education [2.9131215715703385]
Digital teaching assistants have emerged as an appealing and scalable way to provide instant, equitable, round-the-clock support.
Our results highlight that students value such tools for their ability to provide instant, engaging support.
They also expressed a strong preference for features that enable them to retain autonomy in their learning journey.
arXiv Detail & Related papers (2024-05-23T05:03:49Z) - CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education [1.052788652996288]
This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education.
Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives.
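A minimal sketch of the pipeline pattern this summary names (intent classification, question decomposition, and retrieval over course materials before generation); everything below, from the toy keyword classifier to the sample course snippets, is an assumption for illustration and not taken from the CourseAssist paper:

```python
# Toy retrieval-augmented pipeline: classify intent, decompose the
# question, and retrieve matching course material for each sub-question.
# All names and data here are hypothetical stand-ins.
COURSE_MATERIALS = {
    "recursion": "A function that calls itself must have a base case.",
    "grading": "Assignments are weighted 60%; exams 40%.",
}

def classify_intent(question: str) -> str:
    # Toy keyword-based intent classifier; a real system would use an LLM.
    return "logistics" if "exam" in question.lower() else "concept"

def decompose(question: str) -> list:
    # Toy decomposition: split a compound question on " and ".
    return [part.strip() for part in question.split(" and ")]

def retrieve(subquestion: str) -> str:
    # Return the course snippet whose topic appears in the sub-question.
    for topic, text in COURSE_MATERIALS.items():
        if topic in subquestion.lower():
            return text
    return ""

def answer(question: str) -> dict:
    intent = classify_intent(question)
    contexts = [retrieve(sq) for sq in decompose(question)]
    # A real system would condition an LLM on these retrieved contexts;
    # here we simply return them to show the pipeline's shape.
    return {"intent": intent, "contexts": [c for c in contexts if c]}
```

The point of the shape is that generation is grounded in retrieved course material rather than the model's general knowledge, which is how such systems keep answers aligned with specific learning objectives.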
arXiv Detail & Related papers (2024-05-01T20:43:06Z) - The Responsible Development of Automated Student Feedback with Generative AI [6.008616775722921]
Recent advancements in AI, particularly with large language models (LLMs), present new opportunities to deliver scalable, repeatable, and instant feedback. However, implementing these technologies also introduces a host of ethical considerations that must be thoughtfully addressed. One of the core advantages of AI systems is their ability to automate routine and mundane tasks, potentially freeing up human educators for more nuanced work. However, the ease of automation risks a "tyranny of the majority", where the diverse needs of minority or unique learners are overlooked.
arXiv Detail & Related papers (2023-08-29T14:29:57Z) - Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide the curriculum reinforcement learning results towards a preferred performance level that is neither too hard nor too easy via learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows reinforcement learning performance can successfully adjust in sync with the human desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.