Reflection-Satisfaction Tradeoff: Investigating Impact of Reflection on Student Engagement with AI-Generated Programming Hints
- URL: http://arxiv.org/abs/2512.04630v1
- Date: Thu, 04 Dec 2025 10:01:33 GMT
- Title: Reflection-Satisfaction Tradeoff: Investigating Impact of Reflection on Student Engagement with AI-Generated Programming Hints
- Authors: Heeryung Choi, Tung Phung, Mengyan Wu, Adish Singla, Christopher Brooks
- Abstract summary: One promising approach involves pairing AI-generated hints with reflection prompts. This study investigates the interplay between AI-generated hints and different designs of reflection prompts in an online programming course.
- Score: 16.757426904379212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI tools, such as AI-generated hints, are increasingly integrated into programming education to offer timely, personalized support. However, little is known about how to effectively leverage these hints while ensuring autonomous and meaningful learning. One promising approach involves pairing AI-generated hints with reflection prompts that ask students to review and analyze their learning when they request hints. This study investigates the interplay between AI-generated hints and different designs of reflection prompts in an online introductory programming course. We conducted a two-trial field experiment. In Trial 1, students were randomly assigned to receive prompts either before or after receiving hints, or no prompt at all. Each prompt also targeted one of three SRL phases: planning, monitoring, and evaluation. In Trial 2, we examined two types of prompt guidance: directed (offering more explicit and structured guidance) and open (offering more general and less constrained guidance). Findings show that students in the before-hint (RQ1), planning (RQ2), and directed (RQ3) prompt groups produced higher-quality reflections but reported lower satisfaction with AI-generated hints than those in other conditions. Immediate performance did not differ across conditions. This negative relationship between reflection quality and hint satisfaction aligns with previous work on student mental effort and satisfaction. Our results highlight the need to reconsider how AI models are trained and evaluated for education, as prioritizing user satisfaction can undermine deeper learning.
Related papers
- Thinking Forward and Backward: Multi-Objective Reinforcement Learning for Retrieval-Augmented Reasoning [137.33138614095435]
Retrieval-augmented generation (RAG) has proven to be effective in mitigating hallucinations in large language models. Recent efforts have incorporated search-based interactions into RAG, enabling iterative reasoning with real-time retrieval. We propose Bi-RAR, a novel retrieval-augmented reasoning framework that evaluates each intermediate step jointly in both forward and backward directions.
arXiv Detail & Related papers (2025-11-12T08:29:39Z) - Directive, Metacognitive or a Blend of Both? A Comparison of AI-Generated Feedback Types on Student Engagement, Confidence, and Outcomes [1.8839714322633465]
This study presents a semester-long randomised controlled trial with 329 students in an introductory design and programming course using an adaptive educational platform. Participants were assigned to receive directive, metacognitive, or hybrid AI-generated feedback that blended elements of both directive and metacognitive feedback. Results showed that revision behaviour differed across feedback conditions, with Hybrid prompting the most revisions compared to Directive and Metacognitive.
arXiv Detail & Related papers (2025-10-22T15:31:21Z) - Bridging Gaps Between Student and Expert Evaluations of AI-Generated Programming Hints [21.254611931654132]
We study mismatches in perceived hint quality from students' and experts' perspectives. We propose and discuss preliminary results on potential methods to bridge these gaps.
arXiv Detail & Related papers (2025-09-03T12:38:35Z) - New Kid in the Classroom: Exploring Student Perceptions of AI Coding Assistants [0.0]
This study investigates how AI tools are shaping the experiences of novice programmers in an introductory programming course. Students perceived AI tools as helpful for grasping code concepts and boosting their confidence during the initial development phase. However, a noticeable difficulty emerged when students were asked to work unaided, pointing to potential overreliance and gaps in foundational knowledge transfer.
arXiv Detail & Related papers (2025-06-26T05:59:23Z) - Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models [67.87579664988199]
TON is a two-stage training strategy for vision-language models (VLMs). It introduces a think-or-not format that serves as a cold start for selective reasoning. TON can reduce the completion length by up to 90% compared to vanilla GRPO.
arXiv Detail & Related papers (2025-05-22T16:13:29Z) - Supporting Students' Reading and Cognition with AI [12.029238454394445]
We analyzed text from 124 sessions with AI tools to understand users' reading processes and cognitive engagement. We propose design implications for future AI reading-support systems, including structured scaffolds for lower-level cognitive tasks. We advocate for adaptive, human-in-the-loop features that allow students and instructors to tailor their reading experiences with AI.
arXiv Detail & Related papers (2025-04-07T17:51:27Z) - Resurrecting Socrates in the Age of AI: A Study Protocol for Evaluating a Socratic Tutor to Support Research Question Development in Higher Education [0.0]
This protocol lays out a study grounded in constructivist learning theory to evaluate a novel AI-based Socratic Tutor. The tutor engages students through iterative, reflective questioning, aiming to promote System 2 thinking. This study aims to advance the understanding of how generative AI can be pedagogically aligned to support, not replace, human cognition.
arXiv Detail & Related papers (2025-04-05T00:49:20Z) - Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective [77.94874338927492]
OpenAI has claimed that the main technique behind o1 is reinforcement learning. This paper analyzes the roadmap to achieving o1 from the perspective of reinforcement learning.
arXiv Detail & Related papers (2024-12-18T18:24:47Z) - Bidirectional Awareness Induction in Autoregressive Seq2Seq Models [47.82947878753809]
Bidirectional Awareness Induction (BAI) is a training method that leverages a subset of elements in the network, the Pivots, to perform bidirectional learning without breaking the autoregressive constraints.
In particular, we observed an increase of up to 2.4 CIDEr in Image-Captioning, 4.96 BLEU in Neural Machine Translation, and 1.16 ROUGE in Text Summarization compared to the respective baselines.
arXiv Detail & Related papers (2024-08-25T23:46:35Z) - Students' Perceptions and Preferences of Generative Artificial Intelligence Feedback for Programming [15.372316943507506]
We generated automated feedback using the ChatGPT API for four lab assignments in an introductory computer science class.
Students perceived the feedback as aligning well with formative feedback guidelines established by Shute.
Students generally expected specific and corrective feedback with sufficient code examples, but had diverged opinions on the tone of the feedback.
arXiv Detail & Related papers (2023-12-17T22:26:53Z) - SCP: Soft Conditional Prompt Learning for Aerial Video Action Recognition [48.456059482589495]
We present a new learning approach, Soft Conditional Prompt Learning (SCP), which leverages the strengths of prompt learning for aerial video action recognition.
Our approach is designed to predict the action of each agent by helping the models focus on the descriptions or instructions associated with actions in the input videos for aerial/robot visual perception.
arXiv Detail & Related papers (2023-05-21T11:51:09Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Learning to Ask Conversational Questions by Optimizing Levenshtein Distance [83.53855889592734]
We introduce a Reinforcement Iterative Sequence Editing (RISE) framework that optimizes the minimum Levenshtein distance (MLD) through explicit editing actions.
RISE is able to pay attention to tokens that are related to conversational characteristics.
Experimental results on two benchmark datasets show that RISE significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-06-30T08:44:19Z)
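For context on the MLD objective in the entry above: the Levenshtein (edit) distance counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another, and is typically computed with dynamic programming. A minimal illustrative sketch (not the RISE framework itself, which learns editing actions via reinforcement learning):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum edits (insert, delete, substitute) to turn a into b."""
    # prev[j] holds the distance between a[:i-1] and b[:j] (previous DP row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[len(b)]

# Classic example: kitten -> sitting requires 3 edits.
print(levenshtein("kitten", "sitting"))  # → 3
```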
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.