Student Engagement with GenAI's Tutoring Feedback: A Mixed Methods Study
- URL: http://arxiv.org/abs/2509.22974v1
- Date: Fri, 26 Sep 2025 22:17:20 GMT
- Title: Student Engagement with GenAI's Tutoring Feedback: A Mixed Methods Study
- Authors: Sven Jacobs, Jan Haas, Natalie Kiesler
- Abstract summary: The research aims to: (1) identify what students think when they engage with the tutoring feedback components, and (2) explore the relations between the feedback components, students' visual attention, verbalized thoughts, and their immediate actions as part of the problem-solving process. The analysis of students' thoughts while engaging with 380 feedback components revealed four main themes: students express understanding, express disagreement, need additional information, or explicitly judge the feedback.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How students utilize immediate tutoring feedback in programming education depends on various factors. Among them are the feedback quality, but also students' engagement, i.e., their perception, interpretation, and use of feedback. However, there is limited research on how students engage with various types of tutoring feedback. For this reason, we developed a learning environment that provides students with Python programming tasks and various types of immediate, AI-generated tutoring feedback. The feedback is displayed within four components. Using a mixed-methods approach (think-aloud study and eye-tracking), we conducted a study with 20 undergraduate students enrolled in an introductory programming course. Our research aims to: (1) identify what students think when they engage with the tutoring feedback components, and (2) explore the relations between the tutoring feedback components, students' visual attention, verbalized thoughts, and their immediate actions as part of the problem-solving process. The analysis of students' thoughts while engaging with 380 feedback components revealed four main themes: students express understanding, express disagreement, need additional information, or explicitly judge the feedback. Exploring the relations between feedback, students' attention, thoughts, and actions showed a clear relationship. While expressions of understanding were associated with improvements, expressions of disagreement or need for additional information prompted students to collect another feedback component rather than act on the current information. These insights into students' engagement and decision-making processes contribute to an increased understanding of tutoring feedback and how students engage with it. This work thus has implications for tool developers and educators facilitating feedback.
Related papers
- Exposía: Academic Writing Assessment of Exposés and Peer Feedback [56.428320613219306]
We present Exposía, the first public dataset that connects writing and feedback assessment in higher education. We use Exposía to benchmark state-of-the-art open-source large language models (LLMs) for two tasks: automated scoring of (1) the proposals and (2) the student reviews.
arXiv Detail & Related papers (2026-01-10T11:33:26Z) - Understanding Student Interaction with AI-Powered Next-Step Hints: Strategies and Challenges [1.446446435461625]
Next-step hint feedback provides students with actionable steps to progress towards solving programming tasks. This study investigates how students interact with an AI-driven next-step hint system in an in-IDE learning environment.
arXiv Detail & Related papers (2025-11-09T12:56:34Z) - Automatic Feedback Generation for Short Answer Questions using Answer Diagnostic Graphs [21.965223446869064]
Short-reading comprehension questions help students understand text structure but lack effective feedback. Students struggle to identify and correct errors, while manual feedback creation is labor-intensive. We propose a system that generates feedback for student responses.
arXiv Detail & Related papers (2025-01-27T04:49:10Z) - "My Grade is Wrong!": A Contestable AI Framework for Interactive Feedback in Evaluating Student Essays [6.810086342993699]
This paper introduces CAELF, a Contestable AI Empowered LLM Framework for automating interactive feedback.
CAELF allows students to query, challenge, and clarify their feedback by integrating a multi-agent system with computational argumentation.
A case study on 500 critical thinking essays with user studies demonstrates that CAELF significantly improves interactive feedback.
arXiv Detail & Related papers (2024-09-11T17:59:01Z) - Representational Alignment Supports Effective Machine Teaching [81.19197059407121]
GRADE is a new controlled experimental setting to study pedagogy and representational alignment. We find that improved representational alignment with a student improves student learning outcomes. However, this effect is moderated by the size and representational diversity of the class being taught.
arXiv Detail & Related papers (2024-06-06T17:48:24Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - Enhancing Students' Learning Process Through Self-Generated Tests [0.0]
This paper describes an educational experiment aimed at the promotion of students' autonomous learning.
The main idea is to make the student feel part of the evaluation process by including students' questions in the evaluation exams.
Questions uploaded by students are visible to every enrolled student as well as to each involved teacher.
arXiv Detail & Related papers (2024-03-21T09:49:33Z) - Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [46.667783153759636]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL). Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z) - Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z) - Feedback and Engagement on an Introductory Programming Module [0.0]
We ran a study on engagement and achievement for a first year undergraduate programming module which used an online learning environment containing tasks which generate automated feedback.
We gathered quantitative data on engagement and achievement which allowed us to split the cohort into 6 groups.
We then ran interviews with students after the end of the module to produce qualitative data on perceptions of what feedback is, how useful it is, the uses made of it, and how it bears on engagement.
arXiv Detail & Related papers (2022-01-04T16:53:09Z) - A literature survey on student feedback assessment tools and their usage in sentiment analysis [0.0]
We evaluate the effectiveness of various in-class feedback assessment methods such as Kahoot!, Mentimeter, Padlet, and polling.
We propose a sentiment analysis model for extracting the explicit suggestions from the students' qualitative feedback comments.
arXiv Detail & Related papers (2021-09-09T06:56:30Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework [51.237191651923666]
We investigate the potential of agent learning from trainers' facial expressions via interpreting them as evaluative feedback.
With the designed CNN-RNN model, our analysis shows that telling trainers to use facial expressions, and introducing competition, can improve the accuracy of estimating positive and negative feedback.
Our results with a simulation experiment show that learning solely from predicted feedback based on facial expressions is possible.
arXiv Detail & Related papers (2020-01-23T17:50:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.