Structured Prompts, Better Outcomes? Exploring the Effects of a Structured Interface with ChatGPT in a Graduate Robotics Course
- URL: http://arxiv.org/abs/2507.07767v1
- Date: Thu, 10 Jul 2025 13:50:07 GMT
- Title: Structured Prompts, Better Outcomes? Exploring the Effects of a Structured Interface with ChatGPT in a Graduate Robotics Course
- Authors: Jerome Brender, Laila El-Hamamsy, Kim Uittenhove, Francesco Mondada, Engin Bumbacher
- Abstract summary: This study evaluates the impact of a structured GPT platform designed to promote 'good' prompting behavior. We analyzed student perception (pre-post surveys), prompting behavior (logs), performance (task scores), and learning (pre-post tests).
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior research shows that how students engage with Large Language Models (LLMs) influences their problem-solving and understanding, reinforcing the need to support productive LLM-uses that promote learning. This study evaluates the impact of a structured GPT platform designed to promote 'good' prompting behavior with data from 58 students in a graduate-level robotics course. The students were assigned to either an intervention group using the structured platform or a control group using ChatGPT freely for two practice lab sessions, before a third session where all students could freely use ChatGPT. We analyzed student perception (pre-post surveys), prompting behavior (logs), performance (task scores), and learning (pre-post tests). Although we found no differences in performance or learning between groups, we identified prompting behaviors - such as having clear prompts focused on understanding code - that were linked with higher learning gains and were more prominent when students used the structured platform. However, such behaviors did not transfer once students were no longer constrained to use the structured platform. Qualitative survey data showed mixed perceptions: some students perceived the value of the structured platform, but most did not perceive its relevance and resisted changing their habits. These findings contribute to ongoing efforts to identify effective strategies for integrating LLMs into learning and question the effectiveness of bottom-up approaches that temporarily alter user interfaces to influence students' interaction. Future research could instead explore top-down strategies that address students' motivations and explicitly demonstrate how certain interaction patterns support learning.
Related papers
- Short-Term Gains, Long-Term Gaps: The Impact of GenAI and Search Technologies on Retention [1.534667887016089]
This study investigates how GenAI (ChatGPT), search engines (Google), and e-textbooks influence student performance across tasks of varying cognitive complexity. The ChatGPT and Google groups outperformed the control group in immediate assessments for lower-order cognitive tasks. While AI-driven tools facilitate immediate performance, they do not inherently reinforce long-term retention unless supported by structured learning strategies.
arXiv Detail & Related papers (2025-07-10T00:44:50Z) - Can Large Language Models Help Students Prove Software Correctness? An Experimental Study with Dafny [79.56218230251953]
Students in computing education increasingly use large language models (LLMs) such as ChatGPT. This paper investigates how students interact with an LLM when solving formal verification exercises in Dafny.
arXiv Detail & Related papers (2025-06-27T16:34:13Z) - An Empirical Study of Federated Prompt Learning for Vision Language Model [50.73746120012352]
This paper systematically investigates behavioral differences between language prompt learning and vision prompt learning. We conduct experiments to evaluate the impact of various FL and prompt configurations, such as client scale, aggregation strategies, and prompt length. We explore strategies for enhancing prompt learning in complex scenarios where label skew and domain shift coexist.
arXiv Detail & Related papers (2025-05-29T03:09:15Z) - Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines [9.834055425277874]
This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting. To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users. Our findings provide a deeper understanding of how users engage with Large Language Models and the role of structured prompting guidance in enhancing AI-assisted communication.
arXiv Detail & Related papers (2025-04-10T15:20:43Z) - Improving Question Embeddings with Cognitive Representation Optimization for Knowledge Tracing [77.14348157016518]
Knowledge Tracing (KT) aims to track changes in students' knowledge status and predict their future answers based on their historical answer records. Current research on KT modeling focuses on predicting students' future performance based on existing, not-yet-updated records of student learning interactions. We propose a Cognitive Representation Optimization for Knowledge Tracing model, which utilizes a dynamic programming algorithm to optimize the structure of cognitive representations.
arXiv Detail & Related papers (2025-04-05T09:32:03Z) - DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing [51.665582274736785]
Knowledge Tracing (KT) predicts students' future performance from their historical records, and understanding students' affective states can enhance the effectiveness of KT. We propose Affect Dynamic Knowledge Tracing (DASKT) to explore the impact of various student affective states on their knowledge states. Our research highlights a promising avenue for future studies, focusing on achieving high interpretability and accuracy.
arXiv Detail & Related papers (2025-01-18T10:02:10Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - Investigation of the effectiveness of applying ChatGPT in Dialogic Teaching Using Electroencephalography [6.34494999013996]
Large language models (LLMs) possess the capability to interpret knowledge, answer questions, and consider context.
This research recruited 34 undergraduate students as participants, who were randomly divided into two groups.
The experimental group engaged in dialogic teaching using ChatGPT, while the control group interacted with human teachers.
arXiv Detail & Related papers (2024-03-25T12:23:12Z) - Unreflected Acceptance -- Investigating the Negative Consequences of ChatGPT-Assisted Problem Solving in Physics Education [4.014729339820806]
The impact of large language models (LLMs) on sensitive areas of everyday life, such as education, remains unclear. Our work focuses on higher physics education and examines problem-solving strategies.
arXiv Detail & Related papers (2023-08-21T16:14:34Z) - CLGT: A Graph Transformer for Student Performance Prediction in Collaborative Learning [6.140954034246379]
We present an extended graph transformer framework for collaborative learning (CLGT) for evaluating and predicting students' performance.
The experimental results confirm that the proposed CLGT outperforms the baseline models in predictions on real-world datasets.
arXiv Detail & Related papers (2023-07-30T09:54:30Z) - Seminar Learning for Click-Level Weakly Supervised Semantic Segmentation [149.9226057885554]
We propose seminar learning, a new learning paradigm for semantic segmentation with click-level supervision.
The rationale of seminar learning is to leverage the knowledge from different networks to compensate for insufficient information provided in click-level annotations.
Experimental results demonstrate the effectiveness of seminar learning, which achieves a new state-of-the-art performance of 72.51%.
arXiv Detail & Related papers (2021-08-30T17:27:43Z) - Peer-inspired Student Performance Prediction in Interactive Online Question Pools with Graph Neural Network [56.62345811216183]
We propose a novel approach using Graph Neural Networks (GNNs) to achieve better student performance prediction in interactive online question pools.
Specifically, we model the relationship between students and questions using student interactions to construct the student-interaction-question network.
We evaluate the effectiveness of our approach on a real-world dataset consisting of 104,113 mouse trajectories generated during the problem-solving process of over 4,000 students on 1,631 questions.
arXiv Detail & Related papers (2020-08-04T14:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.