Level Up Peer Review in Education: Investigating genAI-driven Gamification system and its influence on Peer Feedback Effectiveness
- URL: http://arxiv.org/abs/2504.02962v1
- Date: Thu, 03 Apr 2025 18:30:25 GMT
- Title: Level Up Peer Review in Education: Investigating genAI-driven Gamification system and its influence on Peer Feedback Effectiveness
- Authors: Rafal Wlodarski, Leonardo da Silva Sousa, Allison Connell Pensky
- Abstract summary: This paper introduces Socratique, a gamified peer-assessment platform integrated with Generative AI (GenAI) assistance. By incorporating game elements, Socratique aims to motivate students to provide more feedback. Students in the treatment group provided significantly more voluntary feedback, with higher scores on clarity, relevance, and specificity.
- Score: 0.8087870525861938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In software engineering (SE), the ability to review code and critique designs is essential for professional practice. However, these skills are rarely emphasized in formal education, and peer feedback quality and engagement can vary significantly among students. This paper introduces Socratique, a gamified peer-assessment platform integrated with Generative AI (GenAI) assistance, designed to develop students' peer-review skills in a functional programming course. By incorporating game elements, Socratique aims to motivate students to provide more feedback, while the GenAI assistant offers real-time support in crafting high quality, constructive comments. To evaluate the impact of this approach, we conducted a randomized controlled experiment with master's students comparing a treatment group with a gamified, GenAI-driven setup against a control group with minimal gamification. Results show that students in the treatment group provided significantly more voluntary feedback, with higher scores on clarity, relevance, and specificity - all key aspects of effective code and design reviews. This study provides evidence for the effectiveness of combining gamification and AI to improve peer review processes, with implications for fostering review-related competencies in software engineering curricula.
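The abstract does not describe how the GenAI assistant is implemented, so the sketch below only illustrates the general idea: prompting a chat model to revise a student's draft review comment along the clarity, relevance, and specificity dimensions the study scores. The `suggest_feedback` helper, the model name, and the prompt wording are assumptions for illustration, not details from the paper; the sketch assumes an OpenAI-style chat-completions client.

```python
# Illustrative sketch only: how a platform like Socratique might ask an LLM to
# improve a student's draft peer-review comment. The paper does not specify the
# provider, model, or prompt, so everything below is an assumption.
from openai import OpenAI  # assumes the `openai` Python package (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = "clarity, relevance, and specificity"  # dimensions named in the abstract

def suggest_feedback(code_snippet: str, draft_comment: str, model: str = "gpt-4o-mini") -> str:
    """Return a revised peer-review comment that is clearer and more specific."""
    messages = [
        {
            "role": "system",
            "content": (
                "You help students in a functional programming course write "
                f"constructive code-review comments. Improve the draft for {RUBRIC}, "
                "reference concrete lines or expressions, and keep a respectful tone."
            ),
        },
        {
            "role": "user",
            "content": f"Code under review:\n{code_snippet}\n\nDraft comment:\n{draft_comment}",
        },
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "let rec sum xs = match xs with [] -> 0 | x :: rest -> x + sum rest"
    print(suggest_feedback(snippet, "this could be better"))
```

In practice such a helper would sit inside the peer-assessment workflow (for example, invoked while the student drafts a review), with the suggestion shown alongside the original comment rather than replacing it.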
Related papers
- Evaluating the AI-Lab Intervention: Impact on Student Perception and Use of Generative AI in Early Undergraduate Computer Science Courses [0.0]
Generative AI (GenAI) is rapidly entering computer science education.
Concerns about overreliance coexist with a gap in research on structured scaffolding to guide tool use in formal courses.
This study examines the impact of a dedicated "AI-Lab" intervention on undergraduate students.
arXiv Detail & Related papers (2025-04-30T18:12:42Z)
- Evaluating Machine Expertise: How Graduate Students Develop Frameworks for Assessing GenAI Content [1.967444231154626]
This paper examines how graduate students develop frameworks for evaluating machine-generated expertise in web-based interactions with large language models (LLMs).
Our findings reveal that students construct evaluation frameworks shaped by three main factors: professional identity, verification capabilities, and system navigation experience.
arXiv Detail & Related papers (2025-04-24T22:24:14Z)
- Beyond Detection: Designing AI-Resilient Assessments with Automated Feedback Tool to Foster Critical Thinking [0.0]
This research proposes a proactive, AI-resilient solution based on assessment design rather than detection. It introduces a web-based Python tool that integrates Bloom's taxonomy with advanced natural language processing techniques. It helps educators determine whether a task targets lower-order thinking such as recall and summarization or higher-order skills such as analysis, evaluation, and creation.
arXiv Detail & Related papers (2025-03-30T23:13:00Z)
- A Zero-Shot LLM Framework for Automatic Assignment Grading in Higher Education [0.6141800972050401]
We propose a Zero-Shot Large Language Model (LLM)-Based Automated Assignment Grading (AAG) system. This framework leverages prompt engineering to evaluate both computational and explanatory student responses without requiring additional training or fine-tuning. The AAG system delivers tailored feedback that highlights individual strengths and areas for improvement, thereby enhancing student learning outcomes.
arXiv Detail & Related papers (2025-01-24T08:01:41Z)
- Code Collaborate: Dissecting Team Dynamics in First-Semester Programming Students [3.0294711465150006]
The study highlights the collaboration trends that emerge as first-semester students develop a 2D game project.
Results indicate that students often slightly overestimate their contributions, with more engaged individuals more likely to acknowledge mistakes.
Team performance shows no significant variation based on nationality or gender composition, though teams that disbanded frequently consisted of lone wolves.
arXiv Detail & Related papers (2024-10-28T11:42:05Z)
- Personalised Feedback Framework for Online Education Programmes Using Generative AI [0.0]
This paper presents an alternative feedback framework which extends the capabilities of ChatGPT by integrating embeddings.
As part of the study, we proposed and developed a proof of concept solution, achieving an efficacy rate of 90% and 100% for open-ended and multiple-choice questions.
arXiv Detail & Related papers (2024-10-14T22:35:40Z)
- BoilerTAI: A Platform for Enhancing Instruction Using Generative AI in Educational Forums [0.0]
This paper describes a practical, scalable platform that seamlessly integrates Generative AI (GenAI) with online educational forums.
The platform empowers instructional staff to efficiently manage, refine, and approve responses by facilitating interaction between student posts and a Large Language Model (LLM).
arXiv Detail & Related papers (2024-09-20T04:00:30Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Towards Goal-oriented Intelligent Tutoring Systems in Online Education [69.06930979754627]
We propose a new task, named Goal-oriented Intelligent Tutoring Systems (GITS).
GITS aims to enable the student's mastery of a designated concept by strategically planning a customized sequence of exercises and assessment.
We propose a novel graph-based reinforcement learning framework, named Planning-Assessment-Interaction (PAI).
arXiv Detail & Related papers (2023-12-03T12:37:16Z)
- PapagAI: Automated Feedback for Reflective Essays [48.4434976446053]
We present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system.
The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers.
arXiv Detail & Related papers (2023-07-10T11:05:51Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic Approach [40.06500618820166]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automatize the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a brand new solution to reuse experiences and transfer value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance on stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.