GPT-4 as a Homework Tutor can Improve Student Engagement and Learning Outcomes
- URL: http://arxiv.org/abs/2409.15981v1
- Date: Tue, 24 Sep 2024 11:22:55 GMT
- Title: GPT-4 as a Homework Tutor can Improve Student Engagement and Learning Outcomes
- Authors: Alessandro Vanzo, Sankalan Pal Chowdhury, Mrinmaya Sachan
- Abstract summary: We developed a prompting strategy that enables GPT-4 to conduct interactive homework sessions for high-school students learning English as a second language.
We carried out a Randomized Controlled Trial (RCT) in four high-school classes, replacing traditional homework with GPT-4 homework sessions for the treatment group.
We observed significant improvements in learning outcomes, in particular greater gains in grammar, as well as higher student engagement.
- Score: 80.60912258178045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Homework is an important part of education in schools across the world, but to maximize its benefit, it needs to be accompanied by feedback and follow-up questions. We developed a prompting strategy that enables GPT-4 to conduct interactive homework sessions for high-school students learning English as a second language. Our strategy requires minimal effort in content preparation, one of the key challenges of alternatives like home tutors or ITSs. We carried out a Randomized Controlled Trial (RCT) in four high-school classes, replacing traditional homework with GPT-4 homework sessions for the treatment group. We observed significant improvements in learning outcomes, in particular greater gains in grammar, as well as higher student engagement. In addition, students reported high levels of satisfaction with the system and wanted to continue using it after the end of the RCT. This work contributes to the scarce empirical literature on LLM-based interactive homework in real-world educational settings and offers a practical, scalable solution for improving homework in schools.
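The abstract does not disclose the prompting strategy itself, so the following is only a minimal sketch of how an interactive homework session could be driven through the OpenAI Python SDK; the system prompt, exercise format, and turn-taking loop are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of an interactive homework tutoring loop.
# The system prompt and exercise text are hypothetical stand-ins; the paper's
# actual prompting strategy is not given in the abstract.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a patient English tutor for high-school ESL students. "
    "Work through the assigned exercise one question at a time, "
    "give feedback on grammar mistakes, and ask a follow-up question "
    "before moving on."
)


def run_homework_session(exercise_text: str) -> None:
    """Drive a turn-by-turn tutoring dialogue until the student types 'quit'."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Here is tonight's homework:\n{exercise_text}"},
    ]
    while True:
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        tutor_turn = reply.choices[0].message.content
        print(f"Tutor: {tutor_turn}")
        messages.append({"role": "assistant", "content": tutor_turn})

        student_turn = input("Student: ")
        if student_turn.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": student_turn})


if __name__ == "__main__":
    run_homework_session("Rewrite each sentence using the past perfect tense: ...")
```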
Related papers
- Generative AI for Enhancing Active Learning in Education: A Comparative Study of GPT-3.5 and GPT-4 in Crafting Customized Test Questions [2.0411082897313984]
This study investigates how LLMs, specifically GPT-3.5 and GPT-4, can develop tailored questions for Grade 9 math.
By utilizing an iterative method, these models adjust questions based on difficulty and content, responding to feedback from a simulated 'student' model.
arXiv Detail & Related papers (2024-06-20T00:25:43Z) - Leveraging Lecture Content for Improved Feedback: Explorations with GPT-4 and Retrieval Augmented Generation [0.0]
This paper presents the use of Retrieval Augmented Generation to improve the feedback generated by Large Language Models for programming tasks.
The corresponding lecture recordings were transcribed and made available to the Large Language Model GPT-4 as an external knowledge source.
The purpose of this is to prevent hallucinations and to enforce the use of the technical terms and phrases from the lecture (a rough code sketch of such a pipeline appears after this list).
arXiv Detail & Related papers (2024-05-05T18:32:06Z) - RECIPE4U: Student-ChatGPT Interaction Dataset in EFL Writing Education [15.253081304714101]
We present RECIPE4U, a dataset sourced from a semester-long experiment with 212 college students in English as a Foreign Language (EFL) writing courses.
During the study, students engaged in dialogues with ChatGPT to revise their essays. RECIPE4U includes comprehensive records of these interactions, including conversation logs, students' intent, students' self-rated satisfaction, and students' essay edit histories.
arXiv Detail & Related papers (2024-03-13T05:51:57Z) - Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z) - YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - Using ChatGPT for Science Learning: A Study on Pre-service Teachers' Lesson Planning [0.7416846035207727]
This study analyzed lesson plans developed by 29 pre-service elementary teachers from a Korean university.
14 types of teaching and learning methods/strategies were identified in the lesson plans.
The study identified both appropriate and inappropriate use cases of ChatGPT in lesson planning.
arXiv Detail & Related papers (2024-01-18T22:52:04Z) - Prompt Engineering or Fine Tuning: An Empirical Assessment of Large Language Models in Automated Software Engineering Tasks [8.223311621898983]
GPT-4 with conversational prompts showed a drastic improvement compared to GPT-4 with automatic prompting strategies.
Fully automated prompt engineering with no human in the loop requires more study and improvement.
arXiv Detail & Related papers (2023-10-11T00:21:00Z) - Does Starting Deep Learning Homework Earlier Improve Grades? [63.20583929886827]
Intuitively, students who start a homework assignment earlier and spend more time on it should receive better grades on the assignment.
Existing literature on the impact of time spent on homework is not clear-cut and comes mostly from K-12 education.
We develop a hierarchical Bayesian model to help draw principled conclusions about the impact of these factors on student success.
arXiv Detail & Related papers (2023-09-30T09:34:30Z) - PapagAI: Automated Feedback for Reflective Essays [48.4434976446053]
We present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system.
The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers.
arXiv Detail & Related papers (2023-07-10T11:05:51Z) - Disadvantaged students increase their academic performance through collective intelligence exposure in emergency remote learning due to COVID-19 [105.54048699217668]
During the COVID-19 crisis, educational institutions worldwide shifted from face-to-face instruction to emergency remote teaching (ERT) modalities.
We analyzed data on 7,528 undergraduate students and found that cooperative and consensus dynamics among students in discussion forums positively affect their final GPA.
Using natural language processing, we show that first-year students with low academic performance during high school are exposed to more content-intensive posts in discussion forums.
arXiv Detail & Related papers (2022-03-10T20:23:38Z)
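As a rough illustration of the retrieval-augmented feedback idea summarized in the "Leveraging Lecture Content for Improved Feedback" entry above, the sketch below passes lecture-transcript excerpts to GPT-4 as external knowledge; the chunking, the word-overlap retrieval, and the prompt wording are assumptions made for illustration, not the paper's implementation, and the OpenAI Python SDK is assumed for the model call.

```python
# Sketch of grounding programming-task feedback in lecture transcripts.
# A real system would likely use embedding-based retrieval; plain word overlap
# keeps this example self-contained.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def retrieve_chunks(query: str, transcript_chunks: list[str], k: int = 3) -> list[str]:
    """Return the k transcript chunks sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        transcript_chunks,
        key=lambda chunk: len(query_words & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:k]


def lecture_grounded_feedback(student_code: str, task: str, transcript_chunks: list[str]) -> str:
    """Ask GPT-4 for feedback while supplying lecture excerpts as external knowledge."""
    context = "\n\n".join(retrieve_chunks(task + " " + student_code, transcript_chunks))
    prompt = (
        "Use only the terminology from the lecture excerpts below when giving feedback.\n\n"
        f"Lecture excerpts:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Student submission:\n{student_code}\n\n"
        "Give concise, constructive feedback."
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```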