61A Bot Report: AI Assistants in CS1 Save Students Homework Time and Reduce Demands on Staff. (Now What?)
- URL: http://arxiv.org/abs/2406.05600v3
- Date: Mon, 02 Dec 2024 03:51:34 GMT
- Title: 61A Bot Report: AI Assistants in CS1 Save Students Homework Time and Reduce Demands on Staff. (Now What?)
- Authors: J. D. Zamfirescu-Pereira, Laryn Qi, Björn Hartmann, John DeNero, Narges Norouzi
- Abstract summary: We report on a GPT-4-based interactive homework assistant ("61A Bot") for students in a large CS1 course.
Over 2000 students made over 100,000 requests of our Bot across two semesters.
For students in the 50th-80th percentile, reductions can exceed 30 minutes per assignment, up to 50% less time than students at the same percentile rank in prior semesters.
- Score: 9.973179186668393
- License:
- Abstract: LLM-based chatbots enable students to get immediate, interactive help on homework assignments, but even a thoughtfully-designed bot may not serve all pedagogical goals. We report here on the development and deployment of a GPT-4-based interactive homework assistant ("61A Bot") for students in a large CS1 course; over 2000 students made over 100,000 requests of our Bot across two semesters. Our assistant offers one-shot, contextual feedback within the command-line "autograder" students use to test their code. Our Bot wraps student code in a custom prompt that supports our pedagogical goals and avoids providing solutions directly. Analyzing student feedback, questions, and autograder data, we find reductions in homework-related question rates in our course forum, as well as reductions in homework completion time when our Bot is available. For students in the 50th-80th percentile, reductions can exceed 30 minutes per assignment, up to 50% less time than students at the same percentile rank in prior semesters. Finally, we discuss these observations, potential impacts on student learning, and other potential costs and benefits of AI assistance in CS1.
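The abstract describes the Bot as wrapping student code in a custom, pedagogy-oriented prompt and returning one-shot feedback from inside the command-line autograder. As a rough illustration of that pattern only, here is a minimal sketch assuming the OpenAI Python SDK; the prompt wording, function names, and model choice are hypothetical stand-ins, not the actual 61A Bot implementation (which the abstract does not publish).

```python
# Minimal sketch of an autograder-integrated hint bot, assuming the OpenAI
# Python SDK (openai>=1.0). Prompt wording, function names, and model choice
# are illustrative stand-ins, not the actual 61A Bot implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a CS1 homework assistant. Give one short, conceptual hint that "
    "helps the student find their own mistake. Never provide corrected code "
    "or a direct solution."
)


def build_messages(student_code: str, autograder_output: str) -> list[dict]:
    """Wrap the student's code and the failing test output in a hint-only prompt."""
    user_message = (
        f"Student code:\n{student_code}\n\n"
        f"Failing autograder output:\n{autograder_output}\n\n"
        "In 2-3 sentences, explain what concept the student may be missing."
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]


def get_hint(student_code: str, autograder_output: str) -> str:
    """Return a single one-shot hint for the student's current attempt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(student_code, autograder_output),
    )
    return response.choices[0].message.content
```

In a setup like this, the autograder would call get_hint(...) after a failed test and print the result alongside the usual error output.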
Related papers
- How Do Programming Students Use Generative AI? [7.863638253070439]
We studied how programming students actually use generative AI tools like ChatGPT.
We observed two prevalent usage strategies: to seek knowledge about general concepts and to directly generate solutions.
Our findings indicate that concerns about a potential decrease in programmers' agency and productivity with Generative AI are justified.
arXiv Detail & Related papers (2025-01-17T10:25:41Z)
- Integrating AI Tutors in a Programming Course [0.0]
RAGMan is an LLM-powered tutoring system that can support a variety of course-specific and homework-specific AI tutors.
This paper describes the interactions the students had with the AI tutors, the students' feedback, and a comparative grade analysis.
arXiv Detail & Related papers (2024-07-14T00:42:39Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA yields a significant performance gain over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Using Assignment Incentives to Reduce Student Procrastination and Encourage Code Review Interactions [2.1684358357001465]
This work presents an incentive system encouraging students to complete assignments many days before deadlines.
Completed assignments are code-reviewed by staff, who check correctness and provide feedback, resulting in more student-instructor interactions.
The incentives changed student behavior: 45% of assignments were completed early, and 30% up to 4 days before the deadline.
arXiv Detail & Related papers (2023-11-25T22:17:40Z)
- Automated Questions About Learners' Own Code Help to Detect Fragile Knowledge [0.0]
Students are able to produce correctly functioning program code even though they have a fragile understanding of how it actually works.
Questions derived automatically from individual exercise submissions (QLCs) can probe if and how well students understand the structure and logic of the code they just created; a toy sketch of such a question generator appears after this list.
arXiv Detail & Related papers (2023-06-28T14:49:16Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- Smart tutor to provide feedback in programming courses [0.0]
We present an AI-based intelligent tutor that answers students' programming questions.
The tool was tested by university students at URJC throughout an entire course.
arXiv Detail & Related papers (2023-01-24T11:00:06Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- TecCoBot: Technology-aided support for self-regulated learning [52.77024349608834]
Self-study activities can increase students' degree of activity and their contribution to the achievement of learning outcomes.
Especially in times of a global pandemic, self-study activities are increasingly carried out at home, where students already use technology-enhanced materials, processes, and digital platforms.
arXiv Detail & Related papers (2021-11-23T13:50:21Z)
- Using Machine Learning to Predict Engineering Technology Students' Success with Computer Aided Design [50.591267188664666]
We show how data combined with machine learning techniques can predict how well a particular student will perform in a design task.
We found that our models using early design sequence actions are particularly valuable for prediction.
Further improvements to these models could lead to earlier predictions and thus provide students feedback sooner to enhance their learning.
arXiv Detail & Related papers (2021-08-12T20:24:54Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
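As referenced in the entry on automated questions about learners' own code, here is a toy sketch of the QLC idea, assuming Python CS1 submissions; the function name and question template are invented for illustration and are far simpler than the program analysis the cited paper describes.

```python
# Toy illustration of a question derived from a learner's own code (QLC),
# assuming Python CS1 submissions. The function name and question template
# are invented here and much simpler than the analysis in the cited paper.
import ast


def generate_qlc(submission: str) -> str | None:
    """Derive one comprehension question from the structure of the submission."""
    tree = ast.parse(submission)
    for node in ast.walk(tree):
        # Ask about the first loop the student wrote, by its line number.
        if isinstance(node, (ast.For, ast.While)):
            return (
                f"Your code contains a loop that starts on line {node.lineno}. "
                "Exactly how many times does its body execute in the first "
                "autograder test, and why?"
            )
    return None  # no loop found; a real generator would try other question types


example_submission = (
    "def count_down(n):\n"
    "    while n > 0:\n"
    "        print(n)\n"
    "        n = n - 1\n"
)
print(generate_qlc(example_submission))
```

A real generator would also verify that the question is answerable from the submission and vary the question type beyond loops.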
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.