AI Teaches the Art of Elegant Coding: Timely, Fair, and Helpful Style Feedback in a Global Course
- URL: http://arxiv.org/abs/2403.14986v1
- Date: Fri, 22 Mar 2024 06:45:39 GMT
- Title: AI Teaches the Art of Elegant Coding: Timely, Fair, and Helpful Style Feedback in a Global Course
- Authors: Juliette Woodrow, Ali Malik, Chris Piech
- Abstract summary: We present our experience deploying a novel, real-time style feedback tool in Code in Place, a large-scale online CS1 course.
We show that students who received style feedback in real-time were five times more likely to view and engage with their feedback compared to students who received delayed feedback.
Those who viewed feedback were more likely to make significant style-related edits to their code, with over 79% of these edits directly incorporating their feedback.
- Score: 8.176398354378088
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Teaching students how to write code that is elegant, reusable, and comprehensible is a fundamental part of CS1 education. However, providing this "style feedback" in a timely manner has proven difficult to scale. In this paper, we present our experience deploying a novel, real-time style feedback tool in Code in Place, a large-scale online CS1 course. Our tool is based on the latest breakthroughs in large-language models (LLMs) and was carefully designed to be safe and helpful for students. We used our Real-Time Style Feedback tool (RTSF) in a class with over 8,000 diverse students from across the globe and ran a randomized control trial to understand its benefits. We show that students who received style feedback in real-time were five times more likely to view and engage with their feedback compared to students who received delayed feedback. Moreover, those who viewed feedback were more likely to make significant style-related edits to their code, with over 79% of these edits directly incorporating their feedback. We also discuss the practicality and dangers of LLM-based tools for feedback, investigating the quality of the feedback generated, LLM limitations, and techniques for consistency, standardization, and safeguarding against demographic bias, all of which are crucial for a tool utilized by students.
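The abstract above does not include the RTSF implementation. As a rough illustration only, here is a minimal sketch of how an LLM-backed, style-only feedback call might be structured, assuming the OpenAI chat completions client; the model choice, rubric wording, and the `get_style_feedback` helper are illustrative assumptions, not the authors' code.
```python
# Hypothetical sketch of a real-time style feedback call (not the authors' RTSF code).
# Assumes the OpenAI Python client; model name, rubric, and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_RUBRIC = (
    "Comment only on style: decomposition, naming, magic numbers, and repeated code. "
    "Do not reveal solutions or point out functional bugs. "
    "Give at most three short, encouraging suggestions."
)

def get_style_feedback(student_code: str) -> str:
    """Ask the LLM for style-only feedback on a CS1 submission."""
    response = client.chat.completions.create(
        model="gpt-4",   # illustrative choice
        temperature=0,   # favor consistent, reproducible feedback
        messages=[
            {"role": "system", "content": STYLE_RUBRIC},
            {"role": "user", "content": f"Student submission:\n{student_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "def f(x):\n    return x * 3.14159 * 2\n"
    print(get_style_feedback(sample))
```
Pinning the temperature and constraining the prompt to a style-only rubric are one plausible way to pursue the consistency and safety properties the abstract emphasizes; the deployed tool's actual safeguards are described in the paper itself.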
Related papers
- Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course [49.296957552006226]
Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research.
This report shares how we use GPT-4 as an automatic assignment evaluator in a university course with 1,028 students.
arXiv Detail & Related papers (2024-07-07T00:17:24Z)
- Generating Feedback-Ladders for Logical Errors in Programming using Large Language Models [2.1485350418225244]
Large language model (LLM)-based methods have shown great promise in feedback generation for programming assignments.
This paper explores using LLMs to generate a "feedback-ladder", i.e., multiple levels of feedback for the same problem-submission pair.
We evaluate the quality of the generated feedback-ladder via a user study with students, educators, and researchers.
arXiv Detail & Related papers (2024-05-01T03:52:39Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
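The entry above fine-tunes a feedback generator with DPO on GPT-4 preference annotations. As a hedged sketch of the data-preparation step only, the snippet below converts a hypothetical annotation record into the (prompt, chosen, rejected) layout that common DPO trainers (for example, TRL's DPOTrainer) expect; the field names and annotation format are assumptions, not the paper's.
```python
# Hypothetical sketch: turning GPT-4 preference annotations into DPO training records.
# Record layout follows the common (prompt, chosen, rejected) convention; details are assumed.
from typing import TypedDict

class Annotation(TypedDict):
    problem: str      # problem statement plus student submission
    feedback_a: str   # candidate feedback A
    feedback_b: str   # candidate feedback B
    preferred: str    # "a" or "b", as judged by GPT-4

def to_dpo_record(ann: Annotation) -> dict:
    """Map one GPT-4 preference annotation to a DPO (prompt, chosen, rejected) triple."""
    chosen = ann["feedback_a"] if ann["preferred"] == "a" else ann["feedback_b"]
    rejected = ann["feedback_b"] if ann["preferred"] == "a" else ann["feedback_a"]
    return {"prompt": ann["problem"], "chosen": chosen, "rejected": rejected}

# Example usage with a toy annotation:
example = Annotation(
    problem="Why does my loop never terminate?\nwhile i < 10: print(i)",
    feedback_a="Your loop never updates i, so the condition stays true.",
    feedback_b="Try rewriting the program.",
    preferred="a",
)
print(to_dpo_record(example))
```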
- Students' Perceptions and Preferences of Generative Artificial Intelligence Feedback for Programming [15.372316943507506]
We generated automated feedback using the ChatGPT API for four lab assignments in an introductory computer science class.
Students perceived the feedback as aligning well with formative feedback guidelines established by Shute.
Students generally expected specific and corrective feedback with sufficient code examples, but had divergent opinions on the tone of the feedback.
arXiv Detail & Related papers (2023-12-17T22:26:53Z)
- Constructive Large Language Models Alignment with Diverse Feedback [76.9578950893839]
We introduce Constructive and Diverse Feedback (CDF) as a novel method to enhance large language models alignment.
We exploit critique feedback for easy problems, refinement feedback for medium problems, and preference feedback for hard problems.
By training our model with this diversified feedback, we achieve enhanced alignment performance while using less training data.
arXiv Detail & Related papers (2023-10-10T09:20:14Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- A large language model-assisted education tool to provide feedback on open-ended responses [2.624902795082451]
We present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions.
Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement.
arXiv Detail & Related papers (2023-07-25T19:49:55Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
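The ProtoTransformer entry above frames feedback as few-shot classification. Below is a minimal, hypothetical sketch of the prototype idea: embed a handful of instructor-labeled examples per feedback label, average them into prototypes, and assign a new submission the label of the nearest prototype. The encoder here is a random stand-in and the labels are invented, so this illustrates only the classification scheme, not the paper's meta-learning model.
```python
# Hypothetical sketch of prototype-based few-shot feedback classification.
# Real systems embed code with a learned encoder; pseudo-random vectors stand in here.
import numpy as np

def embed(code: str) -> np.ndarray:
    """Stand-in encoder: map code text to a pseudo-random vector."""
    local_rng = np.random.default_rng(abs(hash(code)) % (2**32))
    return local_rng.normal(size=16)

# A few instructor-labeled examples per feedback label (the "support set").
support = {
    "missing_base_case": ["def fact(n): return n * fact(n-1)"],
    "off_by_one": ["for i in range(1, len(xs)): total += xs[i]"],
}

# Prototype = mean embedding of each label's support examples.
prototypes = {label: np.mean([embed(c) for c in codes], axis=0)
              for label, codes in support.items()}

def classify(code: str) -> str:
    """Assign the feedback label whose prototype is nearest to the query embedding."""
    query = embed(code)
    return min(prototypes, key=lambda label: np.linalg.norm(query - prototypes[label]))

print(classify("def fib(n): return fib(n-1) + fib(n-2)"))
```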
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.