Effects of Human vs. Automatic Feedback on Students' Understanding of AI
Concepts and Programming Style
- URL: http://arxiv.org/abs/2011.10653v1
- Date: Fri, 20 Nov 2020 21:40:32 GMT
- Title: Effects of Human vs. Automatic Feedback on Students' Understanding of AI
Concepts and Programming Style
- Authors: Abe Leite and Saúl A. Blanco
- Abstract summary: The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses.
There is a relative lack of data directly comparing student outcomes when receiving computer-generated feedback and human-written feedback.
This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of automatic grading tools has become nearly ubiquitous in large
undergraduate programming courses, and recent work has focused on improving the
quality of automatically generated feedback. However, there is a relative lack
of data directly comparing student outcomes when receiving computer-generated
feedback and human-written feedback. This paper addresses this gap by splitting
one 90-student class into two feedback groups and analyzing differences in the
two cohorts' performance. The class is an introductory AI course with programming homework
assignments. One group of students received detailed computer-generated
feedback on their programming assignments describing which parts of the
algorithms' logic were missing; the other group additionally received
human-written feedback describing how their programs' syntax relates to issues
with their logic, and qualitative (style) recommendations for improving their
code. Results on quizzes and exam questions suggest that human feedback helps
students obtain a better conceptual understanding, but analyses found no
difference in the two groups' ability to collaborate on the final project. The
course grade distribution revealed that students who received human-written
feedback performed better overall; this effect was the most pronounced in the
middle two quartiles of each group. These results suggest that feedback about
the syntax-logic relation may be a primary mechanism by which human feedback
improves student outcomes.
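As an illustrative sketch only (the abstract does not name the statistical tests used), a two-cohort comparison of this kind could look as follows; the score data, sample sizes, and choice of Welch's t-test are all assumptions:

```python
# Hypothetical sketch of a two-cohort comparison; not the paper's actual analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
auto_scores = rng.normal(74, 10, 45)   # cohort with computer-generated feedback (made-up data)
human_scores = rng.normal(78, 10, 45)  # cohort with added human-written feedback (made-up data)

# Overall difference between the two cohorts (Welch's t-test, an assumed choice).
t, p = stats.ttest_ind(human_scores, auto_scores, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")

# Quartile view, since the abstract reports the effect is most pronounced
# in the middle two quartiles of each group.
for name, scores in [("auto", auto_scores), ("human", human_scores)]:
    q1, q2, q3 = np.percentile(scores, [25, 50, 75])
    print(f"{name}: Q1={q1:.1f}, median={q2:.1f}, Q3={q3:.1f}")
```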
Related papers
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
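For readers unfamiliar with DPO, a minimal sketch of the objective follows; the numbers are hypothetical stand-ins, not data from the paper:

```python
# Hedged sketch of the direct preference optimization (DPO) objective;
# the batch values below are hypothetical, not data from the paper.
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss over a batch of feedback pairs.

    Each tensor holds summed token log-probabilities of the preferred
    (chosen) or dispreferred (rejected) feedback under the trained policy
    or the frozen reference model.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Four hypothetical preference pairs (e.g., GPT-4-preferred feedback vs. the alternative).
lp_c = torch.tensor([-5.0, -6.1, -4.2, -7.3])
lp_r = torch.tensor([-6.5, -6.0, -5.9, -8.1])
ref_c = torch.tensor([-5.5, -6.3, -4.8, -7.0])
ref_r = torch.tensor([-6.2, -6.4, -5.5, -7.9])
print(dpo_loss(lp_c, lp_r, ref_c, ref_r))
```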
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Identifying Student Profiles Within Online Judge Systems Using Explainable Artificial Intelligence [6.638206014723678]
Online Judge (OJ) systems are commonly used in programming-related courses, as they yield fast and objective assessments of the code developed by students.
This work addresses a limitation of such systems by further exploiting the information gathered by the OJ and automatically inferring feedback for both the student and the instructor.
arXiv Detail & Related papers (2024-01-29T12:11:30Z)
- Students' Perceptions and Preferences of Generative Artificial Intelligence Feedback for Programming [15.372316943507506]
We generated automated feedback using the ChatGPT API for four lab assignments in an introductory computer science class.
Students perceived the feedback as aligning well with formative feedback guidelines established by Shute.
Students generally expected specific and corrective feedback with sufficient code examples, but had divergent opinions on the tone of the feedback.
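A minimal sketch of this kind of pipeline, assuming the current OpenAI Python client; the model name, prompt wording, and sample submission are illustrative, not the authors' actual setup:

```python
# Illustrative only: generating feedback on a student submission with the
# OpenAI Python client. Model name, prompt, and the sample submission are
# assumptions, not the authors' actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

student_code = '''
def mean(xs):
    return sum(xs) / len(xs)  # crashes on an empty list
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You are a teaching assistant. Give specific, corrective "
                    "feedback with short code examples, in a supportive tone."},
        {"role": "user", "content": f"Review this lab submission:\n{student_code}"},
    ],
)
print(response.choices[0].message.content)
```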
arXiv Detail & Related papers (2023-12-17T22:26:53Z)
- Constructive Large Language Models Alignment with Diverse Feedback [76.9578950893839]
We introduce Constructive and Diverse Feedback (CDF) as a novel method to enhance large language model alignment.
We exploit critique feedback for easy problems, refinement feedback for medium problems, and preference feedback for hard problems.
By training our model with this diversified feedback, we achieve enhanced alignment performance while using less training data.
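A toy sketch of the routing this describes; the numeric difficulty scale and thresholds are assumptions beyond what the abstract states:

```python
# Toy sketch of CDF's difficulty-based routing; the numeric difficulty
# scale and thresholds are assumptions beyond what the abstract states.
def select_feedback_type(difficulty: float) -> str:
    """Map a problem's estimated difficulty in [0, 1] to a feedback mode."""
    if difficulty < 0.33:
        return "critique"    # easy problems: point out what is wrong
    if difficulty < 0.66:
        return "refinement"  # medium problems: propose an improved answer
    return "preference"      # hard problems: compare candidate answers pairwise

print([select_feedback_type(d) for d in (0.1, 0.5, 0.9)])
```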
arXiv Detail & Related papers (2023-10-10T09:20:14Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Feedback and Engagement on an Introductory Programming Module [0.0]
We ran a study on engagement and achievement for a first year undergraduate programming module which used an online learning environment containing tasks which generate automated feedback.
We gathered quantitative data on engagement and achievement which allowed us to split the cohort into 6 groups.
We then ran interviews with students after the end of the module to produce qualitative data on perceptions of what feedback is, how useful it is, the uses made of it, and how it bears on engagement.
arXiv Detail & Related papers (2022-01-04T16:53:09Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier-1 university.
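The few-shot setup can be illustrated with a prototypical-network-style classifier, a common meta-learning baseline; this is a sketch under that assumption, not the paper's exact architecture:

```python
# Prototypical-network-style sketch of the few-shot framing; the random
# embeddings stand in for the paper's transformer representations of code.
import torch

def classify_by_prototype(support_emb, support_labels, query_emb, n_classes):
    """Label each query with the feedback class of its nearest prototype.

    support_emb: (S, D) embeddings of a few instructor-labeled examples
    support_labels: (S,) integer feedback labels
    query_emb: (Q, D) embeddings of new student submissions
    """
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                          # (C, D) per-class mean embeddings
    dists = torch.cdist(query_emb, prototypes)  # (Q, C) Euclidean distances
    return dists.argmin(dim=1)                  # nearest prototype wins

# Hypothetical 2-class example: 4 labeled solutions, 3 new submissions.
support = torch.randn(4, 8)
labels = torch.tensor([0, 0, 1, 1])
queries = torch.randn(3, 8)
print(classify_by_prototype(support, labels, queries, n_classes=2))
```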
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Deep Discourse Analysis for Generating Personalized Feedback in Intelligent Tutor Systems [4.716555240531893]
We explore creating automated, personalized feedback in an intelligent tutoring system (ITS).
Our goal is to pinpoint correct and incorrect concepts in student answers in order to achieve better student learning gains.
arXiv Detail & Related papers (2021-03-13T20:33:10Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
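A compact sketch of the three-level encoding idea using off-the-shelf transformer layers; all dimensions, layer counts, and pooling choices are assumptions:

```python
# Compact sketch of a three-level hierarchy like the one described:
# sentence -> intra-review -> inter-review. Dimensions, mean pooling, and
# layer counts are assumptions, not HabNet's actual configuration.
import torch
import torch.nn as nn

def encoder(d=64):
    layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=1)

sent_enc, review_enc, paper_enc = encoder(), encoder(), encoder()

# Hypothetical paper: 3 reviews x 5 sentences x 10 tokens x 64-dim embeddings.
tokens = torch.randn(3 * 5, 10, 64)
sent_vecs = sent_enc(tokens).mean(dim=1).view(3, 5, 64)      # level one: sentences
review_vecs = review_enc(sent_vecs).mean(dim=1)              # level two: within each review
paper_vec = paper_enc(review_vecs.unsqueeze(0)).mean(dim=1)  # level three: across reviews
rating = nn.Linear(64, 1)(paper_vec)                         # predicted review rating
print(rating.shape)  # torch.Size([1, 1])
```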
arXiv Detail & Related papers (2020-11-02T08:07:50Z)