ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
- URL: http://arxiv.org/abs/2107.14035v1
- Date: Fri, 23 Jul 2021 22:41:28 GMT
- Title: ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
- Authors: Mike Wu, Noah Goodman, Chris Piech, Chelsea Finn
- Abstract summary: In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
- Score: 54.142719510638614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-quality computer science education is limited by the difficulty of
providing instructor feedback to students at scale. While this feedback could
in principle be automated, supervised approaches to predicting the correct
feedback are bottlenecked by the intractability of annotating large quantities
of student code. In this paper, we instead frame the problem of providing
feedback as few-shot classification, where a meta-learner adapts to give
feedback to student code on a new programming question from just a few examples
annotated by instructors. Because data for meta-training is limited, we propose
a number of amendments to the typical few-shot learning framework, including
task augmentation to create synthetic tasks, and additional side information to
build stronger priors about each task. These additions are combined with a
transformer architecture to embed discrete sequences (e.g. code) to a
prototypical representation of a feedback class label. On a suite of few-shot
natural language processing tasks, we match or outperform state-of-the-art
performance. Then, on a collection of student solutions to exam questions from
an introductory university course, we show that our approach reaches an average
precision of 88% on unseen questions, surpassing the 82% precision of teaching
assistants. Our approach was successfully deployed to deliver feedback on
16,000 student exam solutions in a programming course offered by a tier 1
university. This is, to the best of our knowledge, the first successful
deployment of machine-learning-based feedback for open-ended student code.
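For intuition about the prototypical few-shot classification described above, here is a minimal sketch in PyTorch. The encoder below is a toy stand-in, not the paper's actual ProtoTransformer architecture, tokenizer, or dimensions: it embeds each code submission, each feedback class's prototype is the mean of its few instructor-annotated support embeddings, and a new submission receives the label of the nearest prototype.

```python
# Minimal sketch of prototype-based few-shot feedback classification.
# Illustrative assumptions: toy vocabulary, random token ids, small
# transformer encoder; not the paper's implementation.
import torch
import torch.nn as nn


class CodeEncoder(nn.Module):
    """Toy stand-in for a transformer that embeds a token sequence."""

    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))  # (batch, seq_len, dim)
        return h.mean(dim=1)                  # mean-pool to one vector per program


def prototypes(support_emb, support_labels, num_classes):
    """Average the support embeddings of each feedback class into a prototype."""
    return torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])


def classify(query_emb, protos):
    """Assign each query to the nearest prototype (negative squared distance as logits)."""
    dists = torch.cdist(query_emb, protos)    # (num_query, num_classes)
    return (-dists ** 2).argmax(dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    enc = CodeEncoder()
    # A new programming question: a few instructor-annotated solutions (support)
    # and unlabeled student solutions (query), as toy token-id tensors.
    support_tokens = torch.randint(0, 1000, (6, 32))
    support_labels = torch.tensor([0, 0, 0, 1, 1, 1])   # 2 feedback classes, 3 shots each
    query_tokens = torch.randint(0, 1000, (4, 32))

    with torch.no_grad():
        protos = prototypes(enc(support_tokens), support_labels, num_classes=2)
        print(classify(enc(query_tokens), protos))      # predicted feedback labels
```

At meta-training time, episodes of this form would be sampled from previously annotated questions so that the encoder learns embeddings whose class means separate feedback categories; the task augmentation and side information mentioned in the abstract are omitted from this sketch.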
Related papers
- Adapting Vision-Language Models to Open Classes via Test-Time Prompt Tuning [50.26965628047682]
Adapting pre-trained models to open classes is a challenging problem in machine learning.
In this paper, we consider combining the advantages of both and come up with a test-time prompt tuning approach.
Our proposed method outperforms all comparison methods on average considering both base and new classes.
arXiv Detail & Related papers (2024-08-29T12:34:01Z)
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning [101.81127587760831]
Current fine-tuning methods build adapters that are largely agnostic of the context of the downstream task to learn, or of the important knowledge to maintain.
We propose CorDA, a Context-oriented Decomposition Adaptation method that builds learnable task-aware adapters.
Our method enables two options, the knowledge-preserved adaptation and the instruction-previewed adaptation.
arXiv Detail & Related papers (2024-06-07T19:10:35Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data generated by YODA yields a significant performance gain over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Feedback and Engagement on an Introductory Programming Module [0.0]
We ran a study on engagement and achievement for a first-year undergraduate programming module that used an online learning environment containing tasks that generate automated feedback.
We gathered quantitative data on engagement and achievement which allowed us to split the cohort into 6 groups.
We then ran interviews with students after the end of the module to produce qualitative data on perceptions of what feedback is, how useful it is, the uses made of it, and how it bears on engagement.
arXiv Detail & Related papers (2022-01-04T16:53:09Z)
- Learning Online from Corrective Feedback: A Meta-Algorithm for Robotics [24.863665993509997]
A key challenge in Imitation Learning (IL) is that optimal state-action demonstrations are difficult for the teacher to provide.
As an alternative to state action demonstrations, the teacher can provide corrective feedback such as their preferences or rewards.
We show that our approach can learn quickly from a variety of noisy feedback.
arXiv Detail & Related papers (2021-04-02T12:42:12Z)
- Deep Discourse Analysis for Generating Personalized Feedback in Intelligent Tutor Systems [4.716555240531893]
We explore creating automated, personalized feedback in an intelligent tutoring system (ITS).
Our goal is to pinpoint correct and incorrect concepts in student answers in order to achieve better student learning gains.
arXiv Detail & Related papers (2021-03-13T20:33:10Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Effects of Human vs. Automatic Feedback on Students' Understanding of AI Concepts and Programming Style [0.0]
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses.
There is a relative lack of data directly comparing student outcomes when receiving computer-generated feedback and human-written feedback.
This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance.
arXiv Detail & Related papers (2020-11-20T21:40:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.