From Human Days to Machine Seconds: Automatically Answering and
Generating Machine Learning Final Exams
- URL: http://arxiv.org/abs/2206.05442v7
- Date: Wed, 28 Jun 2023 04:42:05 GMT
- Title: From Human Days to Machine Seconds: Automatically Answering and
Generating Machine Learning Final Exams
- Authors: Iddo Drori, Sarah J. Zhang, Reece Shuttleworth, Sarah Zhang, Keith
Tyser, Zad Chin, Pedro Lantigua, Saisamrit Surbehera, Gregory Hunter, Derek
Austin, Leonard Tang, Yann Hicke, Sage Simhon, Sathwik Karnik, Darnell
Granberry, Madeleine Udell
- Abstract summary: A final exam in machine learning at a top institution such as MIT, Harvard, or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a human level, on finals available online after the models were trained, and automatically generate new human-quality final exam questions in seconds.
- Score: 10.25071232250652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A final exam in machine learning at a top institution such as MIT, Harvard,
or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a
human level, on finals available online after the models were trained, and
automatically generate new human-quality final exam questions in seconds.
Previous work has developed program synthesis and few-shot learning methods to
solve university-level problem set questions in mathematics and STEM courses.
In this work, we develop and compare methods that solve final exams, which
differ from problem sets in several ways: the questions are longer, have
multiple parts, are more complicated, and span a broader set of topics. We
curate a dataset and benchmark of questions from machine learning final exams
available online and code for answering these questions and generating new
questions. We show how to generate new questions from other questions and
course notes. For reproducibility and future research on this final exam
benchmark, we use automatic checkers for multiple-choice questions, numeric
answers, and expression answers. We perform ablation studies comparing
zero-shot learning with few-shot learning and chain-of-thought prompting using
GPT-3, OPT, Codex, and ChatGPT across machine learning topics and find that
few-shot learning methods perform best. We highlight the transformative
potential of language models to streamline the writing and solution of
large-scale assessments, significantly reducing the workload from human days to
mere machine seconds. Our results suggest that rather than banning large
language models such as ChatGPT in class, instructors should teach students to
harness them by asking meta-questions about the correctness, completeness, and
originality of the generated responses, encouraging critical thinking in
academic studies.
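The few-shot setup that performed best in the ablations can be pictured as follows. This is a minimal sketch, not the authors' pipeline: the worked examples and prompt template are illustrative placeholders, and the call targets the pre-1.0 openai SDK (the paper used GPT-3, OPT, Codex, and ChatGPT).

```python
# Minimal few-shot prompting sketch; examples and model choice are illustrative.
import openai

FEW_SHOT_EXAMPLES = [
    ("Question: What is the VC dimension of axis-aligned rectangles in R^2?\n"
     "Answer: 4"),
    ("Question: For k-nearest neighbors, does increasing k raise or lower variance?\n"
     "Answer: Increasing k lowers variance (and raises bias)."),
]

def build_prompt(question: str) -> str:
    """Prepend worked examples so the model answers in the same format."""
    shots = "\n\n".join(FEW_SHOT_EXAMPLES)
    return f"{shots}\n\nQuestion: {question}\nAnswer:"

def answer_question(question: str) -> str:
    """Query a GPT-3-style completion model zero-temperature for a gradable answer."""
    response = openai.Completion.create(
        model="text-davinci-003",   # stand-in; any completion-style model works
        prompt=build_prompt(question),
        max_tokens=256,
        temperature=0.0,            # deterministic output simplifies grading
    )
    return response["choices"][0]["text"].strip()
```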
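Once answers are produced, they can be scored automatically. Below is a minimal sketch of checkers for the three answer types named in the abstract; the normalization rules, numeric tolerance, and use of sympy for symbolic equivalence are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of automatic answer checkers; thresholds and parsing are assumptions.
import math
import sympy

def check_multiple_choice(predicted: str, gold: str) -> bool:
    """Exact match on the normalized choice label, e.g. '(b)' vs 'B'."""
    norm = lambda s: s.strip().lower().strip("().")
    return norm(predicted) == norm(gold)

def check_numeric(predicted: str, gold: float, rel_tol: float = 1e-4) -> bool:
    """Parse the model's number and compare within a relative tolerance."""
    try:
        return math.isclose(float(predicted), gold, rel_tol=rel_tol)
    except ValueError:
        return False

def check_expression(predicted: str, gold: str) -> bool:
    """Symbolic equivalence: answers match if their difference simplifies to 0."""
    try:
        diff = sympy.simplify(sympy.sympify(predicted) - sympy.sympify(gold))
        return diff == 0
    except (sympy.SympifyError, TypeError):
        return False
```

For example, check_expression("2*x + x", "3*x") returns True, since the difference of the two expressions simplifies to zero even though the strings differ.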
Related papers
- Learning to Reuse Distractors to Support Multiple Choice Question Generation in Education [19.408786425460498]
This paper studies how a large existing set of manually created answers and distractors can be leveraged to help teachers create new multiple-choice questions (MCQs).
We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models.
Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach.
arXiv Detail & Related papers (2022-10-25T12:48:56Z)
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering [124.16250115608604]
We present Science Question Answering (SQA), a new benchmark that consists of 21k multimodal multiple choice questions with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations.
We show that chain-of-thought explanations improve question answering performance by 1.20% in few-shot GPT-3 and 3.99% in fine-tuned UnifiedQA.
Our analysis further shows that language models, similar to humans, benefit from explanations to learn from fewer data and achieve the same performance with just 40% of the data.
arXiv Detail & Related papers (2022-09-20T07:04:24Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols, and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
- Automatic Short Math Answer Grading via In-context Meta-learning [2.0263791972068628]
We study the problem of automatic short answer grading for students' responses to math questions.
First, we use MathBERT, a variant of the popular language model BERT adapted to mathematical content, as our base model.
Second, we use an in-context learning approach that provides scoring examples as input to the language model.
arXiv Detail & Related papers (2022-05-30T16:26:02Z)
- Solving Linear Algebra by Program Synthesis [1.0660480034605238]
We solve MIT's Linear Algebra course 18.06 and Columbia University's Computational Linear Algebra course COMS3251 with perfect accuracy by interactive program synthesis.
This surprisingly strong result is achieved by turning the course questions into programming tasks and then running the programs to produce the correct answers.
arXiv Detail & Related papers (2021-11-16T01:16:43Z)
- Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
arXiv Detail & Related papers (2021-10-15T14:36:45Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Solving Machine Learning Problems [0.315565869552558]
This work trains a machine learning model to solve machine learning problems from a university undergraduate-level course.
We generate a new training set of questions and answers consisting of course exercises, homework, and quiz questions from MIT's 6.036 Introduction to Machine Learning course.
Our system demonstrates an overall accuracy of 96% for open-response questions and 97% for multiple-choice questions, compared with MIT students' average of 93%.
arXiv Detail & Related papers (2021-07-02T18:52:50Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)