Solving Linear Algebra by Program Synthesis
- URL: http://arxiv.org/abs/2111.08171v1
- Date: Tue, 16 Nov 2021 01:16:43 GMT
- Title: Solving Linear Algebra by Program Synthesis
- Authors: Iddo Drori and Nakul Verma
- Abstract summary: We solve MIT's Linear Algebra 18.06 and Columbia University's Computational Linear Algebra COMS3251 courses with perfect accuracy by interactive program synthesis.
This surprisingly strong result is achieved by turning the course questions into programming tasks and then running the programs to produce the correct answers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We solve MIT's Linear Algebra 18.06 and Columbia University's
Computational Linear Algebra COMS3251 courses with perfect accuracy by
interactive program synthesis. This surprisingly strong result is achieved by
turning the course questions into programming tasks and then running the
programs to produce the correct answers. We use OpenAI Codex with zero-shot
learning, without providing any examples in the prompts, to synthesize code
from questions. We quantify the difference between the original question text
and the transformed question text that yields a correct answer. Since none of
the COMS3251 questions are available online, the model cannot be overfitting to them. We go
beyond just generating code for questions with numerical answers by
interactively generating code that also produces visually pleasing plots as
output. Finally, we automatically generate new questions given a few sample
questions which may be used as new course content. This work is a significant
step forward in solving quantitative math problems and opens the door for
solving many university level STEM courses by machine.
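The pipeline the abstract describes can be sketched in miniature: a course question is rephrased as a programming task, a program is synthesized (here written by hand for illustration; the paper uses OpenAI Codex), and running the program yields the answer. The specific question below is a hypothetical example, not one drawn from 18.06 or COMS3251.

```python
import numpy as np

# Hypothetical course question: "Solve the system x + 2y = 5, 3x - y = 1."
# Transformed into a programming task, the synthesized program builds the
# coefficient matrix and right-hand side, then executes a solver.
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

solution = np.linalg.solve(A, b)
print(solution)  # [1. 2.]  (i.e. x = 1, y = 2)
```

Executing the program, rather than generating an answer string directly, is what lets the approach return exact numerical results.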
Related papers
- Limits of an AI program for solving college math problems [0.0]
A neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level.
The system they describe is indeed impressive; however, the above description is very much overstated.
The work of solving the problems is done, not by a neural network, but by the symbolic algebra package Sympy.
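To illustrate the kind of symbolic work that Sympy carries out in such a pipeline, here is a minimal hand-written sketch; the question is hypothetical and not taken from the paper.

```python
import sympy as sp

# Hypothetical question: "Differentiate x**3 * sin(x)."
# The symbolic manipulation is performed entirely by Sympy, not by a
# neural network; the model's role is only to emit code like this.
x = sp.symbols('x')
derivative = sp.diff(x**3 * sp.sin(x), x)

# Cross-check against the product rule, again via Sympy.
expected = 3 * x**2 * sp.sin(x) + x**3 * sp.cos(x)
assert sp.simplify(derivative - expected) == 0
print(derivative)
```

This division of labor is the crux of the critique: the language model translates the question into a call to a symbolic engine, which does the actual mathematics.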
arXiv Detail & Related papers (2022-08-14T20:10:14Z) - JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem
Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z) - From Human Days to Machine Seconds: Automatically Answering and
Generating Machine Learning Final Exams [10.25071232250652]
A final exam in machine learning at a top institution such as MIT, Harvard, or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a human level, on finals available online after the models were trained, and automatically generate new human-quality final exam questions in seconds.
arXiv Detail & Related papers (2022-06-11T06:38:06Z) - Write a Line: Tests with Answer Templates and String Completion Hints
for Self-Learning in a CS1 Course [0.0]
This paper reports the results of using regular-expression-based questions with string completion hints in a CS1 course for 4 years with 497 students.
The evaluation results show that Perl-compatible regular expressions provide good precision and recall (more than 99%) when used for questions requiring writing a single line of code.
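A regular-expression answer template of this kind can be sketched as follows; the pattern and student answer below are illustrative assumptions, not taken from the paper.

```python
import re

# Hypothetical CS1 task: "Write a single line that prints the numbers 0..9."
# An answer template tolerates whitespace variation while pinning down the
# required structure of the one-line solution.
pattern = r"^\s*for\s+i\s+in\s+range\(\s*10\s*\)\s*:\s*print\(\s*i\s*\)\s*$"

student_answer = "for i in range(10): print(i)"
print(bool(re.match(pattern, student_answer)))  # True
```

Because a single line of code has limited structural variety, templates like this can achieve the high precision and recall the paper reports.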
arXiv Detail & Related papers (2022-04-19T17:53:35Z) - A Neural Network Solves and Generates Mathematics Problems by Program
Synthesis: Calculus, Differential Equations, Linear Algebra, and More [8.437319139670116]
We turn questions into programming tasks, automatically generate programs, and then execute them.
This is the first work to automatically solve, grade, and generate university-level Mathematics course questions at scale.
arXiv Detail & Related papers (2021-12-31T18:57:31Z) - CodeQA: A Question Answering Dataset for Source Code Comprehension [82.63394952538292]
Given a code snippet and a question, a textual answer is required to be generated.
CodeQA contains a Java dataset with 119,778 question-answer pairs and a Python dataset with 70,085 question-answer pairs.
arXiv Detail & Related papers (2021-09-17T06:06:38Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Solving Machine Learning Problems [0.315565869552558]
This work trains a machine learning model to solve machine learning problems from a University undergraduate level course.
We generate a new training set of questions and answers consisting of course exercises, homework, and quiz questions from MIT's 6.036 Introduction to Machine Learning course.
Our system demonstrates an overall accuracy of 96% for open-response questions and 97% for multiple-choice questions, compared with MIT students' average of 93%.
arXiv Detail & Related papers (2021-07-02T18:52:50Z) - Few-Shot Complex Knowledge Base Question Answering via Meta
Reinforcement Learning [55.08037694027792]
Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z) - Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via
Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z) - Understanding Unnatural Questions Improves Reasoning over Text [54.235828149899625]
Complex question answering (CQA) over raw text is a challenging task.
Learning an effective CQA model requires large amounts of human-annotated data.
We address the challenge of learning a high-quality programmer (parser) by projecting natural human-generated questions into unnatural machine-generated questions.
arXiv Detail & Related papers (2020-10-19T10:22:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.