Efficiency of Learning from Proof Blocks Versus Writing Proofs
- URL: http://arxiv.org/abs/2211.09609v2
- Date: Fri, 16 Dec 2022 21:27:35 GMT
- Title: Efficiency of Learning from Proof Blocks Versus Writing Proofs
- Authors: Seth Poulsen, Yael Gertner, Benjamin Cosman, Matthew West, Geoffrey L. Herman
- Abstract summary: We describe a randomized controlled trial designed to measure the learning gains of using Proof Blocks for students learning proof by induction.
Students in the early phases of learning about proof by induction are able to learn just as much from reading lecture notes and using Proof Blocks as by reading lecture notes and writing proofs from scratch.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Proof Blocks is a software tool that provides students with a scaffolded
proof-writing experience, allowing them to drag and drop prewritten proof lines
into the correct order instead of starting from scratch. In this paper we
describe a randomized controlled trial designed to measure the learning gains
of using Proof Blocks for students learning proof by induction. The study
participants were 332 students recruited after completing the first month of
their discrete mathematics course. Students in the study took a pretest and
read lecture notes on proof by induction, completed a brief (less than 1 hour)
learning activity, and then returned one week later to complete the posttest.
Depending on the experimental condition that each student was assigned to, they
either completed only Proof Blocks problems, completed some Proof Blocks
problems and some written proofs, or completed only written proofs for their
learning activity. We find that students in the early phases of learning about
proof by induction are able to learn just as much from reading lecture notes
and using Proof Blocks as by reading lecture notes and writing proofs from
scratch, but in far less time on task. This finding complements previous
findings that Proof Blocks are useful exam questions and are viewed positively
by students.
Related papers
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- LEGO-Prover: Neural Theorem Proving with Growing Libraries [86.1191481712352]
We present LEGO-Prover, which employs a growing skill library containing verified lemmas as skills to augment the capability of LLMs used in theorem proving.
By constructing the proof modularly, LEGO-Prover enables LLMs to utilize existing skills retrieved from the library and to create new skills during the proving process.
Our ablation study indicates that these newly added skills are indeed helpful for proving theorems, resulting in an improvement from a success rate of 47.1% to 50.4%.
arXiv Detail & Related papers (2023-10-01T12:47:59Z)
- ProofBuddy: A Proof Assistant for Learning and Monitoring [0.0]
Proof competence, i.e. the ability to write and check (mathematical) proofs, is an important skill in Computer Science.
The main issues are the correct use of formal language and the ascertainment of whether proofs, especially the students' own, are complete and correct.
Many authors have suggested using proof assistants to assist in teaching proof competence, but the efficacy of the approach is unclear.
We have performed a preliminary usability study of ProofBuddy at the Technical University of Denmark.
arXiv Detail & Related papers (2023-08-14T07:08:55Z)
- LeanDojo: Theorem Proving with Retrieval-Augmented Language Models [72.54339382005732]
Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean.
Existing methods are difficult to reproduce or build on, due to private code, data, and compute requirements.
This paper introduces LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks.
We develop ReProver: an LLM-based prover augmented with retrieval for selecting premises from a vast math library.
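The premise-selection idea can be illustrated with a toy retriever. This is a hypothetical bag-of-words sketch, not ReProver's actual retriever (the real system uses a trained dense encoder over a formal math library); the lemma names and descriptions below are invented for illustration.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_premises(goal: str, library: dict, k: int = 2) -> list:
    """Rank library premises by word-overlap similarity to the goal
    statement and return the names of the top k candidates."""
    g = Counter(goal.lower().split())
    scored = sorted(
        library.items(),
        key=lambda kv: cosine(g, Counter(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# Hypothetical miniature premise library.
library = {
    "add_comm": "addition is commutative on natural numbers",
    "mul_comm": "multiplication is commutative on natural numbers",
    "append_len": "length of list append equals sum of lengths",
}
print(retrieve_premises("prove addition on natural numbers is commutative", library, k=1))
```

Swapping the word-count vectors for learned embeddings turns this sketch into the retrieval-augmented setup the paper describes: candidate premises are scored against the current goal, and the best matches are handed to the prover.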
arXiv Detail & Related papers (2023-06-27T17:05:32Z)
- NaturalProver: Grounded Mathematical Proof Generation with Language Models [84.2064569475095]
Theorem proving in natural mathematical language plays a central role in mathematical advances and education.
We develop NaturalProver, a language model that generates proofs by conditioning on background references.
NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time.
arXiv Detail & Related papers (2022-05-25T17:01:18Z)
- Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present NLProofS (Natural Language Proof Search), a novel stepwise method.
NLProofS learns to generate relevant steps conditioning on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z)
- Efficient Feedback and Partial Credit Grading for Proof Blocks Problems [0.0]
We propose an algorithm for the edit distance problem that significantly outperforms the baseline procedure of exhaustively enumerating over the entire search space.
We benchmark our algorithm on thousands of student submissions from multiple courses, showing that the baseline algorithm is intractable.
Our new algorithm has also been used for problems in many other domains where the solution space can be modeled as a DAG.
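The intractable baseline the paper improves on can be sketched in a few lines: enumerate every correct solution (every topological ordering of the dependency DAG) and take the minimum edit distance to the student's submission. This is an illustrative sketch with hypothetical line names, not the paper's faster algorithm.

```python
from itertools import permutations

def edit_distance(a, b):
    """Levenshtein distance between two sequences of proof lines."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete a line
                           dp[i][j - 1] + 1,          # insert a line
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitute
    return dp[m][n]

def baseline_partial_credit(submission, deps):
    """Exhaustive baseline: minimum edit distance from the submission to
    any topological ordering of the dependency graph `deps`, where
    deps[line] lists the lines that must come before `line`.
    Enumerating all n! permutations is exactly what makes this intractable."""
    best = None
    for perm in permutations(deps):
        pos = {line: i for i, line in enumerate(perm)}
        if all(pos[d] < pos[line] for line in deps for d in deps[line]):
            dist = edit_distance(list(submission), list(perm))
            best = dist if best is None else min(best, dist)
    return best

# Hypothetical three-line induction proof: the inductive step depends on
# the base case and the inductive hypothesis.
deps = {"base": [], "hyp": [], "step": ["base", "hyp"]}
print(baseline_partial_credit(["hyp", "base", "step"], deps))  # 0: a valid ordering
```

A distance of 0 means full credit; larger distances translate into proportionally smaller partial credit.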
arXiv Detail & Related papers (2022-04-08T17:44:59Z)
- Natural Language Proof Checking in Introduction to Proof Classes -- First Experiences with Diproche [0.0]
We present and analyze the employment of the Diproche system in a one-semester mathematics beginners lecture.
The system is used to check the students' solution attempts to proving exercises in Boolean set theory and elementary number theory.
arXiv Detail & Related papers (2022-02-08T00:07:57Z)
- Proof Blocks: Autogradable Scaffolding Activities for Learning to Write Proofs [0.4588028371034407]
Proof Blocks is a software tool which enables students to write proofs by dragging and dropping prewritten proof lines into the correct order.
These proofs can be graded completely automatically, enabling students to receive rapid feedback on how they are doing with their proofs.
When constructing a problem, the instructor specifies the dependency graph of the lines of the proof, so that any correct arrangement of the lines can receive full credit.
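That grading rule reduces to a simple check: a submission earns full credit exactly when it is a topological ordering of the instructor's dependency graph. The sketch below is a minimal illustration of this idea, not the Proof Blocks implementation; the line names and the three-line induction example are hypothetical.

```python
from typing import Dict, List

def is_full_credit(order: List[str], deps: Dict[str, List[str]]) -> bool:
    """Return True if the submitted line order respects every dependency.

    `deps[line]` lists the lines that must appear before `line`.
    """
    position = {line: i for i, line in enumerate(order)}
    # The submission must use exactly the lines in the dependency graph.
    if set(position) != set(deps) or len(order) != len(deps):
        return False
    return all(position[d] < position[line]
               for line, required in deps.items()
               for d in required)

# Hypothetical induction proof: base case and inductive hypothesis can
# appear in either order, but both must precede the inductive step.
deps = {"base": [], "hyp": [], "step": ["base", "hyp"]}
print(is_full_credit(["base", "hyp", "step"], deps))  # True
print(is_full_credit(["step", "base", "hyp"], deps))  # False
```

Because the check accepts any topological ordering, students are not penalized for choosing a different but equally valid arrangement of the lines.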
arXiv Detail & Related papers (2021-06-07T17:03:58Z)
- Comprehensible Counterfactual Explanation on Kolmogorov-Smirnov Test [56.5373227424117]
We tackle the problem of producing counterfactual explanations for test data failing the Kolmogorov-Smirnov (KS) test.
We develop an efficient algorithm MOCHE that avoids enumerating and checking an exponential number of subsets of the test set failing the KS test.
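For background on the statistic involved, the two-sample KS statistic is the maximum vertical gap between the two empirical CDFs; a counterfactual explanation in the paper's sense identifies points whose removal drives this gap back below the rejection threshold. The following is a generic computation of the statistic, not the MOCHE algorithm.

```python
import bisect

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples x and y.
    It suffices to evaluate at the sample points, since the ECDFs
    only change there."""
    xs, ys = sorted(x), sorted(y)
    gap = 0.0
    for p in xs + ys:
        fx = bisect.bisect_right(xs, p) / len(xs)  # ECDF of x at p
        fy = bisect.bisect_right(ys, p) / len(ys)  # ECDF of y at p
        gap = max(gap, abs(fx - fy))
    return gap

# Identical samples give statistic 0; disjoint samples give statistic 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]))  # 0.0
print(ks_statistic([0, 0, 0], [1, 1, 1]))  # 1.0
```

The hard part, which MOCHE addresses, is that naively searching for a passing subset of a failing test set means checking exponentially many subsets against this statistic.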
arXiv Detail & Related papers (2020-11-01T06:46:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantee as to the quality of this information and is not responsible for any consequences of its use.