Peano: Learning Formal Mathematical Reasoning
- URL: http://arxiv.org/abs/2211.15864v1
- Date: Tue, 29 Nov 2022 01:42:26 GMT
- Title: Peano: Learning Formal Mathematical Reasoning
- Authors: Gabriel Poesia and Noah D. Goodman
- Abstract summary: General mathematical reasoning is computationally undecidable, but humans routinely solve new problems.
We posit that central to both puzzles is the structure of procedural abstractions underlying mathematics.
We explore this idea in a case study on 5 sections of beginning algebra on the Khan Academy platform.
- Score: 35.086032962873226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: General mathematical reasoning is computationally undecidable, but humans
routinely solve new problems. Moreover, discoveries developed over centuries
are taught to subsequent generations quickly. What structure enables this, and
how might that inform automated mathematical reasoning? We posit that central
to both puzzles is the structure of procedural abstractions underlying
mathematics. We explore this idea in a case study on 5 sections of beginning
algebra on the Khan Academy platform. To define a computational foundation, we
introduce Peano, a theorem-proving environment where the set of valid actions
at any point is finite. We use Peano to formalize introductory algebra problems
and axioms, obtaining well-defined search problems. We observe existing
reinforcement learning methods for symbolic reasoning to be insufficient to
solve harder problems. Adding the ability to induce reusable abstractions
("tactics") from its own solutions allows the agent to make steady progress,
solving all problems. Furthermore, these abstractions induce an order on the
problems, which were seen in random order during training. The recovered order has significant
agreement with the expert-designed Khan Academy curriculum, and
second-generation agents trained on the recovered curriculum learn
significantly faster. These results illustrate the synergistic role of
abstractions and curricula in the cultural transmission of mathematics.
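To make the abstract's setup concrete, below is a minimal sketch, in Python, of the two ingredients it describes: a proving environment whose valid actions at any state form a finite, enumerable set (so each formalized problem becomes an ordinary graph-search problem), and the induction of a reusable "tactic" from the agent's own solutions. The toy term language, the action names (comm, sub_const), and the frequency-based tactic rule are hypothetical illustrations of the idea, not the actual Peano interface or algorithm.

```python
from collections import Counter, deque

# Toy states: (lhs, rhs), an expression tree equal to an integer.
# Expressions are the variable "x", an int, or ("+", left, right).
# Goal: reach ("x", value), i.e. the variable isolated on the left.

def valid_actions(state):
    """Enumerate the finite set of (action_name, next_state) pairs."""
    lhs, rhs = state
    actions = []
    if isinstance(lhs, tuple) and lhs[0] == "+":
        _, a, b = lhs
        # Axiom: commutativity of addition on the left-hand side.
        actions.append(("comm", (("+", b, a), rhs)))
        # Axiom: subtract a constant right summand from both sides.
        if isinstance(b, int):
            actions.append(("sub_const", (a, rhs - b)))
    return actions

def bfs_solve(start, max_depth=10):
    """Breadth-first search over axiom applications; returns an action list."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state[0] == "x":          # variable isolated: problem solved
            return path
        if len(path) >= max_depth:
            continue
        for name, nxt in valid_actions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

def induce_tactic(solutions, length=2):
    """Name the most frequent contiguous action subsequence as a 'tactic'."""
    counts = Counter(
        tuple(sol[i:i + length])
        for sol in solutions if sol
        for i in range(len(sol) - length + 1)
    )
    return counts.most_common(1)[0][0] if counts else None

if __name__ == "__main__":
    # (x + 3) + 5 = 12  and  (2 + x) + 4 = 9, written as nested tuples.
    problems = [(("+", ("+", "x", 3), 5), 12), (("+", ("+", 2, "x"), 4), 9)]
    solutions = [bfs_solve(p) for p in problems]
    print(solutions)                 # e.g. [['sub_const', 'sub_const'], ...]
    print(induce_tactic(solutions))  # a reusable two-step abstraction
```

In the abstract's terms, the order that such abstractions impose on problems is what recovers a curriculum; the sketch stops at extracting a single abstraction from two toy solutions.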
Related papers
- LeanAgent: Lifelong Learning for Formal Theorem Proving [85.39415834798385]
We present LeanAgent, a novel lifelong learning framework for theorem proving.
LeanAgent continuously generalizes to and improves on ever-expanding mathematical knowledge.
It successfully proves 162 theorems previously unproved by humans across 23 diverse Lean repositories.
arXiv Detail & Related papers (2024-10-08T17:11:24Z) - Machine learning and information theory concepts towards an AI
Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z) - Symbolic Equation Solving via Reinforcement Learning [9.361474110798143]
We propose a novel deep-learning interface involving a reinforcement-learning agent that operates a symbolic stack calculator.
By construction, this system is capable of exact transformations and immune to hallucination.
arXiv Detail & Related papers (2024-01-24T13:42:24Z) - A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z) - LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned
Symbolic Abstractions [13.69691843677107]
Learning Mathematical Abstractions (LEMMA) is an algorithm that implements this idea for reinforcement learning agents in mathematical domains.
We evaluate LEMMA on two mathematical reasoning tasks--equation solving and fraction simplification--in a step-by-step fashion.
arXiv Detail & Related papers (2022-11-16T04:59:08Z) - JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem
Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z) - NaturalProofs: Mathematical Theorem Proving in Natural Language [132.99913141409968]
We develop NaturalProofs, a multi-domain corpus of mathematical statements and their proofs.
NaturalProofs unifies broad coverage, deep coverage, and low-resource mathematical sources.
We benchmark strong neural methods on mathematical reference retrieval and generation tasks.
arXiv Detail & Related papers (2021-03-24T03:14:48Z)