Learning Formal Mathematics From Intrinsic Motivation
- URL: http://arxiv.org/abs/2407.00695v2
- Date: Tue, 05 Nov 2024 01:40:45 GMT
- Title: Learning Formal Mathematics From Intrinsic Motivation
- Authors: Gabriel Poesia, David Broman, Nick Haber, Noah D. Goodman
- Abstract summary: Minimo is an agent that learns to pose problems for itself (conjecturing) and solve them (theorem proving).
We combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model.
Our agent targets generating hard but provable conjectures - a moving target, since its own theorem proving ability also improves as it trains.
- Abstract: How did humanity coax mathematics from the aether? We explore the Platonic view that mathematics can be discovered from its axioms - a game of conjecture and proof. We describe Minimo (Mathematics from Intrinsic Motivation): an agent that jointly learns to pose challenging problems for itself (conjecturing) and solve them (theorem proving). Given a mathematical domain axiomatized in dependent type theory, we first combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model. Our method guarantees well-formed conjectures by construction, even as we start with a randomly initialized model. We use the same model to represent a policy and value function for guiding proof search. Our agent targets generating hard but provable conjectures - a moving target, since its own theorem proving ability also improves as it trains. We propose novel methods for hindsight relabeling on proof search trees to significantly improve the agent's sample efficiency in both tasks. Experiments on 3 axiomatic domains (propositional logic, arithmetic and group theory) demonstrate that our agent can bootstrap from only the axioms, self-improving in generating true and challenging conjectures and in finding proofs.
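Two mechanisms in the abstract are concrete enough to illustrate. First, constrained decoding: sampling conjectures by masking any next token that would break well-formedness, so every completed sample is valid by construction, even from a randomly initialized model. Below is a minimal sketch under strong assumptions: a toy prefix-form propositional grammar and a hypothetical `logits_fn` standing in for the language model. It illustrates token masking, not Minimo's actual type-directed synthesis over dependent type theory.

```python
# Illustrative sketch of grammar-constrained sampling (not the paper's code).
# A conjecture is a propositional formula in prefix form, e.g. "and p q".
import math
import random

ATOMS = ["p", "q", "r"]
CONNECTIVES = ["and", "or", "implies"]

def valid_next_tokens(pending):
    """Tokens that keep the partial formula well-formed.

    `pending` counts subformulas still owed: an atom fills one slot;
    a connective fills one slot and opens two more. <eos> is legal
    only when nothing is owed.
    """
    return ["<eos>"] if pending == 0 else ATOMS + CONNECTIVES

def sample_conjecture(logits_fn, max_len=20):
    """Sample a formula that is guaranteed well-formed by construction."""
    prefix, pending = [], 1  # one formula owed at the start
    for _ in range(max_len):
        allowed = valid_next_tokens(pending)
        # Mask: renormalize the model's distribution over allowed tokens only.
        weights = [math.exp(logits_fn(prefix, t)) for t in allowed]
        token = random.choices(allowed, weights=weights)[0]
        if token == "<eos>":
            return " ".join(prefix)
        prefix.append(token)
        pending += 1 if token in CONNECTIVES else -1
    # Length cap hit: close every owed slot with atoms, staying valid.
    prefix.extend(random.choice(ATOMS) for _ in range(pending))
    return " ".join(prefix)

# Even uniform logits (a "randomly initialized model") yield a valid formula.
print(sample_conjecture(lambda prefix, token: 0.0))  # e.g. "implies p and q r"
```

In the paper's setting the grammar check would instead be a type checker over partial terms of dependent type theory, but the masking principle is the same. Second, hindsight relabeling on proof search trees: even when the original conjecture goes unproved, any subgoal that search did manage to close is itself a true statement with a proof, and can be harvested as a training example. A minimal sketch, with an assumed node layout that is not Minimo's actual data structure:

```python
# Illustrative sketch of hindsight relabeling over a proof search tree.
from dataclasses import dataclass, field

@dataclass
class Node:
    statement: str                 # the (sub)goal at this node
    proved: bool                   # whether search closed this subgoal
    proof: str = ""                # tactic script, if proved
    children: list["Node"] = field(default_factory=list)

def relabel(root: Node) -> list[tuple[str, str]]:
    """Collect every proved subgoal as a (statement, proof) training pair."""
    examples, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.proved:
            examples.append((node.statement, node.proof))
        stack.extend(node.children)
    return examples
```

Each harvested pair can serve as extra training signal for both conjecturing and proving, consistent with the abstract's claim that relabeling improves sample efficiency on both tasks.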
Related papers
- ATG: Benchmarking Automated Theorem Generation for Generative Language Models [83.93978859348313]
Humans can develop new theorems to explore broader and more complex mathematical results.
Current generative language models (LMs) have made significant progress in automatically proving theorems.
This paper proposes an Automated Theorem Generation benchmark that evaluates whether an agent can automatically generate valuable (and possibly brand new) theorems.
arXiv Detail & Related papers (2024-05-05T02:06:37Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at System 1 abilities.
It takes an information-theoretic perspective to ask what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- Mathematical conjecture generation using machine intelligence [0.0]
We focus on strict inequalities of the type f < g and associate them with a vector space.
We develop a structural understanding of this conjecture space by studying linear automorphisms of this manifold.
We propose an algorithmic pipeline to generate novel conjectures using geometric gradient descent.
arXiv Detail & Related papers (2023-06-12T17:58:38Z)
- Peano: Learning Formal Mathematical Reasoning [35.086032962873226]
General mathematical reasoning is computationally undecidable, but humans routinely solve new problems.
We posit that central to both puzzles is the structure of procedural abstractions underlying mathematics.
We explore this idea in a case study on 5 sections of beginning algebra on the Khan Academy platform.
arXiv Detail & Related papers (2022-11-29T01:42:26Z)
- Towards a Mathematics Formalisation Assistant using Large Language Models [5.485439959027125]
We explore the abilities of a large language model (Codex) to help with formalisation in the Lean theorem prover.
Codex is able to formalise short mathematical statements at undergraduate level with nearly 75% accuracy on 120 theorem statements.
We show that, with a new prompting strategy, Codex can formalise the natural-language proofs of these statements, with at least one out of twelve completions being easy to repair into a complete proof.
arXiv Detail & Related papers (2022-11-14T16:52:32Z)
- NaturalProver: Grounded Mathematical Proof Generation with Language Models [84.2064569475095]
Theorem proving in natural mathematical language plays a central role in mathematical advances and education.
We develop NaturalProver, a language model that generates proofs by conditioning on background references.
NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time.
arXiv Detail & Related papers (2022-05-25T17:01:18Z)
- Noisy Deductive Reasoning: How Humans Construct Math, and How Math Constructs Universes [0.5874142059884521]
We present a computational model of mathematical reasoning according to which mathematics is a fundamentally stochastic process.
We show that this framework gives a compelling account of several aspects of mathematical practice.
arXiv Detail & Related papers (2020-10-28T19:43:14Z)
- Generative Language Modeling for Automated Theorem Proving [94.01137612934842]
This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans might be addressable via generation from language models.
We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance.
arXiv Detail & Related papers (2020-09-07T19:50:10Z)
- Epistemic Phase Transitions in Mathematical Proofs [0.0]
We show that, under a cognitively plausible belief formation mechanism, belief in mathematical arguments can undergo a dramatic and rapidly propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates.
Our results bear both on recent work in the history and philosophy of mathematics on how we understand proofs, and on a question, basic to cognitive science, of how we justify complex beliefs.
arXiv Detail & Related papers (2020-03-31T18:39:56Z)
- Learning to Prove Theorems by Learning to Generate Theorems [71.46963489866596]
We learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.
Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover.
arXiv Detail & Related papers (2020-02-17T16:06:02Z)