EvoGPT-f: An Evolutionary GPT Framework for Benchmarking Formal Math Languages
- URL: http://arxiv.org/abs/2402.16878v1
- Date: Mon, 12 Feb 2024 19:10:11 GMT
- Title: EvoGPT-f: An Evolutionary GPT Framework for Benchmarking Formal Math Languages
- Authors: Johnathan Mercer
- Abstract summary: Formal mathematics is the discipline of translating mathematics into a programming language.
This paper introduces an evolutionary framework for the first systematic quantitative analysis of the differential machine learnability of five formal math corpora.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Formal mathematics is the discipline of translating mathematics into a
programming language in which any statement can be unequivocally checked by a
computer. Mathematicians and computer scientists have spent decades of
painstaking formalization efforts developing languages such as Coq, HOL, and
Lean. Machine learning research has converged on these formal math corpora and
given rise to an assortment of methodologies to aid in interactive and
automated theorem proving. However, prior work has primarily focused on one
method, for one proof task, in one language. This paper introduces EvoGPT-f: a
novel evolutionary framework for the first systematic quantitative analysis of
the differential machine learnability of five formal math corpora (Lean 3, Lean
4, Coq, HOL 4, HOL Light) using four tokenization methods (character,
word-level, Byte Pair Encoding, and the StarCoder tokenizer). This paper does not
put to rest the question of the "best" or "easiest" language to learn. Rather,
this framework and preliminary findings begin to illuminate the differential
machine learnability of these languages, offering a foundation to forge more
systematic quantitative and qualitative comparative research across
communities.
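To make the tokenization comparison concrete, the sketch below contrasts character-level, word-level, and byte-pair-encoding token counts on the same formal-math string. It is a minimal illustration, not the paper's implementation: the Lean snippet is invented, the BPE trainer is a toy merge loop, and the study's actual BPE and StarCoder tokenizers use vocabularies trained on full corpora.

```python
from collections import Counter

# Illustrative Lean 4 snippet standing in for a formal math corpus.
LEAN_SNIPPET = "theorem add_comm' (a b : Nat) : a + b = b + a := by simp [Nat.add_comm]"

def char_tokens(text):
    """Character-level: one token per character."""
    return list(text)

def word_tokens(text):
    """Word-level: a crude whitespace split."""
    return text.split()

def merge_pair(seq, pair):
    """Replace each adjacent occurrence of `pair` in `seq` with its concatenation."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(seq[i] + seq[i + 1])
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def train_bpe(words, num_merges):
    """Toy byte-pair encoding: greedily merge the most frequent adjacent pair."""
    seqs = [list(w) for w in words]
    merges = []
    for _ in range(num_merges):
        pairs = Counter(p for s in seqs for p in zip(s, s[1:]))
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]
        merges.append(best)
        seqs = [merge_pair(s, best) for s in seqs]
    return merges

def bpe_tokens(text, merges):
    """Segment each whitespace-separated word with the learned merges."""
    tokens = []
    for word in text.split():
        seq = list(word)
        for pair in merges:
            seq = merge_pair(seq, pair)
        tokens.extend(seq)
    return tokens

merges = train_bpe(word_tokens(LEAN_SNIPPET), num_merges=30)
for name, toks in [("char", char_tokens(LEAN_SNIPPET)),
                   ("word", word_tokens(LEAN_SNIPPET)),
                   ("bpe", bpe_tokens(LEAN_SNIPPET, merges))]:
    print(f"{name:>4}: {len(toks):3d} tokens")
```

The trade-off this surfaces, shorter sequences at the cost of a larger vocabulary, is precisely the axis along which the four tokenization methods differ across the five corpora.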
Related papers
- LeanAgent: Lifelong Learning for Formal Theorem Proving
We present LeanAgent, a novel lifelong learning framework for theorem proving.
LeanAgent continuously generalizes to and improves on ever-expanding mathematical knowledge.
It successfully proves 162 theorems previously unproved by humans across 23 diverse Lean repositories.
arXiv Detail & Related papers (2024-10-08T17:11:24Z) - Mathematical Formalized Problem Solving and Theorem Proving in Different Fields in Lean 4 [0.0]
Using computerized formal languages like Lean 4 to prove mathematical theorems has a significant impact on mathematical formalization.
This paper introduces the basic structure and tactics of Lean 4 and examines how AI can assist the mathematical formalization process to improve its performance.
arXiv Detail & Related papers (2024-09-09T18:21:28Z) - Lean Workbook: A large-scale Lean problem set formalized from natural language math problems [50.22847430754973]
Large language models still struggle with mathematical theorem proving in formal languages like Lean.
A significant challenge in this area is the scarcity of training data available in these formal languages.
We propose a novel pipeline that iteratively generates and filters synthetic data to translate natural language mathematical problems into Lean 4 statements; a minimal sketch of such a generate-and-filter loop appears after this list.
arXiv Detail & Related papers (2024-06-06T08:25:43Z) - math-PVS: A Large Language Model Framework to Map Scientific
Publications to PVS Theories [10.416375584563728]
This work investigates the applicability of large language models (LLMs) in formalizing advanced mathematical concepts.
We envision an automated process, called math-PVS, to extract and formalize mathematical theorems from research papers.
arXiv Detail & Related papers (2023-10-25T23:54:04Z) - A New Approach Towards Autoformalization [7.275550401145199]
Autoformalization is the task of translating natural language mathematics into a formal language that can be verified by a program.
Research-level mathematics requires large amounts of background knowledge and context.
We propose an avenue towards tackling autoformalization for research-level mathematics, by breaking the task into easier and more approachable subtasks.
arXiv Detail & Related papers (2023-10-12T00:50:24Z) - A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z) - Lila: A Unified Benchmark for Mathematical Reasoning [59.97570380432861]
LILA is a unified mathematical reasoning benchmark consisting of 23 diverse tasks along four dimensions.
We construct our benchmark by extending 20 existing datasets with task instructions and solutions in the form of Python programs.
We introduce BHASKARA, a general-purpose mathematical reasoning model trained on LILA.
arXiv Detail & Related papers (2022-10-31T17:41:26Z) - JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem
Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike text in other standard NLP tasks, mathematical texts are difficult to understand because they involve mathematical terminology, symbols, and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z) - Learning to Match Mathematical Statements with Proofs [37.38969121408295]
The task is designed to improve the processing of research-level mathematical texts.
We release a dataset for the task, consisting of over 180k statement-proof pairs.
We show that considering the assignment problem globally and using weighted bipartite matching algorithms substantially improves performance on the task; see the matching sketch after this list.
arXiv Detail & Related papers (2021-02-03T15:38:54Z) - Generative Language Modeling for Automated Theorem Proving [94.01137612934842]
This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans might be addressable via generation from language models.
We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance.
arXiv Detail & Related papers (2020-09-07T19:50:10Z)