Learning to Prove Trigonometric Identities
- URL: http://arxiv.org/abs/2207.06679v1
- Date: Thu, 14 Jul 2022 06:16:17 GMT
- Title: Learning to Prove Trigonometric Identities
- Authors: Zhou Liu, Yujun Li, Zhengying Liu, Lin Li, Zhenguo Li
- Abstract summary: We construct an automatic proof system for trigonometric identities.
Our goal is not only to complete the proof, but to complete the proof in as few steps as possible.
After further improvement through reinforcement learning, we obtain AutoTrig, which produces proofs in nearly as few steps as BFS.
- Score: 36.56548303496931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic theorem proving with deep learning methods has attracted
attention recently. In this paper, we construct an automatic proof system for
trigonometric identities. We define the normalized form of trigonometric
identities, design a set of rules for the proof, and put forward a method that
can generate a theoretically unlimited number of trigonometric identities. Our
goal is not only to complete the proof, but to complete it in as few steps as
possible. To this end, we design a model that learns from proof data generated
by random BFS (rBFS), and we show theoretically and experimentally that the
model outperforms rBFS after simple imitation learning. After further
improvement through reinforcement learning, we obtain AutoTrig, which produces
proofs in nearly as few steps as BFS (the theoretically shortest method), at
only one-thousandth of the time cost. In addition, AutoTrig also outperforms
SymPy, MATLAB, and human solvers on the synthetic dataset, and performs well in
many generalization tasks.
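The search procedure the abstract describes can be sketched as a breadth-first search over rewrite rules, returning a shortest rule sequence that reduces LHS − RHS to zero. This is a minimal illustrative sketch, not the authors' system: the rule set below uses coarse SymPy transformations (`expand_trig`, `trigsimp`, etc.) as hypothetical stand-ins for the paper's fine-grained rules, and `bfs_prove` is an assumed name.

```python
from collections import deque
import sympy as sp

x = sp.symbols('x')

# Hypothetical rule set: each rule maps an expression to a successor.
# These coarse SymPy transformations stand in for the paper's rules.
RULES = {
    "expand_trig": sp.expand_trig,                  # expand compound angles
    "rewrite_sin": lambda e: e.rewrite(sp.sin),     # express via sin
    "cancel": sp.cancel,                            # normalize rational form
    "combine": sp.trigsimp,                         # combine/simplify trig terms
}

def bfs_prove(lhs, rhs, max_depth=4):
    """BFS for a shortest rule sequence reducing lhs - rhs to 0."""
    start = sp.together(lhs - rhs)
    queue = deque([(start, [])])
    seen = {sp.srepr(start)}                        # dedupe by structural form
    while queue:
        expr, path = queue.popleft()
        if expr == 0:                               # identity proved
            return path
        if len(path) >= max_depth:
            continue
        for name, rule in RULES.items():
            try:
                nxt = rule(expr)
            except Exception:                       # rule not applicable
                continue
            key = sp.srepr(nxt)
            if key not in seen:
                seen.add(key)
                queue.append((nxt, path + [name]))
    return None                                     # no proof within max_depth

proof = bfs_prove(sp.sin(x)**2 + sp.cos(x)**2, 1)
```

Because BFS explores by depth, the first sequence that reaches 0 is shortest in rule count; the paper's rBFS randomizes this exploration to generate cheap (non-optimal) training data, which imitation learning then improves on.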
Related papers
- A Combinatorial Identities Benchmark for Theorem Proving via Automated Theorem Generation [3.003569769097376]
Combinatorics is a cornerstone of mathematics, providing essential tools for analyzing discrete structures and solving problems.
To address this, we manually construct LeanComb, a benchmark of combinatorial identities formalized in Lean.
We develop an Automated Theorem Generator for Combinatorial Identities, ATG4CI, which combines candidate tactics suggested by a self-improving large language model with a reinforcement-learning tree-search approach for tactic prediction.
arXiv Detail & Related papers (2025-02-25T04:41:49Z)
- Cobblestone: Iterative Automation for Formal Verification [11.445689801392657]
Formal verification using proof assistants, such as Coq, is an effective way of improving software quality, but it is expensive.
Recent research has used machine learning to automatically synthesize proofs, reducing verification effort, but these tools are able to prove only a fraction of the desired software properties.
We introduce Cobblestone, a new proof-synthesis approach that improves on the state of the art by taking advantage of partial progress in proof synthesis attempts.
arXiv Detail & Related papers (2024-10-25T19:25:00Z)
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- Learning Formal Mathematics From Intrinsic Motivation [34.986025832497255]
Minimo is an agent that learns to pose problems for itself (conjecturing) and solve them (theorem proving).
We combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model.
Our agent targets generating hard but provable conjectures - a moving target, since its own theorem proving ability also improves as it trains.
arXiv Detail & Related papers (2024-06-30T13:34:54Z)
- Learn from Failure: Fine-Tuning LLMs with Trial-and-Error Data for Intuitionistic Propositional Logic Proving [41.23045212775232]
We demonstrate the benefit of training models that additionally learn from failed search paths.
Facing the lack of such trial-and-error data in existing open-source theorem-proving datasets, we curate a dataset on intuitionistic propositional logic theorems.
We compare our model trained on relatively short trial-and-error information (TrialMaster) with models trained only on the correct paths, and find that the former solves more unseen theorems with fewer search trials.
arXiv Detail & Related papers (2024-04-10T23:01:45Z)
- MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data [85.50740598523818]
MUSTARD is a framework that masters uniform synthesis of theorem and proof data of high quality and diversity.
We present a theorem-and-proof benchmark MUSTARDSAUCE with 5,866 valid data points.
We perform extensive analysis and demonstrate that MUSTARD generates validated high-quality step-by-step data.
arXiv Detail & Related papers (2024-02-14T05:57:58Z)
- TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models [68.65075559137608]
We propose TRIGO, an ATP benchmark that not only requires a model to reduce a trigonometric expression with step-by-step proofs but also evaluates a generative LM's reasoning ability on formulas.
We gather trigonometric expressions and their reduced forms from the web, annotate the simplification process manually, and translate it into the Lean formal language system.
We develop an automatic generator based on Lean-Gym to create dataset splits of varying difficulties and distributions in order to thoroughly analyze the model's generalization ability.
arXiv Detail & Related papers (2023-10-16T08:42:39Z)
- Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present a novel stepwise method, NLProofS (Natural Language Proof Search).
NLProofS learns to generate relevant steps conditioning on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z)
- Learning to Prove Theorems by Learning to Generate Theorems [71.46963489866596]
We learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.
Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover.
arXiv Detail & Related papers (2020-02-17T16:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information and is not responsible for any consequences of its use.