Learning to Guide a Saturation-Based Theorem Prover
- URL: http://arxiv.org/abs/2106.03906v1
- Date: Mon, 7 Jun 2021 18:35:57 GMT
- Title: Learning to Guide a Saturation-Based Theorem Prover
- Authors: Ibrahim Abdelaziz, Maxwell Crouse, Bassem Makni, Vernon Austel,
Cristina Cornelio, Shajith Ikbal, Pavan Kapanipathi, Ndivhuwo Makondo,
Kavitha Srinivas, Michael Witbrock, Achille Fokoue
- Abstract summary: TRAIL is a deep learning-based approach to theorem proving that characterizes core elements of saturation-based theorem proving within a neural framework.
To the best of our knowledge, TRAIL is the first reinforcement learning-based approach to exceed the performance of a state-of-the-art traditional theorem prover.
- Score: 9.228237801323042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional automated theorem provers have relied on manually tuned
heuristics to guide how they perform proof search. Recently, however, there has
been a surge of interest in the design of learning mechanisms that can be
integrated into theorem provers to improve their performance automatically. In
this work, we introduce TRAIL, a deep learning-based approach to theorem
proving that characterizes core elements of saturation-based theorem proving
within a neural framework. TRAIL leverages (a) an effective graph neural
network for representing logical formulas, (b) a novel neural representation of
the state of a saturation-based theorem prover in terms of processed clauses
and available actions, and (c) a novel representation of the inference
selection process as an attention-based action policy. We show through a
systematic analysis that these components allow TRAIL to significantly
outperform previous reinforcement learning-based theorem provers on two
standard benchmark datasets (up to 36% more theorems proved). In addition, to
the best of our knowledge, TRAIL is the first reinforcement learning-based
approach to exceed the performance of a state-of-the-art traditional theorem
prover on a standard theorem proving benchmark (solving up to 17% more
problems).
Related papers
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- Towards a General Framework for Continual Learning with Pre-training [55.88910947643436]
We present a general framework for continual learning of sequentially arrived tasks with the use of pre-training.
We decompose its objective into three hierarchical components, including within-task prediction, task-identity inference, and task-adaptive prediction.
We propose an innovative approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics.
arXiv Detail & Related papers (2023-10-21T02:03:38Z)
- Neural Theorem Provers Delineating Search Area Using RNN [2.462063246087401]
A new method, RNNNTP, is proposed that uses a generalized EM-based approach to continuously improve the computational efficiency of Neural Theorem Provers (NTPs).
The relation generator is trained effectively and interpretably, so that the whole model can operate in step with the progress of training, and computational efficiency is greatly improved.
arXiv Detail & Related papers (2022-03-14T10:44:11Z)
- Proving Theorems using Incremental Learning and Hindsight Experience Replay [45.277067974919106]
We propose a general incremental learning algorithm for training domain-specific provers for first-order logic without equality.
We adapt hindsight experience replay to theorem proving, so as to be able to learn even when no proof can be found.
We show that provers trained this way can match and sometimes surpass state-of-the-art traditional provers on the TPTP dataset.
arXiv Detail & Related papers (2021-12-20T16:40:26Z)
- Graph Contrastive Pre-training for Effective Theorem Reasoning [6.721845345130468]
Existing methods show promising results on tactic prediction by learning a deep neural network based model from proofs written by human experts.
We propose NeuroTactic, a novel extension with a special focus on improving the representation learning for theorem proving.
arXiv Detail & Related papers (2021-08-24T16:14:54Z)
- Training a First-Order Theorem Prover from Synthetic Data [50.23600875138756]
A major challenge in applying machine learning to automated theorem proving is the scarcity of training data.
We propose an approach that relies on training purely with synthetically generated theorems, without any human data aside from axioms.
Our neural prover outperforms the state-of-the-art E-prover on this synthetic data in both time and search steps.
arXiv Detail & Related papers (2021-03-05T17:01:34Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning [6.764610878007278]
We propose a novel approach to interactive theorem-proving (ITP) using deep reinforcement learning.
We formulate the process of ITP as a Markov decision process (MDP) in which each state represents a set of potential derivation paths.
We show that the framework achieves performance comparable to approaches that rely on human experts.
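The MDP formulation in the summary above, where each state is a set of potential derivation paths, can be sketched as follows. This is a hypothetical illustration only: the names `DerivationPath`, `ProofMDP`, and `apply_tactic` are invented for exposition and are not TacticZero's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DerivationPath:
    goals: tuple          # remaining subgoals along this path
    tactics: tuple = ()   # tactics applied so far

@dataclass
class ProofMDP:
    # State: the set of derivation paths currently under consideration.
    state: set

    def apply_tactic(self, path, tactic, new_goals):
        """Transition: extend `path` with `tactic`, replacing it in the
        state by a new path whose remaining goals are `new_goals` (as
        determined by the underlying prover, elided here)."""
        new_path = DerivationPath(tuple(new_goals), path.tactics + (tactic,))
        self.state = (self.state - {path}) | {new_path}
        return new_path

    def reward(self):
        # Terminal reward: 1 if some path has discharged all of its goals.
        return 1.0 if any(not p.goals for p in self.state) else 0.0
```

A policy trained with deep RL would then select, at each state, which path to extend and which tactic to apply.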
arXiv Detail & Related papers (2021-02-19T06:08:39Z)
- Learning to Prove Theorems by Learning to Generate Theorems [71.46963489866596]
We learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.
Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover.
arXiv Detail & Related papers (2020-02-17T16:06:02Z)
- An Experimental Study of Formula Embeddings for Automated Theorem Proving in First-Order Logic [13.633012068936894]
We study and experimentally compare pattern-based embeddings that are applied in current systems with popular graph-based encodings.
Our experiments show that the runtime advantages of simpler encoding schemes are outweighed by the performance gains of more complex graph-based embeddings.
arXiv Detail & Related papers (2020-02-02T16:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.