An Experimental Study of Formula Embeddings for Automated Theorem
Proving in First-Order Logic
- URL: http://arxiv.org/abs/2002.00423v2
- Date: Sun, 15 Mar 2020 16:41:53 GMT
- Title: An Experimental Study of Formula Embeddings for Automated Theorem
Proving in First-Order Logic
- Authors: Ibrahim Abdelaziz, Veronika Thost, Maxwell Crouse, Achille Fokoue
- Abstract summary: We study and experimentally compare pattern-based embeddings that are applied in current systems with popular graph-based encodings.
Our experiments show that the advantages of simpler encoding schemes in terms of runtime are outdone by more complex graph-based embeddings.
- Score: 13.633012068936894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated theorem proving in first-order logic is an active research area
which is successfully supported by machine learning. While there have been
various proposals for encoding logical formulas into numerical vectors -- from
simple strings to more involved graph-based embeddings -- little is known about
how these different encodings compare. In this paper, we study and
experimentally compare pattern-based embeddings that are applied in current
systems with popular graph-based encodings, most of which have not been
considered in the theorem proving context before. Our experiments show that the
advantages of simpler encoding schemes in terms of runtime are outdone by more
complex graph-based embeddings, which yield more efficient search strategies
and simpler proofs. To support this, we present a detailed analysis across
several dimensions of theorem prover performance beyond just proof completion
rate, thus providing empirical evidence to help guide future research on
neural-guided theorem proving towards the most promising directions.
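To make the comparison concrete, the sketch below (an illustrative toy example, not the paper's implementation) shows the two encoding families side by side for the small formula p(f(x), g(x)): a pattern-based bag of bounded-depth term walks, and a graph-based encoding that keeps the formula's structure explicit for a graph neural network.
```python
# Illustrative toy example (not the paper's implementation): two ways to encode
# the first-order formula  p(f(x), g(x))  for a learned guidance model.
from collections import Counter

# A formula as a nested tuple: (symbol, child, child, ...).
formula = ("p", ("f", ("x",)), ("g", ("x",)))

def term_walks(node, depth=2):
    """Pattern-based encoding: top-down symbol paths of bounded depth."""
    sym, children = node[0], node[1:]
    if depth == 1 or not children:
        yield (sym,)
        return
    for child in children:
        for walk in term_walks(child, depth - 1):
            yield (sym,) + walk

def formula_graph(node, nodes=None, edges=None, parent=None):
    """Graph-based encoding: nodes labelled with symbols, edges to arguments."""
    if nodes is None:
        nodes, edges = [], []
    idx = len(nodes)
    nodes.append(node[0])
    if parent is not None:
        edges.append((parent, idx))
    for child in node[1:]:
        formula_graph(child, nodes, edges, idx)
    return nodes, edges

# Pattern-based: a cheap, order-insensitive bag-of-patterns vector.
print(Counter(term_walks(formula)))
# Counter({('p', 'f'): 1, ('p', 'g'): 1})

# Graph-based: structure kept explicit, ready for a graph neural network.
print(formula_graph(formula))
# (['p', 'f', 'x', 'g', 'x'], [(0, 1), (1, 2), (0, 3), (3, 4)])
```
The pattern features are fast to extract but discard argument order and subterm sharing, whereas the graph retains the structure a graph neural network can exploit (and becomes a DAG once repeated subterms are shared).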
Related papers
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset and also surpasses them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- Towards Efficient Fine-tuning of Pre-trained Code Models: An
Experimental Study and Beyond [52.656743602538825]
Fine-tuning pre-trained code models incurs a large computational cost.
We conduct an experimental study to explore what happens to layer-wise pre-trained representations and their encoded code knowledge during fine-tuning.
We propose Telly to efficiently fine-tune pre-trained code models via layer freezing.
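For illustration, layer freezing in this spirit can be sketched as follows (a generic example using Hugging Face Transformers; the model name and the layer cutoff are assumptions, not Telly's actual configuration):
```python
# Generic layer-freezing sketch using Hugging Face Transformers; the model name
# and the cutoff of 8 layers are illustrative assumptions, not Telly's setup.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/codebert-base")
FREEZE_BELOW = 8  # hypothetical cutoff: freeze embeddings and the first 8 encoder layers

for param in model.embeddings.parameters():
    param.requires_grad = False
for layer in model.encoder.layer[:FREEZE_BELOW]:
    for param in layer.parameters():
        param.requires_grad = False

# Only the unfrozen parameters are handed to the optimizer, which shrinks both
# the backward pass and the optimizer state compared to full fine-tuning.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```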
arXiv Detail & Related papers (2023-04-11T13:34:13Z)
- Fast Rule-Based Decoding: Revisiting Syntactic Rules in Neural
Constituency Parsing [9.858565876426411]
Previous research has demonstrated that probabilistic statistical methods based on syntactic rules are particularly effective in constituency parsing.
In this paper, we first implement a fast CKY decoding procedure harnessing GPU acceleration, based on which we further derive a syntactic rule-based (rule-constrained) CKY decoding.
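For reference, plain CKY recognition over a binarized grammar can be sketched as follows (an illustrative CPU-only version; the paper's contribution is a GPU-accelerated, rule-constrained variant of this procedure):
```python
# Minimal CKY recognizer over a binarized grammar (CPU-only, illustrative; the
# paper's contribution is a GPU-accelerated, rule-constrained variant of this).
from itertools import product

def cky_recognize(words, lexicon, rules, start="S"):
    """lexicon: word -> set of nonterminals; rules: (B, C) -> set of parents."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):               # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # try every split point
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= rules.get((B, C), set())
    return start in chart[0][n]

lexicon = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
rules = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cky_recognize(["she", "eats", "fish"], lexicon, rules))  # True
```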
arXiv Detail & Related papers (2022-12-16T13:07:09Z)
- Linear Temporal Logic Modulo Theories over Finite Traces (Extended
Version) [72.38188258853155]
This paper studies an extension of Linear Temporal Logic over Finite Traces (LTLf) in which
proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z)
- Proving Theorems using Incremental Learning and Hindsight Experience
Replay [45.277067974919106]
We propose a general incremental learning algorithm for training domain-specific provers for first-order logic without equality.
We adapt hindsight experience replay to theorem proving, so as to be able to learn even when no proof can be found.
We show that provers trained this way can match and sometimes surpass state-of-the-art traditional provers on the TPTP dataset.
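A highly simplified sketch of the hindsight-relabelling idea (illustrative only, not the paper's algorithm): clauses derived during a failed proof attempt are re-used as substitute goals, so the episode still yields positive training examples.
```python
# Illustrative hindsight-relabelling sketch (not the paper's algorithm): a failed
# proof attempt still derives clauses, and each derived clause can stand in as a
# substitute goal so the episode yields positive training examples anyway.

def hindsight_relabel(trajectory, derived, max_goals=4):
    """trajectory: list of (state, action) steps; derived: list of (step_index, clause)."""
    examples = []
    for step_index, clause in derived[:max_goals]:
        examples.append({
            "goal": clause,                          # pretend this was the goal all along
            "trajectory": trajectory[: step_index + 1],
            "success": True,
        })
    return examples

# Hypothetical usage: a three-step failed attempt that nevertheless derived two clauses.
traj = [("s0", "resolve"), ("s1", "factor"), ("s2", "resolve")]
print(hindsight_relabel(traj, [(1, "p(a) | q(b)"), (2, "q(b)")]))
```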
arXiv Detail & Related papers (2021-12-20T16:40:26Z)
- Learning to Guide a Saturation-Based Theorem Prover [9.228237801323042]
TRAIL is a deep learning-based approach to theorem proving that characterizes core elements of saturation-based theorem proving within a neural framework.
To the best of our knowledge, TRAIL is the first reinforcement learning-based approach to exceed the performance of a state-of-the-art traditional theorem prover.
arXiv Detail & Related papers (2021-06-07T18:35:57Z)
- Neural Proof Nets [0.8379286663107844]
We propose a neural variant of proof nets based on Sinkhorn networks, which allows us to cast parsing as the problem of extracting primitive types and permuting them into alignment.
We test our approach on AEThel, where it manages to correctly transcribe raw text sentences into proofs and terms of the linear lambda-calculus with an accuracy as high as 70%.
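The Sinkhorn component can be pictured in a few lines (a generic sketch of Sinkhorn normalisation, not the paper's model): repeated row and column normalisation of a score matrix yields a near-permutation matrix from which an alignment can be read off.
```python
# Generic Sinkhorn-normalisation sketch (not the paper's model): repeatedly
# normalising the rows and columns of exp(scores) gives a doubly-stochastic
# matrix that approximates the permutation aligning the two sides.
import numpy as np

def sinkhorn(scores, n_iters=20):
    m = np.exp(scores)
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)   # row normalisation
        m = m / m.sum(axis=0, keepdims=True)   # column normalisation
    return m

rng = np.random.default_rng(0)
p = sinkhorn(rng.normal(size=(4, 4)))
print(p.round(2))           # close to a permutation matrix
print(p.argmax(axis=1))     # the discrete alignment read off per row
```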
arXiv Detail & Related papers (2020-09-26T22:48:47Z)
- Learning Reasoning Strategies in End-to-End Differentiable Proving [50.9791149533921]
Conditional Theorem Provers learn an optimal rule selection strategy via gradient-based optimisation.
We show that Conditional Theorem Provers are scalable and yield state-of-the-art results on the CLUTRR dataset.
arXiv Detail & Related papers (2020-07-13T16:22:14Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
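A stripped-down sketch of this thresholding idea (illustrative, not the paper's exact algorithm or its guarantees): compute the leading principal direction, keep only its k largest-magnitude loadings, and renormalise.
```python
# Illustrative thresholding sketch for sparse PCA (not the paper's exact
# algorithm or its guarantees): keep only the k largest-magnitude loadings
# of the leading principal direction and renormalise.
import numpy as np

def thresholded_spca(X, k):
    cov = np.cov(X, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]                       # leading principal direction
    keep = np.argsort(np.abs(v))[-k:]        # indices of the k largest loadings
    sparse_v = np.zeros_like(v)
    sparse_v[keep] = v[keep]
    return sparse_v / np.linalg.norm(sparse_v)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
print(thresholded_spca(X, k=3))              # a unit vector with only 3 nonzeros
```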
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Option Discovery in the Absence of Rewards with Manifold Analysis [16.260113196512865]
We derive an algorithm that systematically discovers options without access to a specific reward or task assignment.
Unlike the common practice in previous methods, our algorithm makes full use of the spectrum of the graph Laplacian.
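As a rough illustration (a generic sketch, not the paper's algorithm), eigenvectors of the state-graph Laplacian give smooth functions over the state space whose extreme states can serve as option subgoals:
```python
# Generic sketch of spectral option discovery (not the paper's algorithm):
# Laplacian eigenvectors are smooth functions over the state graph, and their
# extreme states are natural candidates for option subgoals.
import numpy as np

def laplacian_eigenvectors(adjacency, n_vectors=3):
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs[:, 1 : n_vectors + 1]     # skip the constant eigenvector

# A four-state chain graph: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
fiedler = laplacian_eigenvectors(A, n_vectors=1).ravel()
print(fiedler.argmin(), fiedler.argmax())    # the chain's endpoints emerge as subgoals
```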
arXiv Detail & Related papers (2020-03-12T16:14:24Z)