ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system
description)
- URL: http://arxiv.org/abs/2002.05406v2
- Date: Tue, 28 Apr 2020 13:59:26 GMT
- Title: ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system
description)
- Authors: Jan Jakubův, Karel Chvalovský, Miroslav Olšák, Bartosz Piotrowski, Martin Suda, Josef Urban
- Abstract summary: We describe an implementation of gradient boosting and neural guidance of saturation-style automated theorem provers.
For the gradient-boosting guidance, we manually create abstracted features by considering arity-based encodings of formulas.
For the neural guidance, we use symbol-independent graph neural networks (GNNs) and their embedding of the terms and clauses.
- Score: 0.4893345190925177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe an implementation of gradient boosting and neural guidance of
saturation-style automated theorem provers that does not depend on consistent
symbol names across problems. For the gradient-boosting guidance, we manually
create abstracted features by considering arity-based encodings of formulas.
For the neural guidance, we use symbol-independent graph neural networks (GNNs)
and their embedding of the terms and clauses. The two methods are efficiently
implemented in the E prover and its ENIGMA learning-guided framework.
To provide competitive real-time performance of the GNNs, we have developed a
new context-based approach to evaluation of generated clauses in E. Clauses are
evaluated jointly in larger batches and with respect to a large number of
already selected clauses (context) by the GNN that estimates their collectively
most useful subset in several rounds of message passing. This means that
approximative inference rounds done by the GNN are efficiently interleaved with
precise symbolic inference rounds done inside E. The methods are evaluated on
the MPTP large-theory benchmark and shown to achieve comparable real-time
performance to state-of-the-art symbol-based methods. The methods also show
high complementarity, solving a large number of hard Mizar problems.
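The abstract's gradient-boosting guidance relies on symbol-independent, arity-based features of formulas. The following is a minimal illustrative sketch of that idea, not ENIGMA's actual feature code: the term representation (nested tuples), the walk depth of 3, and the function name `arity_walks` are all assumptions made for this example. Each symbol is replaced by its arity, so the extracted features carry over between problems that use different symbol names.

```python
from collections import Counter

def arity_walks(term, depth=3):
    """Count vertical walks of symbol arities in a term tree.

    A term is a tuple (symbol_name, arg1, arg2, ...); a constant is a
    1-tuple. Symbol names are discarded and replaced by their arity,
    which makes the resulting feature vector symbol-independent
    (a simplified stand-in for ENIGMA's arity-based encoding).
    """
    walks = Counter()

    def rec(t, path):
        arity = len(t) - 1                 # number of arguments
        path = (path + (arity,))[-depth:]  # keep last `depth` arities
        walks[path] += 1
        for arg in t[1:]:
            rec(arg, path)

    rec(term, ())
    return walks

# f(g(x), x) with arities f/2, g/1, x/0
features = arity_walks(("f", ("g", ("x",)), ("x",)))
```

Here `h(k(y), y)` with the same arities would produce an identical feature vector, which is exactly the invariance the abstract describes.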
Related papers
- Efficient Graph Similarity Computation with Alignment Regularization [7.143879014059894]
Graph similarity computation (GSC) is a learning-based prediction task using Graph Neural Networks (GNNs).
We show that high-quality learning can be attained with a simple yet powerful regularization technique, which we call Alignment Regularization (AReg).
In the inference stage, the graph-level representations learned by the GNN encoder are directly used to compute the similarity score without using AReg again to speed up inference.
arXiv Detail & Related papers (2024-06-21T07:37:28Z)
- LOGIN: A Large Language Model Consulted Graph Neural Network Training Framework [30.54068909225463]
We aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks.
We formulate a new paradigm, coined "LLMs-as-Consultants," which integrates LLMs with GNNs in an interactive manner.
We empirically evaluate the effectiveness of LOGIN on node classification tasks across both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2024-05-22T18:17:20Z)
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
- Optimal Inference in Contextual Stochastic Block Models [0.0]
The contextual stochastic block model (cSBM) was proposed for unsupervised community detection on attributed graphs.
The cSBM has been widely used as a synthetic dataset for evaluating the performance of graph-neural networks (GNNs) for semi-supervised node classification.
We show that there can be a considerable gap between the accuracy reached by this algorithm and the performance of the GNN architectures proposed in the literature.
arXiv Detail & Related papers (2023-06-06T10:02:57Z)
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- Graph Neural Networks for Contextual ASR with the Tree-Constrained Pointer Generator [9.053645441056256]
This paper proposes an innovative method for achieving end-to-end contextual ASR using graph neural network (GNN) encodings.
GNN encodings facilitate lookahead for future word pieces in the process of ASR decoding at each tree node.
The performance of the systems was evaluated using the Librispeech and AMI corpora, following the visually-grounded contextual ASR pipeline.
arXiv Detail & Related papers (2023-05-30T08:20:58Z)
- Neural Structured Prediction for Inductive Node Classification [29.908759584092167]
This paper studies node classification in the inductive setting, aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs.
We present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds.
arXiv Detail & Related papers (2022-04-15T15:50:27Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.