Machine Learning Meets The Herbrand Universe
- URL: http://arxiv.org/abs/2210.03590v1
- Date: Fri, 7 Oct 2022 14:46:32 GMT
- Title: Machine Learning Meets The Herbrand Universe
- Authors: Jelle Piepenbrock, Josef Urban, Konstantin Korovin, Miroslav
Olšák, Tom Heskes and Mikoláš Janota
- Abstract summary: Herbrand's theorem allows reduction of first-order problems to propositional problems by instantiation.
We develop the first machine learning system targeting this task.
We show that the trained system achieves high accuracy in predicting the right instances.
- Score: 1.5984927623688914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The appearance of strong CDCL-based propositional (SAT) solvers has greatly
advanced several areas of automated reasoning (AR). One of the directions in AR
is thus to apply SAT solvers to expressive formalisms such as first-order
logic, for which large corpora of general mathematical problems exist today.
This is possible due to Herbrand's theorem, which allows reduction of
first-order problems to propositional problems by instantiation. The core
challenge is choosing the right instances from the typically infinite Herbrand
universe. In this work, we develop the first machine learning system targeting
this task, addressing its combinatorial and invariance properties. In
particular, we develop a GNN2RNN architecture based on an invariant graph
neural network (GNN) that learns from problems and their solutions
independently of symbol names (addressing the abundance of skolems), combined
with a recurrent neural network (RNN) that proposes for each clause its
instantiations. The architecture is then trained on a corpus of mathematical
problems and their instantiation-based proofs, and its performance is evaluated
in several ways. We show that the trained system achieves high accuracy in
predicting the right instances, and that it is capable of solving many problems
by educated guessing when combined with a ground solver. To our knowledge, this
is the first convincing use of machine learning in synthesizing relevant
elements from arbitrary Herbrand universes.
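The reduction the abstract describes can be made concrete with a small sketch. This is not the authors' system, only a minimal illustration of the underlying idea: enumerate the Herbrand universe level by level for a toy signature, then ground a clause by substituting those terms — the combinatorial step that the paper's GNN2RNN model learns to guide by proposing promising instances. All names (`herbrand_terms`, `instantiate`, the signature with constant `a` and unary `f`) are illustrative assumptions.

```python
from itertools import product

def herbrand_terms(constants, functions, depth):
    """Enumerate the Herbrand universe up to a given term depth.

    constants: list of constant symbols, e.g. ["a"]
    functions: dict mapping function symbol -> arity, e.g. {"f": 1}
    """
    terms = list(constants)
    for _ in range(depth):
        new_terms = []
        for f, arity in functions.items():
            for args in product(terms, repeat=arity):
                t = f"{f}({','.join(args)})"
                if t not in terms and t not in new_terms:
                    new_terms.append(t)
        terms += new_terms
    return terms

def instantiate(clause, variables, terms):
    """Ground a clause (a string with free variables) under every
    substitution of Herbrand terms for its variables.
    Note: naive string replacement; fine for this toy signature."""
    ground = []
    for combo in product(terms, repeat=len(variables)):
        g = clause
        for v, t in zip(variables, combo):
            g = g.replace(v, t)
        ground.append(g)
    return ground

universe = herbrand_terms(["a"], {"f": 1}, depth=2)
# depth 2 over constant a and unary f: a, f(a), f(f(a))
print(universe)
print(instantiate("P(X) | ~P(f(X))", ["X"], universe))
```

Even for this tiny signature the universe grows with depth, and with more function symbols the blow-up is exponential — which is why choosing the few instances that actually close the proof, rather than grounding everything, is the task the learned model addresses.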
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average +5.5% MRR gain) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Graph Neural Networks for Propositional Model Counting [1.0152838128195467]
Graph Neural Networks (GNNs) have recently been leveraged to solve logical reasoning tasks.
We present an architecture based on the GNN framework for belief propagation (BP) of Kuck et al., extended with a self-attentive GNN and trained to approximately solve the #SAT problem.
arXiv Detail & Related papers (2022-05-09T17:03:13Z) - Graph Neural Networks are Dynamic Programmers [0.0]
Graph neural networks (GNNs) are claimed to align with dynamic programming (DP).
Here we show, using methods from theory and abstract algebra, that there exists an intricate connection between GNNs and DP.
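The GNN–DP connection this entry refers to is often introduced through a textbook example (not taken from the paper itself): one Bellman-Ford relaxation step is exactly one round of neighborhood message passing with min-aggregation. The graph and weights below are illustrative.

```python
def bf_step(dist, edges):
    """One DP relaxation = one message-passing round: each node
    aggregates (min) the messages dist[u] + w arriving over its
    incoming edges, like a GNN layer with min-pooling."""
    new = dict(dist)
    for u, v, w in edges:
        new[v] = min(new[v], dist[u] + w)
    return new

edges = [("s", "a", 1), ("a", "b", 2), ("s", "b", 5)]
dist = {"s": 0, "a": float("inf"), "b": float("inf")}
for _ in range(2):  # |V| - 1 rounds suffice for this graph
    dist = bf_step(dist, edges)
print(dist)  # shortest path s -> b goes via a: cost 3
```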
arXiv Detail & Related papers (2022-03-29T13:27:28Z) - Can Graph Neural Networks Learn to Solve MaxSAT Problem? [22.528432249712637]
We study the capability of GNNs in learning to solve the MaxSAT problem, from both theoretical and practical perspectives.
We build two kinds of GNN models to learn solutions of MaxSAT instances from benchmarks, and show through experimental evaluation that GNNs have attractive potential for solving the MaxSAT problem.
We also present, for the first time, a theoretical explanation of why GNNs can learn to solve the MaxSAT problem to some extent, based on algorithmic alignment theory.
arXiv Detail & Related papers (2021-11-15T07:33:33Z) - Transformer-based Machine Learning for Fast SAT Solvers and Logic
Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z) - Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks [130.70449023574537]
Our NS-r consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers.
Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning.
arXiv Detail & Related papers (2021-07-03T13:14:58Z) - Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z) - Machine Number Sense: A Dataset of Visual Arithmetic Problems for
Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z) - SpatialSim: Recognizing Spatial Configurations of Objects with Graph
Neural Networks [31.695447265278126]
We show how a machine can learn and compare classes of geometric spatial configurations that are invariant to the point of view of an external observer.
We propose SpatialSim (Spatial Similarity), a novel geometrical reasoning benchmark, and argue that progress on this benchmark would pave the way towards a general solution.
We also study how the inductive relational biases of fully-connected message-passing Graph Neural Networks (MPGNNs) help solve these tasks.
arXiv Detail & Related papers (2020-04-09T14:13:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.