Graph Neural Networks for Propositional Model Counting
- URL: http://arxiv.org/abs/2205.04423v1
- Date: Mon, 9 May 2022 17:03:13 GMT
- Title: Graph Neural Networks for Propositional Model Counting
- Authors: Gaia Saveri and Luca Bortolussi
- Abstract summary: Graph Neural Networks (GNNs) have been recently leveraged to solve logical reasoning tasks.
We present an architecture based on the GNN framework for belief propagation (BP) of Kuck et al., extended with a self-attentive GNN and trained to approximately solve the #SAT problem.
- Score: 1.0152838128195467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been recently leveraged to solve several
logical reasoning tasks. Nevertheless, counting problems such as propositional
model counting (#SAT) are still mostly approached with traditional solvers.
Here we tackle this gap by presenting an architecture based on the GNN
framework for belief propagation (BP) of Kuck et al., extended with a
self-attentive GNN and trained to approximately solve the #SAT problem. We ran
a thorough experimental investigation, showing that our model, trained on a
small set of random Boolean formulae, scales effectively to much larger problem
sizes, with performance comparable to or better than that of state-of-the-art
approximate solvers. Moreover, we show that it can be efficiently fine-tuned to
provide good generalization on different distributions of formulae, such as
those coming from SAT-encoded combinatorial problems.
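The recipe the abstract describes is: encode the CNF formula as a variable-clause factor graph, run several rounds of learned BP-style message passing, and regress the (log) model count with a self-attentive readout. The sketch below is a minimal, hypothetical PyTorch rendering of that recipe; the module names, dimensions, GRU-based updates and attention pooling are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of a BP-style GNN for approximate #SAT (illustrative only).
import torch
import torch.nn as nn

class BPStyleGNN(nn.Module):
    """Learned message passing over the variable-clause factor graph of a CNF
    formula, with a self-attentive readout that regresses log2(#models)."""

    def __init__(self, dim=64, iters=8):
        super().__init__()
        self.var_init = nn.Parameter(torch.randn(dim))   # shared initial variable state
        self.cls_init = nn.Parameter(torch.randn(dim))   # shared initial clause state
        self.msg = nn.Linear(dim + 1, dim)               # extra feature = literal polarity
        self.cls_upd = nn.GRUCell(dim, dim)              # clause state update
        self.var_upd = nn.GRUCell(dim, dim)              # variable state update
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.readout = nn.Linear(dim, 1)
        self.iters = iters

    def forward(self, n_vars, clauses):
        # clauses use DIMACS-style signed ints, e.g. [[1, -2], [2, 3]]
        V = self.var_init.repeat(n_vars, 1)
        C = self.cls_init.repeat(len(clauses), 1)
        edges = [(i, abs(l) - 1, float(l > 0)) for i, cl in enumerate(clauses) for l in cl]
        ci = torch.tensor([e[0] for e in edges])         # clause index of each edge
        vi = torch.tensor([e[1] for e in edges])         # variable index of each edge
        pol = torch.tensor([[e[2]] for e in edges])      # 1.0 = positive literal
        for _ in range(self.iters):
            # variable -> clause messages, summed per clause, then GRU update
            mc = torch.zeros_like(C).index_add_(0, ci, self.msg(torch.cat([V[vi], pol], 1)))
            C = self.cls_upd(mc, C)
            # clause -> variable messages, summed per variable, then GRU update
            mv = torch.zeros_like(V).index_add_(0, vi, self.msg(torch.cat([C[ci], pol], 1)))
            V = self.var_upd(mv, V)
        h, _ = self.attn(V[None], V[None], V[None])      # self-attention over variables
        return self.readout(h.mean(dim=1)).squeeze()     # estimated log2 model count
```

A sketch like this would be trained by regressing its output, e.g. with an MSE loss, against exact log-counts of small random formulae obtained from an exact #SAT solver, matching the small-to-large generalization setup the abstract describes.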
Related papers
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Neural Network Branch-and-Bound for Neural Network Verification [26.609606492971967]
We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
arXiv Detail & Related papers (2021-07-27T14:42:57Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that introduces some degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
- Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results [58.277290855841976]
We study tradeoffs between the computational cost and the expressive power of Graph Neural Networks (GNNs).
We show that a new model can count subgraphs of size $k$, and thereby overcomes a known limitation of low-order GNNs.
In several cases, the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNNs.
arXiv Detail & Related papers (2020-12-06T03:42:54Z)
- Learning Branching Heuristics for Propositional Model Counting [27.343084935521752]
#SAT is the problem of computing the number of satisfying assignments of a Boolean formula; a minimal brute-force counter is sketched after this list.
Neuro# is an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems.
arXiv Detail & Related papers (2020-07-07T05:20:29Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques: GNNs over dense fixed graphs for low-dimensional problems, and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as on learning approaches using fully connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making the industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
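As referenced in the Neuro# entry above, a minimal brute-force counter makes the #SAT definition concrete. This is a toy illustration only; the exact solvers that approaches like Neuro# accelerate rely on branching heuristics and caching rather than full enumeration.

```python
# Toy exact #SAT by exhaustive enumeration; for definition purposes only.
from itertools import product

def count_models(n_vars, clauses):
    """Count satisfying assignments of a CNF given as DIMACS-style signed ints."""
    count = 0
    for assignment in product([False, True], repeat=n_vars):
        # a literal l is satisfied when variable abs(l) matches the sign of l
        if all(any((l > 0) == assignment[abs(l) - 1] for l in cl) for cl in clauses):
            count += 1
    return count

# (x1 OR NOT x2) AND (x2 OR x3) has 4 satisfying assignments over 3 variables
print(count_models(3, [[1, -2], [2, 3]]))  # -> 4
```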
This list is automatically generated from the titles and abstracts of the papers on this site.