Can Graph Neural Networks Learn to Solve MaxSAT Problem?
- URL: http://arxiv.org/abs/2111.07568v1
- Date: Mon, 15 Nov 2021 07:33:33 GMT
- Title: Can Graph Neural Networks Learn to Solve MaxSAT Problem?
- Authors: Minghao Liu, Fuqi Jia, Pei Huang, Fan Zhang, Yuchen Sun, Shaowei Cai,
Feifei Ma, Jian Zhang
- Abstract summary: We study the capability of GNNs in learning to solve the MaxSAT problem, from both theoretical and practical perspectives.
We build two kinds of GNN models to learn the solutions of MaxSAT instances from benchmarks, and show through experimental evaluation that GNNs have attractive potential for solving the MaxSAT problem.
We also present, for the first time, a theoretical explanation, based on algorithmic alignment theory, of why GNNs can learn to solve the MaxSAT problem to some extent.
- Score: 22.528432249712637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of deep learning techniques, various recent work
has tried to apply graph neural networks (GNNs) to solve NP-hard problems such
as Boolean Satisfiability (SAT), which shows the potential in bridging the gap
between machine learning and symbolic reasoning. However, the quality of
solutions predicted by GNNs has not been well investigated in the literature.
In this paper, we study the capability of GNNs in learning to solve the
Maximum Satisfiability (MaxSAT) problem, from both theoretical and practical
perspectives. We build two kinds of GNN models to learn the solutions of
MaxSAT instances from benchmarks, and our experimental evaluation shows that
GNNs have attractive potential for solving the MaxSAT problem. We also
present, for the first time, a theoretical explanation, based on algorithmic
alignment theory, of why GNNs can learn to solve the MaxSAT problem to some
extent.
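The abstract does not fix a single graph encoding, but GNN approaches to SAT and MaxSAT typically operate on a literal-clause bipartite graph built from the CNF formula. A minimal sketch of that encoding follows; the node-indexing scheme is an illustrative assumption, not necessarily the paper's.

```python
# Sketch: encoding a CNF/MaxSAT instance as a literal-clause bipartite graph,
# a common input representation for GNN-based SAT/MaxSAT models.
# Literals use DIMACS style: v means variable x_v, -v means its negation.

def build_literal_clause_graph(clauses):
    """Return (literal, clause) edges.
    Literal node index: 2*(v-1) for x_v, 2*(v-1)+1 for -x_v."""
    def lit_index(lit):
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

    edges = []
    for c, clause in enumerate(clauses):
        for lit in clause:
            edges.append((lit_index(lit), c))
    return edges

# Example: (x1 v -x2) ^ (x2 v x3) ^ (-x1)
edges = build_literal_clause_graph([[1, -2], [2, 3], [-1]])
print(edges)  # [(0, 0), (3, 0), (2, 1), (4, 1), (1, 2)]
```

A message-passing GNN would then alternate updates between literal nodes and clause nodes along these edges.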
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
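As background, the classic Sinkhorn iteration that LinSATNet extends alternates row and column normalization of a positive matrix until it is approximately doubly stochastic. The multi-marginal extension described in the paper is not reproduced here; this is only the base algorithm.

```python
import numpy as np

def sinkhorn(M, n_iters=50):
    """Classic Sinkhorn iteration: repeatedly normalize rows, then columns,
    of a positive matrix until it is (approximately) doubly stochastic."""
    M = np.asarray(M, dtype=float)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # row normalization
        M = M / M.sum(axis=0, keepdims=True)  # column normalization
    return M

P = sinkhorn(np.array([[1.0, 2.0], [3.0, 4.0]]))
print(P)  # rows and columns each sum to ~1
```

Because every step is differentiable, the iteration can be used as a layer and trained end-to-end with backpropagation, which is the property the satisfiability layer relies on.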
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Combinatorial Optimization with Automated Graph Neural Networks [28.19349828026972]
We present a new class of AUTOmated GNNs for solving NP-hard CO problems, namely AutoGNP.
The idea of AutoGNP is to use graph neural architecture search algorithms to automatically find the best GNNs for a given NP-hard optimization problem.
arXiv Detail & Related papers (2024-06-05T02:43:41Z)
- torchmSAT: A GPU-Accelerated Approximation To The Maximum Satisfiability Problem [1.5850859526672516]
We derive a single differentiable function capable of approximating solutions for the Maximum Satisfiability Problem (MaxSAT).
We present a novel neural network architecture to model our differentiable function, and progressively solve MaxSAT using backpropagation.
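torchmSAT's actual architecture is not described in this snippet. The following is a generic sketch of the underlying idea: relax each Boolean variable to a probability, define a differentiable "soft" count of satisfied clauses, and ascend its gradient. The specific relaxation and all names are illustrative assumptions.

```python
import numpy as np

def soft_maxsat(clauses, num_vars, steps=300, lr=1.0, seed=0):
    """Gradient-ascent sketch of a differentiable MaxSAT relaxation.
    Literals use DIMACS style: v for x_v, -v for its negation."""
    theta = np.random.default_rng(seed).normal(scale=0.1, size=num_vars)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-theta))  # P(x_v = True)
        grad = np.zeros(num_vars)
        for clause in clauses:
            # Probability that each literal in the clause is False;
            # the clause's soft satisfaction is 1 - prod(miss).
            miss = np.array([1.0 - p[l - 1] if l > 0 else p[-l - 1] for l in clause])
            for i, l in enumerate(clause):
                others = np.prod(np.delete(miss, i))
                grad[abs(l) - 1] += others if l > 0 else -others
        theta += lr * grad * p * (1.0 - p)  # chain rule through the sigmoid
    return (1.0 / (1.0 + np.exp(-theta))) > 0.5  # round to a hard assignment

def num_satisfied(clauses, assignment):
    return sum(any(assignment[l - 1] if l > 0 else not assignment[-l - 1]
                   for l in c) for c in clauses)

# (x1) ^ (-x1) ^ (x2) ^ (x1 v x2): at most 3 of the 4 clauses are satisfiable.
clauses = [[1], [-1], [2], [1, 2]]
assignment = soft_maxsat(clauses, num_vars=2)
print(num_satisfied(clauses, assignment))
```

The rounding at the end is the simplest possible decoding; a real solver would typically combine the relaxation with local search or more careful rounding.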
arXiv Detail & Related papers (2024-02-06T02:33:00Z)
- G4SATBench: Benchmarking and Advancing SAT Solving with Graph Neural Networks [7.951021955925275]
Graph neural networks (GNNs) have emerged as a promising approach for solving the Boolean Satisfiability Problem (SAT).
G4SATBench is the first benchmark study that establishes a comprehensive evaluation framework for GNN-based SAT solvers.
Our results provide valuable insights into the performance of GNN-based SAT solvers.
arXiv Detail & Related papers (2023-09-29T02:50:57Z)
- Cracking nuts with a sledgehammer: when modern graph neural networks do worse than classical greedy algorithms [2.436681150766912]
We show that a simple greedy algorithm, running in almost linear time, can find solutions to the Maximum Independent Set (MIS) problem of much better quality than the GNN.
We see no good reason to solve the MIS problem with these GNNs, just as there is no good reason to use a sledgehammer to crack nuts.
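The greedy algorithm in question is, in essence, the classic minimum-degree heuristic: repeatedly take a vertex of lowest remaining degree into the independent set and delete it together with its neighbours. The naive sketch below is quadratic; the near-linear running time cited above requires a more careful implementation (e.g. bucketing vertices by degree).

```python
def greedy_mis(adj):
    """Minimum-degree greedy heuristic for Maximum Independent Set.
    adj: dict mapping vertex -> set of neighbouring vertices."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # lowest remaining degree
        independent.append(v)
        removed = adj.pop(v) | {v}               # drop v and its neighbours
        for u in list(adj):
            if u in removed:
                del adj[u]
            else:
                adj[u] -= removed                # prune edges to removed nodes
    return independent

# Path graph 1-2-3-4-5: the optimum independent set is {1, 3, 5}.
print(greedy_mis({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}))
```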
arXiv Detail & Related papers (2022-06-27T11:54:01Z)
- Graph Neural Networks for Propositional Model Counting [1.0152838128195467]
Graph Neural Networks (GNNs) have been recently leveraged to solve logical reasoning tasks.
We present an architecture based on the GNN framework for belief propagation (BP) of Kuck et al., extended with a self-attentive GNN and trained to approximately solve the #SAT problem.
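#SAT asks for the number of satisfying assignments of a propositional formula; this is the quantity the BP-based GNN learns to approximate. For small formulas it can be computed exactly by enumeration. The brute-force reference below (exponential in the number of variables) is illustrative only.

```python
from itertools import product

def count_models(clauses, num_vars):
    """Exact #SAT by enumerating all 2^num_vars assignments.
    Literals use DIMACS style: v for x_v, -v for its negation."""
    count = 0
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[l - 1] if l > 0 else not bits[-l - 1] for l in c)
               for c in clauses):
            count += 1
    return count

# (x1 v x2) ^ (-x1 v x2): satisfied exactly when x2 = True, x1 free.
print(count_models([[1, 2], [-1, 2]], 2))  # 2
```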
arXiv Detail & Related papers (2022-05-09T17:03:13Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum and a six DoF robot arm show GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
- Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLNs is computationally intensive, making industrial-scale application of MLNs very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.