NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks
- URL: http://arxiv.org/abs/2110.14053v7
- Date: Wed, 8 May 2024 18:23:10 GMT
- Title: NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks
- Authors: Wenxi Wang, Yang Hu, Mohit Tiwari, Sarfraz Khurshid, Kenneth McMillan, Risto Miikkulainen
- Abstract summary: Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security.
Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs).
This paper proposes an approach called NeuroBack, which builds on two insights: (1) predicting the phases (i.e., values) of variables appearing in the majority (or even all) of the satisfying assignments is essential for CDCL SAT solving, and (2) it is sufficient to query the neural model only once for the predictions before the SAT solving starts.
- Score: 13.185943308470286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security. Mainstream modern SAT solvers are based on the Conflict-Driven Clause Learning (CDCL) algorithm. Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs). However, so far this approach either has not made solving more effective, or required substantial GPU resources for frequent online model inferences. Aiming to make GNN improvements practical, this paper proposes an approach called NeuroBack, which builds on two insights: (1) predicting the phases (i.e., values) of variables appearing in the majority (or even all) of the satisfying assignments is essential for CDCL SAT solving, and (2) it is sufficient to query the neural model only once for the predictions before the SAT solving starts. Once trained, the offline model inference allows NeuroBack to execute exclusively on the CPU, removing its reliance on GPU resources. To train NeuroBack, a new dataset called DataBack containing 120,286 data samples is created. NeuroBack is implemented as an enhancement to a state-of-the-art SAT solver called Kissat. As a result, it allowed Kissat to solve up to 5.2% and 7.4% more problems on two recent SAT competition problem sets, SATCOMP-2022 and SATCOMP-2023, respectively. NeuroBack therefore shows how machine learning can be harnessed to improve SAT solving in an effective and practical manner.
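For context, the offline, query-once workflow described in the abstract can be sketched as follows. This is only an illustrative sketch, not NeuroBack's actual code: the graph encoding, the GNN architecture, and the Kissat integration are placeholders, and the model checkpoint name and featurization are assumptions.

```python
# Illustrative sketch of a single offline phase-prediction query before CDCL solving.
# The model, featurization, and solver hookup are placeholders, not NeuroBack's code.
import torch

def parse_dimacs(path):
    """Read a DIMACS CNF file into (num_vars, list of clauses of signed ints)."""
    clauses, num_vars = [], 0
    with open(path) as f:
        for line in f:
            if line.startswith(("c", "%")) or not line.strip():
                continue
            if line.startswith("p cnf"):
                num_vars = int(line.split()[2])
                continue
            clauses.append([int(tok) for tok in line.split()[:-1]])  # drop trailing 0
    return num_vars, clauses

def predict_phases(model, num_vars, clauses):
    """One CPU-only forward pass mapping each variable to a suggested phase."""
    # Hypothetical featurization: per-variable positive/negative occurrence counts.
    feats = torch.zeros(num_vars, 2)
    for clause in clauses:
        for lit in clause:
            feats[abs(lit) - 1, 0 if lit > 0 else 1] += 1
    with torch.no_grad():
        logits = model(feats)                      # assumed: (num_vars,) phase logits
    return (torch.sigmoid(logits) > 0.5).tolist()  # True/False phase per variable

num_vars, clauses = parse_dimacs("instance.cnf")
model = torch.load("phase_model.pt", map_location="cpu")  # hypothetical checkpoint
phases = predict_phases(model, num_vars, clauses)
# The predicted phases would then be handed to the CDCL solver, e.g., as initial
# saved phases in Kissat; that solver-side integration is not shown here.
```

The point of the sketch is the control flow the paper argues for: a single inference before solving starts, so no GPU is needed during search.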
Related papers
- Self-Satisfied: An end-to-end framework for SAT generation and prediction [0.7340017786387768]
We introduce hardware accelerated algorithms for fast SAT problem generation and a geometric SAT encoding.
These advances allow us to scale our approach to SAT problems with thousands of variables and tens of thousands of clauses.
A fundamental aspect of our work concerns the very nature of SAT data and its suitability for training machine learning models.
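For reference, large-scale SAT data generation usually starts from something like a plain random k-SAT sampler. The sketch below shows only that baseline; the paper's hardware-accelerated generation and geometric encoding are different and not reproduced here.

```python
# Plain random k-SAT generator emitting DIMACS CNF; illustration only.
import random

def random_ksat(num_vars: int, num_clauses: int, k: int = 3, seed: int = 0) -> str:
    """Return a random k-SAT instance in DIMACS CNF format."""
    rng = random.Random(seed)
    lines = [f"p cnf {num_vars} {num_clauses}"]
    for _ in range(num_clauses):
        variables = rng.sample(range(1, num_vars + 1), k)                 # k distinct vars
        clause = [v if rng.random() < 0.5 else -v for v in variables]     # random polarity
        lines.append(" ".join(map(str, clause)) + " 0")
    return "\n".join(lines)

print(random_ksat(num_vars=20, num_clauses=85, k=3))
```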
arXiv Detail & Related papers (2024-10-18T22:25:54Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
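The classic Sinkhorn iteration that this layer extends can be sketched as alternating row and column normalization of a positive matrix; the multi-set marginal extension proposed in the paper is more involved and not shown.

```python
# Classic Sinkhorn normalization: push a positive matrix toward a doubly stochastic
# one by alternately normalizing rows and columns. LinSATNet's layer generalizes this
# to jointly encode multiple sets of marginals; that generalization is not shown here.
import numpy as np

def sinkhorn(scores: np.ndarray, n_iters: int = 50, eps: float = 1e-9) -> np.ndarray:
    """Return an approximately doubly stochastic matrix from positive scores."""
    mat = np.asarray(scores, dtype=float) + eps     # keep entries strictly positive
    for _ in range(n_iters):
        mat = mat / mat.sum(axis=1, keepdims=True)  # rows sum to 1
        mat = mat / mat.sum(axis=0, keepdims=True)  # columns sum to 1
    return mat

p = sinkhorn(np.random.rand(4, 4))
print(p.sum(axis=0), p.sum(axis=1))  # both close to all-ones
```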
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection [45.222591894755105]
We present GraSS, a novel approach for automatic SAT solver selection based on tripartite graph representations of instances.
We enrich the graph representation with domain-specific decisions, such as novel node feature design.
We demonstrate that this combination of raw representations and domain-specific choices leads to improvements in runtime.
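The selection step itself is standard algorithm selection: score each candidate solver on an instance and run the one with the smallest predicted runtime. The skeleton below shows only that step; the per-solver runtime models, the tripartite graph encoding, and the GNN used in GraSS are placeholders, not the paper's architecture.

```python
# Algorithm-selection skeleton: pick the solver with the lowest predicted runtime.
# The runtime models and instance features here are toy placeholders.
from typing import Callable, Dict, List

def select_solver(
    instance_features: List[float],
    runtime_models: Dict[str, Callable[[List[float]], float]],
) -> str:
    """Return the solver name with the smallest predicted runtime."""
    predictions = {name: model(instance_features) for name, model in runtime_models.items()}
    return min(predictions, key=predictions.get)

# Toy usage with hypothetical linear runtime models over two hand-crafted features
# (e.g., clause/variable ratio and mean clause length).
models = {
    "solver_a": lambda f: 2.0 * f[0] + 0.5 * f[1],
    "solver_b": lambda f: 1.5 * f[0] + 1.0 * f[1],
}
print(select_solver([4.2, 3.0], models))
```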
arXiv Detail & Related papers (2024-05-17T18:00:50Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100$\times$ as compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
- A hybrid Quantum proposal to deal with 3-SAT problem [75.38606213726906]
This paper presents and describes a hybrid quantum computing strategy for solving 3-SAT problems.
The performance of this approximation has been tested over a set of representative scenarios when dealing with 3-SAT from the quantum computing perspective.
arXiv Detail & Related papers (2023-06-07T12:19:22Z)
- Addressing Variable Dependency in GNN-based SAT Solving [19.38746341365531]
We propose AsymSAT, a GNN-based architecture which integrates recurrent neural networks to generate dependent predictions for variable assignments.
Experimental results show that dependent variable prediction extends the solving capability of the GNN-based method, improving the number of solved SAT instances on large test sets.
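A minimal sketch of the dependent-prediction idea: a recurrent decoder assigns variables one at a time and feeds each decision back in, so later predictions can depend on earlier ones. The embedding sizes and per-variable inputs below are illustrative assumptions; AsymSAT's actual GNN and RNN coupling are not reproduced.

```python
# Sketch of dependent variable-assignment prediction with a GRU decoder.
import torch
import torch.nn as nn

class SequentialAssigner(nn.Module):
    def __init__(self, var_dim: int = 16, hidden_dim: int = 32):
        super().__init__()
        self.cell = nn.GRUCell(var_dim + 1, hidden_dim)  # +1 for the previous assignment
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, var_embeddings: torch.Tensor) -> torch.Tensor:
        """var_embeddings: (num_vars, var_dim), e.g., produced by a GNN over the formula."""
        hidden = torch.zeros(1, self.cell.hidden_size)
        prev_assignment = torch.zeros(1, 1)
        probs = []
        for emb in var_embeddings:                      # decode variables in a fixed order
            step_input = torch.cat([emb.unsqueeze(0), prev_assignment], dim=1)
            hidden = self.cell(step_input, hidden)
            prob = torch.sigmoid(self.readout(hidden))  # P(variable = True)
            prev_assignment = (prob > 0.5).float()      # feed the decision back in
            probs.append(prob.squeeze())
        return torch.stack(probs)

print(SequentialAssigner()(torch.randn(5, 16)).shape)  # torch.Size([5])
```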
arXiv Detail & Related papers (2023-04-18T05:33:33Z)
- DeepSAT: An EDA-Driven Learning Framework for SAT [9.111341161918375]
We present DeepSAT, a novel end-to-end learning framework for the Boolean satisfiability (SAT) problem.
DeepSAT achieves significant accuracy improvements over state-of-the-art learning-based SAT solutions.
arXiv Detail & Related papers (2022-05-27T03:20:42Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
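Whatever model produces the one-shot assignment, its quality on unweighted MaxSAT is simply the number of clauses it satisfies. The sketch below shows only that scoring step; the Transformer that would produce the assignment is not shown.

```python
# Scoring a predicted assignment for (unweighted) MaxSAT: count satisfied clauses.
from typing import Dict, List

def maxsat_score(clauses: List[List[int]], assignment: Dict[int, bool]) -> int:
    """Number of clauses satisfied by the assignment (variable -> truth value)."""
    satisfied = 0
    for clause in clauses:
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            satisfied += 1
    return satisfied

clauses = [[1, -2], [2, 3], [-1, -3], [1, 2, 3]]
assignment = {1: True, 2: False, 3: True}
print(maxsat_score(clauses, assignment))  # 3 of the 4 clauses satisfied
```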
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Enhancing SAT solvers with glue variable predictions [2.635832975589208]
We modify the state-of-the-art SAT solver CaDiCaL, improving its performance on SATCOMP 2018 and SATRACE 2019 with supervised learning and its performance on a dataset of SHA-1 preimage attacks with reinforcement learning.
We address the limitations of prior neural enhancements by using a simpler network architecture that allows CPU inference even for large industrial problems with millions of clauses, and by training instead to predict glue variables, a target for which it is easier to generate labelled data and which can also be formulated as a reinforcement learning task.
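The "glue" of a learned clause is usually taken to be its literal block distance (LBD): the number of distinct decision levels among its literals. A small sketch of that computation, independent of the paper's network or training setup:

```python
# Glue (literal block distance, LBD) of a learned clause: the number of distinct
# decision levels among its literals. CDCL solvers tend to keep low-glue clauses;
# "glue variables" are variables occurring in such clauses.
from typing import Dict, List

def glue(clause: List[int], decision_level: Dict[int, int]) -> int:
    """LBD of a clause given each variable's current decision level."""
    return len({decision_level[abs(lit)] for lit in clause})

levels = {1: 2, 2: 2, 3: 5, 4: 7}        # variable -> decision level
print(glue([1, -2, 3, -4], levels))      # 3 distinct levels -> glue = 3
```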
arXiv Detail & Related papers (2020-07-06T07:16:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.