Goal-Aware Neural SAT Solver
- URL: http://arxiv.org/abs/2106.07162v1
- Date: Mon, 14 Jun 2021 04:51:24 GMT
- Title: Goal-Aware Neural SAT Solver
- Authors: Emils Ozolins, Karlis Freivalds, Andis Draguns, Eliza Gaile, Ronalds
Zakovskis, Sergejs Kozlovics
- Abstract summary: Modern neural networks obtain information about the problem and calculate the output solely from the input values.
We argue that this is not always optimal, and the network's performance can be significantly improved by augmenting it with a query mechanism.
We propose a neural SAT solver with a query mechanism called QuerySAT and show that it outperforms the neural baseline on a wide range of SAT tasks.
- Score: 2.609784101826762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern neural networks obtain information about the problem and calculate the
output solely from the input values. We argue that this is not always optimal,
and the network's performance can be significantly improved by augmenting it
with a query mechanism that allows the network to make several solution trials
at run time and get feedback on the loss value on each trial. To demonstrate
the capabilities of the query mechanism, we formulate an unsupervised (not
dependent on labels) loss function for the Boolean Satisfiability Problem (SAT) and
theoretically show that it allows the network to extract rich information about
the problem. We then propose a neural SAT solver with a query mechanism called
QuerySAT and show that it outperforms the neural baseline on a wide range of
SAT tasks and the classical baselines on SHA-1 preimage attack and 3-SAT task.
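To make the idea of a label-free SAT objective concrete, here is a minimal sketch of a differentiable clause-satisfaction loss over a soft variable assignment. This is an illustration in the spirit of the abstract, not the paper's exact formulation; the function names and the log-based aggregation are our assumptions.

```python
import numpy as np

def clause_satisfaction(x, clauses):
    """Differentiable satisfied-ness of each clause under a soft
    assignment x in [0, 1]^n. Clauses use DIMACS-style signed ints:
    literal v means variable v-1 appears positively, -v negated."""
    vals = []
    for clause in clauses:
        unsat = 1.0  # score that every literal in the clause fails
        for lit in clause:
            v = abs(lit) - 1
            # a positive literal fails with weight (1 - x), a negated one with x
            unsat *= (1.0 - x[v]) if lit > 0 else x[v]
        vals.append(1.0 - unsat)
    return np.array(vals)

def unsupervised_sat_loss(x, clauses, eps=1e-8):
    """Label-free loss: near zero iff every clause is close to satisfied."""
    return -np.log(clause_satisfaction(x, clauses) + eps).sum()
```

A query mechanism in this spirit would propose several soft assignments per step, evaluate this loss on each, and feed the loss values back into the network as extra input.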
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
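For context, the classic Sinkhorn algorithm that this line refers to alternately rescales the rows and columns of a positive matrix until both match prescribed marginals, and every step is differentiable. A minimal NumPy sketch of that classic iteration (not LinSATNet's actual layer; parameter names are ours):

```python
import numpy as np

def sinkhorn(scores, row_marginals, col_marginals, n_iters=100, tau=1.0):
    """Project exp(scores / tau) onto matrices with the given row and
    column sums by alternating row and column rescaling."""
    P = np.exp(scores / tau)
    for _ in range(n_iters):
        P *= (row_marginals / P.sum(axis=1))[:, None]  # match row sums
        P *= (col_marginals / P.sum(axis=0))[None, :]  # match column sums
    return P
```

With unit marginals this yields a doubly stochastic matrix, the building block that differentiable satisfiability layers extend to multiple jointly encoded marginal constraints.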
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z) - G4SATBench: Benchmarking and Advancing SAT Solving with Graph Neural Networks [7.951021955925275]
Graph neural networks (GNNs) have emerged as a promising approach for solving the Boolean Satisfiability Problem (SAT).
G4SATBench is the first benchmark study that establishes a comprehensive evaluation framework for GNN-based SAT solvers.
Our results provide valuable insights into the performance of GNN-based SAT solvers.
arXiv Detail & Related papers (2023-09-29T02:50:57Z) - Addressing Variable Dependency in GNN-based SAT Solving [19.38746341365531]
We propose AsymSAT, a GNN-based architecture which integrates recurrent neural networks to generate dependent predictions for variable assignments.
Experimental results show that dependent variable prediction extends the solving capability of the GNN-based method, increasing the number of solved SAT instances on large test sets.
arXiv Detail & Related papers (2023-04-18T05:33:33Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic
Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z) - Transformer-based Machine Learning for Fast SAT Solvers and Logic
Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z) - Techniques for Symbol Grounding with SATNet [19.70820500506061]
We present a self-supervised pre-training pipeline that enables SATNet to overcome the Symbol Grounding Problem.
We demonstrate that our method allows SATNet to attain full accuracy even with a harder problem setup.
We additionally introduce a proofreading method that further improves the performance of SATNet architectures.
arXiv Detail & Related papers (2021-06-16T18:42:12Z) - Enhancing SAT solvers with glue variable predictions [2.635832975589208]
We modify the state-of-the-art SAT solver CaDiCaL, improving its performance on SATCOMP 2018 and SATRACE 2019 with supervised learning and its performance on a dataset of SHA-1 preimage attacks with reinforcement learning.
We address these limitations by using a simpler network architecture that allows CPU inference even for large industrial problems with millions of clauses, and by training instead to predict *glue variables*, a target for which it is easier to generate labelled data and which can also be formulated as a reinforcement learning task.
arXiv Detail & Related papers (2020-07-06T07:16:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.