SATformer: Transformer-Based UNSAT Core Learning
- URL: http://arxiv.org/abs/2209.00953v2
- Date: Tue, 12 Mar 2024 02:43:40 GMT
- Title: SATformer: Transformer-Based UNSAT Core Learning
- Authors: Zhengyuan Shi (1), Min Li (1), Yi Liu (1), Sadaf Khan (1), Junhua
Huang (2), Hui-Ling Zhen (2), Mingxuan Yuan (2), Qiang Xu (1) ((1) The
Chinese University of Hong Kong, (2) Huawei Noah's Ark Lab)
- Abstract summary: This paper introduces SATformer, a Transformer-based approach for the Boolean Satisfiability (SAT) problem.
Rather than solving the problem directly, SATformer approaches the problem from the opposite direction by focusing on unsatisfiability.
SATformer is trained through a multi-task learning approach, using the single-bit satisfiability result and the minimal unsatisfiable core.
Experimental results show that our SATformer can decrease the runtime of existing solvers by an average of 21.33%.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces SATformer, a novel Transformer-based approach for the
Boolean Satisfiability (SAT) problem. Rather than solving the problem directly,
SATformer approaches the problem from the opposite direction by focusing on
unsatisfiability. Specifically, it models clause interactions to identify any
unsatisfiable sub-problems. Using a graph neural network, we convert clauses
into clause embeddings and employ a hierarchical Transformer-based model to
understand clause correlation. SATformer is trained through a multi-task
learning approach, using the single-bit satisfiability result and the minimal
unsatisfiable core (MUC) for UNSAT problems as clause supervision. As an
end-to-end learning-based satisfiability classifier, the performance of
SATformer surpasses that of NeuroSAT significantly. Furthermore, we integrate
the clause predictions made by SATformer into modern heuristic-based SAT
solvers and validate our approach with a logic equivalence checking task.
Experimental results show that our SATformer can decrease the runtime of
existing solvers by an average of 21.33%.
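To make the pipeline above concrete, here is a minimal PyTorch sketch of the multi-task setup: clause embeddings (a simple mean-pool over literal embeddings standing in for the paper's GNN), a Transformer encoder modelling clause interactions (the hierarchical grouping is collapsed into a single encoder for brevity), and two heads supervised by the satisfiability bit and the MUC clause labels. All layer sizes, names, and the toy formula are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a SATformer-style multi-task model (illustrative, not the authors' code).
# Assumptions: clause embeddings are a mean-pool over literal embeddings (a stand-in for the
# paper's GNN), and a single TransformerEncoder replaces the hierarchical Transformer.
import torch
import torch.nn as nn

class SATformerSketch(nn.Module):
    def __init__(self, n_vars, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        # One embedding per literal: index 2*v for x_v, 2*v + 1 for ~x_v.
        self.lit_emb = nn.Embedding(2 * n_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.core_head = nn.Linear(d_model, 1)   # per-clause: in the predicted UNSAT core?
        self.sat_head = nn.Linear(d_model, 1)    # single-bit SAT/UNSAT prediction

    def forward(self, clauses):
        # clauses: list of lists of literal indices for one CNF instance.
        clause_vecs = torch.stack([self.lit_emb(torch.tensor(c)).mean(dim=0)
                                   for c in clauses])           # (n_clauses, d_model)
        h = self.encoder(clause_vecs.unsqueeze(0)).squeeze(0)   # clause interactions
        core_logits = self.core_head(h).squeeze(-1)             # (n_clauses,)
        sat_logit = self.sat_head(h.mean(dim=0))                # (1,)
        return sat_logit, core_logits

# Multi-task loss: satisfiability bit plus clause-level core supervision (UNSAT cases only).
def multitask_loss(sat_logit, core_logits, is_sat, core_mask, alpha=0.5):
    bce = nn.functional.binary_cross_entropy_with_logits
    loss = bce(sat_logit, is_sat)
    if is_sat.item() == 0.0:  # UNSAT: clause-level core labels are available
        loss = loss + alpha * bce(core_logits, core_mask)
    return loss

# Toy usage on a satisfiable 3-variable formula: (x0 | x1) & (~x0 | x2) & (~x1 | ~x2).
model = SATformerSketch(n_vars=3)
clauses = [[0, 2], [1, 4], [3, 5]]  # literal index 2*v (positive) / 2*v + 1 (negated)
sat_logit, core_logits = model(clauses)
loss = multitask_loss(sat_logit, core_logits,
                      is_sat=torch.tensor([1.0]),
                      core_mask=torch.zeros(len(clauses)))
loss.backward()
```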
Related papers
- Self-Satisfied: An end-to-end framework for SAT generation and prediction [0.7340017786387768]
We introduce hardware-accelerated algorithms for fast SAT problem generation and a geometric SAT encoding.
These advances allow us to scale our approach to SAT problems with thousands of variables and tens of thousands of clauses.
A fundamental aspect of our work concerns the very nature of SAT data and its suitability for training machine learning models.
arXiv Detail & Related papers (2024-10-18T22:25:54Z)
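As a generic illustration of the SAT problem generation mentioned in the entry above, the sketch below produces random k-SAT instances in DIMACS form; it is a plain random generator, not the paper's hardware-accelerated algorithms or its geometric encoding.

```python
# Plain random k-SAT generator emitting DIMACS CNF (a generic illustration only;
# not the paper's hardware-accelerated generation or geometric encoding).
import random

def random_ksat(n_vars, n_clauses, k=3, seed=0):
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), k)    # k distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def to_dimacs(n_vars, clauses):
    lines = ["p cnf %d %d" % (n_vars, len(clauses))]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

# ~4.26 clauses per variable puts random 3-SAT near the hard satisfiability threshold.
print(to_dimacs(20, random_ksat(20, 85)))
```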
- GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection [45.222591894755105]
We present GraSS, a novel approach for automatic SAT solver selection based on tripartite graph representations of instances.
We enrich the graph representation with domain-specific decisions, such as novel node feature design.
We demonstrate that this combination of raw representations and domain-specific choices leads to improvements in runtime.
arXiv Detail & Related papers (2024-05-17T18:00:50Z)
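The tripartite representation in the GraSS entry above can be pictured with the sketch below, which builds a variable / literal / clause graph with simple hand-crafted node features; the concrete node types and features are assumptions for illustration, not GraSS's actual design.

```python
# One plausible tripartite graph encoding of a CNF instance (variable, literal and
# clause nodes) with simple hand-crafted node features. The node types and features
# are illustrative assumptions, not GraSS's actual design.
import networkx as nx

def cnf_to_tripartite(n_vars, clauses):
    g = nx.Graph()
    for v in range(1, n_vars + 1):
        g.add_node(("var", v), kind="variable")
        for lit in (v, -v):
            # Literal feature: how often the literal occurs in the formula.
            occ = sum(lit in c for c in clauses)
            g.add_node(("lit", lit), kind="literal", occurrences=occ)
            g.add_edge(("var", v), ("lit", lit))
    for i, clause in enumerate(clauses):
        g.add_node(("cls", i), kind="clause", width=len(clause))
        for lit in clause:
            g.add_edge(("cls", i), ("lit", lit))
    return g

g = cnf_to_tripartite(3, [[1, 2], [-1, 3], [-2, -3]])
print(g.number_of_nodes(), g.number_of_edges())  # 12 nodes, 12 edges
```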
- Decomposing Hard SAT Instances with Metaheuristic Optimization [52.03315747221343]
We introduce the notion of decomposition hardness (d-hardness).
We show that the d-hardness expresses an estimate of the hardness of $C$ w.r.t.
arXiv Detail & Related papers (2023-12-16T12:44:36Z)
- A hybrid Quantum proposal to deal with 3-SAT problem [75.38606213726906]
This paper presents a hybrid quantum computing strategy for solving 3-SAT problems.
The performance of this approach has been tested over a set of representative scenarios when dealing with 3-SAT from the quantum computing perspective.
arXiv Detail & Related papers (2023-06-07T12:19:22Z)
- Evolutionary Diversity Optimisation in Constructing Satisfying Assignments [20.386139395296215]
This paper studies the Boolean satisfiability problem (SAT) in the context of evolutionary diversity optimisation (EDO).
SAT is of great importance in computer science and differs from other problems studied in the EDO literature, such as the knapsack problem (KP) and the travelling salesman problem (TSP).
We introduce evolutionary algorithms employing a well-known SAT solver to maximise diversity among a set of SAT solutions explicitly.
arXiv Detail & Related papers (2023-05-19T06:26:10Z)
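A much-simplified version of the idea in the entry above: collect mutually distinct satisfying assignments with an off-the-shelf solver (python-sat's Glucose3) and measure their spread. The paper's evolutionary algorithms and diversity objective are not reproduced here.

```python
# Simplified illustration of gathering diverse satisfying assignments with an
# off-the-shelf SAT solver (python-sat). Each found model is blocked so that later
# calls must differ from it; this is a crude proxy, not the paper's evolutionary
# algorithm or diversity measure.
from pysat.solvers import Glucose3

def diverse_models(clauses, n_models=5):
    models = []
    with Glucose3(bootstrap_with=clauses) as solver:
        while len(models) < n_models and solver.solve():
            model = solver.get_model()
            models.append(model)
            # Block this exact assignment so the next model differs somewhere.
            solver.add_clause([-lit for lit in model])
    return models

def mean_hamming(models):
    # Average pairwise Hamming distance, a simple diversity proxy.
    pairs = [(a, b) for i, a in enumerate(models) for b in models[i + 1:]]
    return sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs) / max(len(pairs), 1)

cnf = [[1, 2], [-1, 3], [-2, -3]]
models = diverse_models(cnf)
print(len(models), mean_hamming(models))
```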
- Addressing Variable Dependency in GNN-based SAT Solving [19.38746341365531]
We propose AsymSAT, a GNN-based architecture which integrates recurrent neural networks to generate dependent predictions for variable assignments.
Experimental results show that dependent variable prediction extends the solving capability of the GNN-based method, increasing the number of solved SAT instances on large test sets.
arXiv Detail & Related papers (2023-04-18T05:33:33Z)
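The dependent-prediction idea in the AsymSAT entry above can be sketched as a GRU that decodes variable assignments one at a time, feeding each predicted bit into the next step; the embedding source, dimensions, and wiring are placeholder assumptions rather than AsymSAT's architecture.

```python
# Minimal sketch of sequentially dependent assignment prediction: a GRU walks over
# per-variable embeddings (placeholders for GNN outputs) and conditions each predicted
# bit on the previous one. Dimensions and wiring are illustrative assumptions only.
import torch
import torch.nn as nn

class DependentAssigner(nn.Module):
    def __init__(self, d_emb=64, d_hidden=64):
        super().__init__()
        # Input at each step: the variable's embedding plus the previously predicted bit.
        self.cell = nn.GRUCell(d_emb + 1, d_hidden)
        self.head = nn.Linear(d_hidden, 1)

    def forward(self, var_embs):
        h = torch.zeros(var_embs.size(0), self.cell.hidden_size)
        prev_bit = torch.zeros(var_embs.size(0), 1)
        bits = []
        for t in range(var_embs.size(1)):              # one variable per step
            x = torch.cat([var_embs[:, t], prev_bit], dim=-1)
            h = self.cell(x, h)
            prob = torch.sigmoid(self.head(h))          # P(variable_t = True | earlier bits)
            prev_bit = (prob > 0.5).float()
            bits.append(prev_bit)
        return torch.cat(bits, dim=-1)                  # (batch, n_vars) 0/1 assignment

# Toy usage: batch of 2 formulas with 5 variables; random embeddings stand in for GNN output.
assigner = DependentAssigner()
fake_gnn_out = torch.randn(2, 5, 64)
print(assigner(fake_gnn_out))   # dependent 0/1 predictions, one bit per variable
```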
- Estimating the hardness of SAT encodings for Logical Equivalence Checking of Boolean circuits [58.83758257568434]
We show that the hardness of SAT encodings for LEC instances can be estimated w.r.t. some SAT partitioning.
The paper proposes several methods for constructing partitionings, which, when used in practice, allow one to estimate the hardness of SAT encodings for LEC with good accuracy.
arXiv Detail & Related papers (2022-10-04T09:19:13Z)
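One way to picture hardness estimation w.r.t. a partitioning, as in the entry above: split the search space into assumption cubes over a few variables, time the solver on a sample of cubes, and extrapolate. The cube selection and the estimator below are illustrative assumptions, not the constructions proposed in the paper.

```python
# Sketch of estimating SAT-encoding hardness w.r.t. a partitioning: split the search
# space by assumption "cubes" over a few variables, time the solver on a sample of
# cubes, and scale up. Cube choice and estimator are illustrative assumptions only.
import itertools
import random
import time
from pysat.solvers import Glucose3

def estimate_hardness(clauses, cube_vars, sample_size=8, seed=0):
    cubes = list(itertools.product(*[(v, -v) for v in cube_vars]))  # 2^|cube_vars| cubes
    sample = random.Random(seed).sample(cubes, min(sample_size, len(cubes)))
    total = 0.0
    with Glucose3(bootstrap_with=clauses) as solver:
        for cube in sample:
            start = time.perf_counter()
            solver.solve(assumptions=list(cube))   # solve the sub-problem under this cube
            total += time.perf_counter() - start
    # Extrapolate the mean per-cube time to the whole partitioning.
    return (total / len(sample)) * len(cubes)

cnf = [[1, 2, 3], [-1, -2], [-2, -3], [-1, -3]]
print(estimate_hardness(cnf, cube_vars=[1, 2]))
```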
- A Neural Network-based SAT-Resilient Obfuscation Towards Enhanced Logic Locking [3.076761061950216]
We propose a neural network-based unSAT clause translator, SATConda.
SATConda incurs a minimal area and power overhead while preserving the original functionality with impenetrable security.
Our proposed SATConda is evaluated on ISCAS85 and ISCAS89 benchmarks.
arXiv Detail & Related papers (2022-09-13T07:59:27Z)
- DeepSAT: An EDA-Driven Learning Framework for SAT [9.111341161918375]
We present DeepSAT, a novel end-to-end learning framework for the Boolean satisfiability (SAT) problem.
DeepSAT achieves significant accuracy improvements over state-of-the-art learning-based SAT solutions.
arXiv Detail & Related papers (2022-05-27T03:20:42Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
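A compact sketch of a one-shot assignment predictor in the spirit of the last entry: a single Transformer pass over crude per-variable occurrence features emits an assignment for every variable at once, scored by the number of satisfied clauses. The token design and model size are placeholder assumptions, not the paper's model.

```python
# Compact sketch of a one-shot assignment predictor for (unweighted) MaxSAT: one
# Transformer pass over per-variable occurrence features decides every variable at
# once. Token design and model size are placeholder assumptions.
import torch
import torch.nn as nn

class OneShotMaxSAT(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.inp = nn.Linear(2, d_model)  # per-variable features: [#positive, #negative occurrences]
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, clauses, n_vars):
        feats = torch.zeros(n_vars, 2)
        for clause in clauses:
            for lit in clause:
                feats[abs(lit) - 1, 0 if lit > 0 else 1] += 1
        h = self.encoder(self.inp(feats).unsqueeze(0)).squeeze(0)
        # One forward pass decides every variable simultaneously ("one-shot").
        return torch.sigmoid(self.head(h)).squeeze(-1) > 0.5

def n_satisfied(clauses, assignment):
    # A clause is satisfied if at least one of its literals agrees with the assignment.
    return sum(any((lit > 0) == bool(assignment[abs(lit) - 1]) for lit in clause)
               for clause in clauses)

clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
model = OneShotMaxSAT()
with torch.no_grad():
    assignment = model(clauses, n_vars=3)
print(n_satisfied(clauses, assignment), "of", len(clauses), "clauses satisfied")
```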
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.