Enhancing SAT solvers with glue variable predictions
- URL: http://arxiv.org/abs/2007.02559v1
- Date: Mon, 6 Jul 2020 07:16:49 GMT
- Title: Enhancing SAT solvers with glue variable predictions
- Authors: Jesse Michael Han
- Abstract summary: We address these limitations with a simpler network architecture that allows CPU inference even for large industrial problems with millions of clauses, and by training instead to predict *glue variables*, a target for which labelled data is easier to generate and which can also be formulated as a reinforcement learning task. We demonstrate the approach by modifying the state-of-the-art SAT solver CaDiCaL, improving its performance on SATCOMP 2018 and SATRACE 2019 with supervised learning and on a dataset of SHA-1 preimage attacks with reinforcement learning.
- Score: 2.635832975589208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern SAT solvers routinely operate at scales that make it impractical to
query a neural network for every branching decision. NeuroCore, proposed by
Selsam and Bjørner, offered a proof of concept that neural networks can still
accelerate SAT solvers by only periodically refocusing a score-based branching
heuristic. However, that work suffered from several limitations: their modified
solvers require GPU acceleration, further ablations showed that they were no
better than a random baseline on the SATCOMP 2018 benchmark, and their training
target of unsat cores required an expensive data pipeline which only labels
relatively easy unsatisfiable problems. We address all these limitations, using
a simpler network architecture allowing CPU inference for even large industrial
problems with millions of clauses, and training instead to predict *glue
variables*, a target for which it is easier to generate labelled data, and
which can also be formulated as a reinforcement learning task. We demonstrate
the effectiveness of our approach by modifying the state-of-the-art SAT solver
CaDiCaL, improving its performance on SATCOMP 2018 and SATRACE 2019 with
supervised learning and its performance on a dataset of SHA-1 preimage attacks
with reinforcement learning.
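As a rough illustration of the refocusing idea described above, the Python sketch below keeps an ordinary score-based branching heuristic and only periodically bumps the scores of variables a model predicts to be glue variables. All names, constants, and the toy predictions are hypothetical stand-ins, not CaDiCaL's actual interface.

```python
# Minimal sketch of periodic refocusing: the solver branches on scores as
# usual; a model's glue predictions only occasionally rescale those scores.
from typing import Dict, List

def refocus(scores: Dict[int, float], glue_probs: Dict[int, float],
            boost: float = 100.0) -> None:
    # Bump each variable's branching score by its predicted glue probability.
    for var, p in glue_probs.items():
        scores[var] += boost * p

def pick_branch(scores: Dict[int, float], unassigned: List[int]) -> int:
    # The solver's usual behavior: branch on the highest-scoring variable.
    return max(unassigned, key=lambda v: scores[v])

scores = {1: 0.5, 2: 3.0, 3: 1.0}
print(pick_branch(scores, [1, 2, 3]))         # -> 2 (classical heuristic)
# Once every few thousand conflicts, model predictions refocus the scores.
refocus(scores, {1: 0.05, 2: 0.10, 3: 0.95})
print(pick_branch(scores, [1, 2, 3]))         # -> 3 (predicted glue variable)
```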
Related papers
- Self-Satisfied: An end-to-end framework for SAT generation and prediction [0.7340017786387768]
We introduce hardware-accelerated algorithms for fast SAT problem generation and a geometric SAT encoding.
These advances allow us to scale our approach to SAT problems with thousands of variables and tens of thousands of clauses.
A fundamental aspect of our work concerns the very nature of SAT data and its suitability for training machine learning models.
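For intuition about the data-generation side only, here is a plain random k-SAT generator; the paper's hardware-accelerated generation and geometric encoding are not reproduced, and all names are illustrative.

```python
# A minimal random k-SAT generator (CPU, unoptimized) for intuition only.
import random

def random_ksat(n_vars: int, n_clauses: int, k: int = 3, seed: int = 0):
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        # k distinct variables per clause, each negated with probability 1/2.
        vs = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

print(random_ksat(n_vars=5, n_clauses=3))
```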
arXiv Detail & Related papers (2024-10-18T22:25:54Z)
- Machine Learning for SAT: Restricted Heuristics and New Graph Representations [0.8870188183999854]
SAT is a fundamental NP-complete problem with many applications, including automated scheduling.
To solve large instances, SAT solvers have to rely on heuristics, e.g., choosing a branching variable in DPLL and CDCL solvers.
We suggest a strategy of making a few initial steps with a trained ML model and then releasing control to classical heuristics, as in the sketch below.
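A minimal sketch of that hand-off, with `model` and the score table as assumed stand-ins:

```python
# The model picks the first few branching variables, then the classical
# score-based heuristic takes over, so the model is queried O(1) times.
from typing import Callable, Dict, List

def pick_variable(step: int, unassigned: List[int], scores: Dict[int, float],
                  model: Callable[[List[int]], int], ml_steps: int = 10) -> int:
    if step < ml_steps:
        return model(unassigned)                     # early ML decisions
    return max(unassigned, key=lambda v: scores[v])  # classical heuristic

scores = {1: 2.0, 2: 9.0, 3: 4.0}
model = lambda cands: min(cands)                     # toy stand-in "model"
print(pick_variable(0, [1, 2, 3], scores, model))    # -> 1 (model decides)
print(pick_variable(50, [1, 2, 3], scores, model))   # -> 2 (heuristic decides)
```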
arXiv Detail & Related papers (2023-07-18T10:46:28Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
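SPIDE itself is spike-based, but the underlying recipe of equilibrium states plus implicit differentiation can be sketched generically: iterate to a fixed point, then obtain gradients from the implicit function theorem instead of backpropagating through the iterations. Everything below is a generic toy, not the paper's method.

```python
# Generic equilibrium-model sketch (not SPIDE's spike-based formulation).
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = 0.3 * rng.standard_normal((n, n))  # small weights keep the map contractive
U = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Forward: iterate to the equilibrium state z* = tanh(W z* + U x).
z = np.zeros(n)
for _ in range(100):
    z = np.tanh(W @ z + U @ x)

# Backward via the implicit function theorem, for L = 0.5 * ||z*||^2:
# solve (I - J)^T lam = dL/dz*, where J = df/dz at z* (no unrolling needed).
d = 1.0 - np.tanh(W @ z + U @ x) ** 2        # tanh'(a) at the equilibrium
J = d[:, None] * W                            # diag(d) @ W
lam = np.linalg.solve((np.eye(n) - J).T, z)   # here dL/dz* = z*
dLdW = np.outer(d * lam, z)                   # dL/dW from the adjoint lam
print(dLdW.round(3))
```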
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks [13.185943308470286]
Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security.
Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs).
This paper proposes an approach called NeuroBack, which builds on two insights: (1) predicting the phases (i.e., values) of variables that appear in the majority (or even all) of the satisfying assignments is essential for CDCL SAT solving, and (2) it is sufficient to query the neural model for these predictions only once, before SAT solving starts.
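A minimal sketch of the second insight, with a stand-in model queried exactly once before search to seed variable polarities:

```python
# One-shot phase prediction: query the model once before search and use its
# output as initial variable polarities. The model here is a toy stand-in.
from typing import Callable, Dict

def initial_phases(n_vars: int,
                   model: Callable[[int], Dict[int, float]]) -> Dict[int, bool]:
    probs = model(n_vars)                # P(var = True), queried exactly once
    return {v: probs[v] >= 0.5 for v in range(1, n_vars + 1)}

model = lambda n: {v: 0.9 if v % 2 else 0.1 for v in range(1, n + 1)}
print(initial_phases(4, model))  # {1: True, 2: False, 3: True, 4: False}
```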
arXiv Detail & Related papers (2021-10-26T22:08:22Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models of increasing complexity and associate each of them with a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
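A simplified sketch of the selection loop: each arm corresponds to one HEC layer's detection model, and the learner trades detection reward against model cost. A context-free epsilon-greedy learner stands in for the paper's contextual policy network; all rewards and costs are invented.

```python
# Epsilon-greedy bandit choosing among detection models of rising complexity.
import random

class EpsilonGreedySelector:
    def __init__(self, n_models: int, eps: float = 0.1):
        self.eps, self.n = eps, n_models
        self.counts = [0] * n_models
        self.values = [0.0] * n_models   # running mean reward per arm

    def select(self) -> int:
        if random.random() < self.eps:
            return random.randrange(self.n)          # explore
        return max(range(self.n), key=lambda a: self.values[a])  # exploit

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

sel = EpsilonGreedySelector(n_models=3)
for _ in range(1000):
    arm = sel.select()
    # Assumed reward: accuracy minus a cost that grows with model complexity.
    reward = [0.6, 0.8, 0.85][arm] - 0.1 * arm + random.gauss(0, 0.05)
    sel.update(arm, reward)
print("preferred model:", sel.select())
```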
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Goal-Aware Neural SAT Solver [2.609784101826762]
Modern neural networks obtain information about the problem and calculate the output solely from the input values.
We argue that this is not always optimal and that the network's performance can be significantly improved by augmenting it with a query mechanism.
We propose a neural SAT solver with a query mechanism called QuerySAT and show that it outperforms the neural baseline on a wide range of SAT tasks.
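The query mechanism can be pictured as follows: the network proposes an assignment, the formula is evaluated on it, and per-clause feedback flows back into the network. Only a hypothetical evaluation step is sketched here; the network itself is omitted.

```python
# Evaluate a queried assignment against a CNF formula, clause by clause.
from typing import Dict, List

def query(clauses: List[List[int]], assignment: Dict[int, bool]) -> List[bool]:
    # A clause is a list of signed ints; literal l is true when the sign of l
    # matches the assigned value of variable |l|.
    def lit_true(lit: int) -> bool:
        return assignment[abs(lit)] == (lit > 0)
    return [any(lit_true(l) for l in clause) for clause in clauses]

cnf = [[1, -2], [2, 3], [-1, -3]]
print(query(cnf, {1: True, 2: True, 3: True}))  # [True, True, False]
```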
arXiv Detail & Related papers (2021-06-14T04:51:24Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
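A minimal sketch of distillation on the final prediction distribution, with stand-in logits for the real-valued teacher and the binary student:

```python
# KL distillation of a (binary) student toward a (real-valued) teacher's
# prediction distribution. Logits below are invented stand-ins.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_distill_loss(teacher_logits, student_logits, eps=1e-9) -> float:
    # KL(teacher || student) over the final prediction distribution.
    p, q = softmax(teacher_logits), softmax(student_logits)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

t = np.array([2.0, 0.5, -1.0])   # real-valued teacher logits
s = np.array([1.5, 0.8, -0.5])   # binary student's logits
print(kl_distill_loss(t, s))     # small positive number; 0 iff p == q
```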
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the variance of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
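A greatly simplified version of the idea: pick an initial scale for a layer's weights so that a single SGD step yields the lowest loss. One scalar and a grid search stand in for GradInit's learned per-layer scales; the data and model are toys.

```python
# Choose an init scale by the loss reached after exactly one SGD step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 8))
y = rng.standard_normal(32)
w0 = rng.standard_normal(8)

def loss_after_one_sgd_step(scale: float, lr: float = 0.1) -> float:
    w = scale * w0
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
    w = w - lr * grad                        # a single SGD step
    return float(np.mean((X @ w - y) ** 2))

scales = [0.01, 0.1, 0.5, 1.0, 2.0]
print("chosen init scale:", min(scales, key=loss_after_one_sgd_step))
```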
arXiv Detail & Related papers (2021-02-16T11:45:35Z)