Learning Branching Heuristics for Propositional Model Counting
- URL: http://arxiv.org/abs/2007.03204v2
- Date: Thu, 8 Sep 2022 21:47:20 GMT
- Title: Learning Branching Heuristics for Propositional Model Counting
- Authors: Pashootan Vaezipoor, Gil Lederman, Yuhuai Wu, Chris J. Maddison, Roger
Grosse, Sanjit A. Seshia, Fahiem Bacchus
- Abstract summary: #SAT is the problem of computing the number of satisfying assignments of a Boolean formula.
Neuro# is an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems.
- Score: 27.343084935521752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Propositional model counting, or #SAT, is the problem of computing the number
of satisfying assignments of a Boolean formula. Many problems from different
application areas, including many discrete probabilistic inference problems,
can be translated into model counting problems to be solved by #SAT solvers.
Exact #SAT solvers, however, are often not scalable to industrial size
instances. In this paper, we present Neuro#, an approach for learning branching
heuristics to improve the performance of exact #SAT solvers on instances from a
given family of problems. We experimentally show that our method reduces the
step count on similarly distributed held-out instances and generalizes to much
larger instances from the same problem family. It is able to achieve these
results on a number of different problem families having very different
structures. In addition to step count improvements, Neuro# can also achieve
orders of magnitude wall-clock speedups over the vanilla solver on larger
instances in some problem families, despite the runtime overhead of querying
the model.
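The branching decisions the abstract refers to happen at the variable-selection step of the exact solver's search. Below is a minimal, hypothetical Python sketch of a DPLL-style counter that only marks where such a policy plugs in; the paper's actual solver additionally performs unit propagation, component decomposition, and caching, and every name here (count_models, pick_branch_var, most_frequent_var) is illustrative rather than taken from Neuro#.

```python
# Minimal DPLL-style exact #SAT sketch with a pluggable branching heuristic.
# This is NOT Neuro#'s implementation; it only marks the decision point that
# a learned policy would replace.
from typing import Callable, Dict, List, Optional

Clause = List[int]  # non-zero literals, e.g. [1, -2] means (x1 OR NOT x2)

def simplify(clauses: List[Clause], var: int, value: bool) -> Optional[List[Clause]]:
    """Assign var := value; drop satisfied clauses, shrink the rest.
    Returns None on a conflict (some clause becomes empty)."""
    out = []
    for c in clauses:
        if (var if value else -var) in c:          # clause satisfied
            continue
        reduced = [lit for lit in c if abs(lit) != var]
        if not reduced:                            # clause falsified
            return None
        out.append(reduced)
    return out

def count_models(clauses: List[Clause], free_vars: List[int],
                 pick_branch_var: Callable[[List[Clause], List[int]], int]) -> int:
    """Exact model count by branching; free_vars must cover all variables."""
    if not clauses:                    # all clauses satisfied:
        return 2 ** len(free_vars)     # every leftover variable is free
    var = pick_branch_var(clauses, free_vars)      # <-- the branching heuristic
    rest = [v for v in free_vars if v != var]
    total = 0
    for value in (False, True):
        reduced = simplify(clauses, var, value)
        if reduced is not None:
            total += count_models(reduced, rest, pick_branch_var)
    return total

def most_frequent_var(clauses: List[Clause], free_vars: List[int]) -> int:
    """A classical hand-coded heuristic; a learned policy (e.g. a neural net
    scoring the residual formula) would be a drop-in replacement."""
    counts = {v: 0 for v in free_vars}
    for c in clauses:
        for lit in c:
            counts[abs(lit)] += 1
    return max(free_vars, key=counts.__getitem__)

# (x1 OR x2) AND (NOT x1 OR x3) over {x1, x2, x3} has exactly 4 models.
print(count_models([[1, 2], [-1, 3]], [1, 2, 3], most_frequent_var))  # -> 4
```

A learned branching heuristic is simply a different pick_branch_var; the paper's point is that choosing it well shrinks the search tree enough to pay for the runtime cost of querying the model.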
Related papers
- GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection [45.222591894755105]
We present GraSS, a novel approach for automatic SAT solver selection based on tripartite graph representations of instances.
We enrich the graph representation with domain-specific decisions, such as novel node feature design.
We demonstrate that this combination of raw representations and domain-specific choices leads to improvements in runtime (a generic graph-encoding sketch in this spirit appears after this list).
arXiv Detail & Related papers (2024-05-17T18:00:50Z)
- Using deep learning to construct stochastic local search SAT solvers with performance bounds [0.0]
We train oracles using Graph Neural Networks and evaluate them on two SLS solvers on random SAT instances of varying difficulty.
We find that access to GNN-based oracles significantly boosts the performance of both solvers.
arXiv Detail & Related papers (2023-09-20T16:27:52Z)
- Machine Learning for SAT: Restricted Heuristics and New Graph Representations [0.8870188183999854]
SAT is a fundamental NP-complete problem with many applications, including automated scheduling.
To solve large instances, SAT solvers have to rely on heuristics, e.g., when choosing a branching variable in DPLL and CDCL solvers.
We suggest a strategy of making a few initial steps with a trained ML model and then releasing control to classical heuristics.
arXiv Detail & Related papers (2023-07-18T10:46:28Z)
- RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches heuristic and learnable baselines.
arXiv Detail & Related papers (2023-05-23T20:15:56Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Graph Neural Networks for Propositional Model Counting [1.0152838128195467]
Graph Neural Networks (GNNs) have recently been leveraged to solve logical reasoning tasks.
We present an architecture based on the GNN framework for belief propagation (BP) of Kuck et al., extended with self-attentive GNN and trained to approximately solve the #SAT problem (a plain BP message-passing sketch of the kind such architectures refine appears after this list).
arXiv Detail & Related papers (2022-05-09T17:03:13Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Provable Reinforcement Learning with a Short-Term Memory [68.00677878812908]
We study a new subclass of POMDPs, whose latent states can be decoded by the most recent history of a short length $m$.
In particular, in the rich-observation setting, we develop new algorithms using a novel "moment matching" approach with a sample complexity that scales exponentially only in the short length $m$ rather than in the full problem horizon.
Our results show that a short-term memory suffices for reinforcement learning in these environments.
arXiv Detail & Related papers (2022-02-08T16:39:57Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets for neural combinatorial solvers only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
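Several entries in this list (GraSS, and the GNN model counter in particular) operate on graph encodings of CNF formulas. The following hypothetical Python sketch builds one common typed encoding with variable, literal, and clause nodes; GraSS's exact tripartite construction and node features are not given in the summary above, so this layout is an illustrative assumption only.

```python
# Hypothetical typed-graph encoding of a CNF formula for GNN consumption.
# The exact tripartition and node features used by GraSS are not reproduced
# here; real systems attach a feature vector to every node.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Clause = List[int]  # non-zero literals, e.g. [1, -2] means (x1 OR NOT x2)

@dataclass
class CNFGraph:
    node_type: Dict[str, str] = field(default_factory=dict)   # node id -> type
    edges: List[Tuple[str, str]] = field(default_factory=list)

def cnf_to_graph(clauses: List[Clause]) -> CNFGraph:
    g = CNFGraph()
    for ci, clause in enumerate(clauses):
        c_node = f"c{ci}"
        g.node_type[c_node] = "clause"
        for lit in clause:
            v_node, l_node = f"v{abs(lit)}", f"l{lit}"
            g.node_type[v_node] = "variable"
            g.node_type[l_node] = "literal"
            g.edges.append((v_node, l_node))   # variable <-> its literal
            g.edges.append((l_node, c_node))   # literal occurs in clause
    return g

g = cnf_to_graph([[1, 2], [-1, 3]])
print(sorted(set(g.edges)))   # a GNN would pass messages over these edges
```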
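The "Graph Neural Networks for Propositional Model Counting" entry builds on belief propagation over the formula's factor graph. For orientation, here is a plain, non-neural loopy-BP sketch using the standard closed-form clause-to-variable messages for SAT; Kuck et al.'s BP neural network learns to refine message passing of this kind, and nothing learned is shown here. It assumes each variable occurs at most once per clause.

```python
# Plain loopy belief propagation on a SAT factor graph (no neural parts).
# Returns approximate marginals P(x_v = True) under the uniform measure
# restricted by clause indicator factors; assumes each variable appears at
# most once per clause.
from typing import Dict, List, Tuple

Clause = List[int]  # non-zero literals, e.g. [1, -2] means (x1 OR NOT x2)

def bp_marginals(clauses: List[Clause], n_vars: int,
                 iters: int = 100) -> Dict[int, float]:
    # q[(ci, v)] = current message-based probability that v takes the value
    # falsifying its literal in clause ci; start uninformative.
    q: Dict[Tuple[int, int], float] = {
        (ci, abs(l)): 0.5 for ci, cl in enumerate(clauses) for l in cl}
    occurs: Dict[int, List[Tuple[int, int]]] = {v: [] for v in range(1, n_vars + 1)}
    for ci, cl in enumerate(clauses):
        for l in cl:
            occurs[abs(l)].append((ci, l))

    def unsat_side(ci: int, v: int) -> float:
        # Clause->variable message, evaluated at the value falsifying v's
        # literal in clause ci (the satisfying side is identically 1):
        #   m = 1 - prod over the clause's other variables of their q.
        p = 1.0
        for l in clauses[ci]:
            if abs(l) != v:
                p *= q[(ci, abs(l))]
        return 1.0 - p

    for _ in range(iters):
        for v, occ in occurs.items():
            for ci, lit in occ:
                n_u = n_s = 1.0   # weight of falsifying / satisfying value
                for cj, lit2 in occ:
                    if cj == ci:
                        continue
                    m = unsat_side(cj, v)
                    if (lit > 0) == (lit2 > 0):
                        n_u *= m  # same sign: ci's falsifying value falsifies cj's literal too
                    else:
                        n_s *= m  # opposite sign: ci's satisfying value falsifies cj's literal
                denom = n_u + n_s
                q[(ci, v)] = n_u / denom if denom > 0 else 0.5

    marg = {}
    for v, occ in occurs.items():
        b_true = b_false = 1.0
        for ci, lit in occ:
            m = unsat_side(ci, v)
            if lit > 0:
                b_false *= m      # x_v = False falsifies a positive literal
            else:
                b_true *= m       # x_v = True falsifies a negative literal
        denom = b_true + b_false
        marg[v] = b_true / denom if denom > 0 else 0.5
    return marg

# (x1 OR x2) AND (NOT x1 OR x3): this factor graph is a tree, so BP is exact.
print(bp_marginals([[1, 2], [-1, 3]], n_vars=3))
# -> approximately {1: 0.5, 2: 0.75, 3: 0.75}, up to float rounding,
#    matching the 4 satisfying assignments of the formula.
```

The Bethe free energy computed from such converged messages yields an approximate log model count on loopy instances; the neural variants learn corrections where plain BP is inaccurate.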
This list is automatically generated from the titles and abstracts of the papers on this site.