AlphaMapleSAT: An MCTS-based Cube-and-Conquer SAT Solver for Hard Combinatorial Problems
- URL: http://arxiv.org/abs/2401.13770v1
- Date: Wed, 24 Jan 2024 19:37:10 GMT
- Title: AlphaMapleSAT: An MCTS-based Cube-and-Conquer SAT Solver for Hard Combinatorial Problems
- Authors: Piyush Jha, Zhengyu Li, Zhengyang Lu, Curtis Bright, Vijay Ganesh
- Abstract summary: This paper introduces AlphaMapleSAT, a novel Monte Carlo Tree Search (MCTS) based Cube-and-Conquer (CnC) SAT solving method.
By contrast, our key innovation is a deductively-driven MCTS-based lookahead cubing technique that performs a deeper search to find effective cubes.
- Score: 13.450216199781671
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces AlphaMapleSAT, a novel Monte Carlo Tree Search (MCTS)
based Cube-and-Conquer (CnC) SAT solving method aimed at efficiently solving
challenging combinatorial problems. Despite the tremendous success of CnC
solvers in solving a variety of hard combinatorial problems, the lookahead
cubing techniques at the heart of CnC have not evolved much for many years.
Part of the reason is the sheer difficulty of coming up with new cubing
techniques that are both low-cost and effective in partitioning input formulas
into sub-formulas, such that the overall runtime is minimized.
Lookahead cubing techniques used by current state-of-the-art CnC solvers,
such as March, keep their cubing costs low by constraining the search for the
optimal splitting variables. By contrast, our key innovation is a
deductively-driven MCTS-based lookahead cubing technique that performs a
deeper heuristic search to find effective cubes, while keeping the cubing cost
low. We perform an extensive comparison of AlphaMapleSAT against the March CnC
solver on challenging combinatorial problems such as the minimum Kochen-Specker
and Ramsey problems. We also perform ablation studies to verify the efficacy of
the MCTS heuristic search for the cubing problem. Results show up to 2.3x
speedup in parallel (and up to 27x in sequential) elapsed real time.
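The deduction-driven MCTS lookahead can be sketched in Python. The sketch below is a deliberately simplified, flat (one-level) bandit variant, not AlphaMapleSAT's implementation: every candidate split literal is an arm, and the reward is the fraction of variables fixed by unit propagation, standing in for the paper's deductive cubing metric. The toy CNF encoding and all function names are our own.

```python
import math

def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses.
    Returns (extended assignment, conflict flag)."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (val > 0) == (lit > 0):
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return assignment, True      # empty clause: conflict
            if len(unassigned) == 1:
                forced = unassigned[0]
                assignment[abs(forced)] = 1 if forced > 0 else -1
                changed = True
    return assignment, False

def mcts_pick_split(clauses, variables, assignment, iters=200, c=1.4):
    """Pick a split literal with a UCT-style bandit whose reward is the
    fraction of variables fixed by unit propagation -- a stand-in for a
    deductive lookahead metric. A flat, one-level sketch of MCTS cubing."""
    stats = {lit: [0, 0.0]                   # lit -> [visits, total reward]
             for v in variables if v not in assignment
             for lit in (v, -v)}
    total = 0
    for _ in range(iters):
        def uct(lit):
            n, w = stats[lit]
            if n == 0:
                return float("inf")          # explore untried literals first
            return w / n + c * math.sqrt(math.log(total + 1) / n)
        lit = max(stats, key=uct)            # selection
        trial = dict(assignment)
        trial[abs(lit)] = 1 if lit > 0 else -1
        propagated, conflict = unit_propagate(clauses, trial)   # rollout
        reward = 1.0 if conflict else len(propagated) / len(variables)
        stats[lit][0] += 1                   # backpropagation
        stats[lit][1] += reward
        total += 1
    return max(stats, key=lambda l: stats[l][1] / max(stats[l][0], 1))
```

On a real instance one would recurse: commit the chosen literal, simplify the formula, and repeat until the resulting sub-formulas (cubes) are easy enough to hand to the conquer phase.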
Related papers
- Smart Cubing for Graph Search: A Comparative Study [15.989407883187962]
Parallel solving via cube-and-conquer is a key method for scaling SAT solvers to hard instances.
While cube-and-conquer has proven successful for pure SAT problems, its application to SAT solvers extended with propagators presents unique challenges.
We study this problem using SAT Modulo Symmetries (SMS) as our primary test case, where a symmetry-breaking propagator reduces the search space by learning constraints that eliminate isomorphic graphs.
arXiv Detail & Related papers (2025-01-27T22:15:54Z)
- Learning a Prior for Monte Carlo Search by Replaying Solutions to Combinatorial Problems [4.561007128508218]
We propose a simple and general method to automatically compute a prior, incurring no computational cost at playout time.
The method is applied to three difficult problems: Latin Square Completion, Kakuro, and Inverse RNA Folding.
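The replay idea admits a very small sketch (our own illustration, not the paper's code): actions appearing in previously found solutions are tallied and normalized into a prior that can bias playouts at zero extra cost per playout.

```python
from collections import Counter

def learn_prior(replays):
    """Count how often each action occurs in replayed solutions and
    normalize the counts into a playout prior. At playout time this
    is a free table lookup."""
    counts = Counter(action for replay in replays for action in replay)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}
```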
arXiv Detail & Related papers (2024-01-19T00:22:31Z)
- A Novel Normalized-Cut Solver with Nearest Neighbor Hierarchical Initialization [107.07093621337084]
Normalized-Cut (N-Cut) is a famous model of spectral clustering.
Traditional N-Cut solvers are two-stage: 1) computing the continuous spectral embedding of the normalized Laplacian matrix; 2) discretizing it via $K$-means or spectral rotation.
We propose a novel N-Cut solver based on the well-known coordinate descent method.
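The two-stage pipeline is easy to illustrate with NumPy. The sketch below is a minimal, hypothetical implementation of the classic baseline the paper improves on (the function name and the deterministic farthest-point seeding are our own), using $K$-means for the discretization stage:

```python
import numpy as np

def two_stage_ncut(W, k, iters=50):
    """Two-stage N-Cut sketch: (1) spectral embedding from the normalized
    Laplacian, (2) discretization by Lloyd's k-means."""
    d = W.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_is[:, None] * W * d_is[None, :]  # I - D^-1/2 W D^-1/2
    _, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    X = vecs[:, :k]                           # k smallest eigenvectors
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # Deterministic farthest-point seeding, then standard Lloyd iterations.
    centers = [X[0]]
    for _ in range(1, k):
        dist = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels
```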
arXiv Detail & Related papers (2023-11-26T07:11:58Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong-convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Supplementing Recurrent Neural Networks with Annealing to Solve Combinatorial Optimization Problems [2.3642391095636484]
In this paper, we demonstrate the potential of using variational classical annealing (VCA) as an approach to solving real-world optimization problems.
We find that VCA outperforms simulated annealing (SA) on average in the asymptotic limit by one or more orders of magnitude in terms of relative error.
We conclude that in the best-case scenario, VCA can serve as a great alternative when SA fails to find the optimal solution.
arXiv Detail & Related papers (2022-07-17T14:33:17Z)
- QAOA-in-QAOA: solving large-scale MaxCut problems on small quantum machines [81.4597482536073]
Quantum approximate optimization algorithms (QAOAs) utilize the power of quantum machines and inherit the spirit of adiabatic evolution.
We propose QAOA-in-QAOA (QAOA$^2$) to solve arbitrary large-scale MaxCut problems using small quantum machines.
Our method can be seamlessly embedded into other advanced strategies to enhance the capability of QAOAs in large-scale optimization problems.
arXiv Detail & Related papers (2022-05-24T03:49:10Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine-learned ones.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Self-Supervision is All You Need for Solving Rubik's Cube [0.0]
This work introduces a simple and efficient deep learning method for solving problems with a predefined goal, exemplified by the Rubik's Cube.
We demonstrate that, for such problems, training a deep neural network on random scrambles branching from the goal state is sufficient to achieve near-optimal solutions.
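The data-generation idea can be sketched on a toy permutation puzzle standing in for the cube (the puzzle, move set, and all names below are hypothetical, not the paper's code): scrambles branch from the goal state, and each scrambled state is labeled with the move that undoes its last scramble step, i.e. the move pointing back toward the goal.

```python
import random

# Toy stand-in for a cube: states are permutations of 8 stickers,
# moves are two 4-cycles and their inverses.
SOLVED = tuple(range(8))
MOVES = {
    "a":  (1, 2, 3, 0, 4, 5, 6, 7),   # cycle the first four positions
    "a'": (3, 0, 1, 2, 4, 5, 6, 7),
    "b":  (0, 1, 2, 3, 5, 6, 7, 4),   # cycle the last four positions
    "b'": (0, 1, 2, 3, 7, 4, 5, 6),
}
INVERSE = {"a": "a'", "a'": "a", "b": "b'", "b'": "b"}

def apply_move(state, move):
    perm = MOVES[move]
    return tuple(state[p] for p in perm)

def scramble_dataset(n_samples, max_depth, seed=0):
    """Self-supervised data generation: random scrambles branching from
    the goal state, each scrambled state labeled with the inverse of its
    last scramble move (the training target for the network)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        state = SOLVED
        for _ in range(rng.randint(1, max_depth)):
            move = rng.choice(list(MOVES))
            state = apply_move(state, move)
        data.append((state, INVERSE[move]))
    return data
```

A network trained on such pairs learns to predict a goal-directed move from any scrambled state, which is the core of the self-supervised scheme the summary describes.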
arXiv Detail & Related papers (2021-06-06T15:38:50Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.