AutoSAT: Automatically Optimize SAT Solvers via Large Language Models
- URL: http://arxiv.org/abs/2402.10705v3
- Date: Wed, 13 Nov 2024 15:46:08 GMT
- Title: AutoSAT: Automatically Optimize SAT Solvers via Large Language Models
- Authors: Yiwen Sun, Furong Ye, Xianyin Zhang, Shiyu Huang, Bingzhen Zhang, Ke Wei, Shaowei Cai
- Abstract summary: We propose AutoSAT, a framework that automatically optimizes heuristics in a pre-defined modular search space based on existing CDCL solvers.
A realization of AutoSAT outperforms MiniSat on 9 out of 12 datasets and even surpasses the state-of-the-art hybrid solver Kissat on 4 datasets.
- Score: 10.359005620433136
- Abstract: Conflict-Driven Clause Learning (CDCL) is the mainstream framework for solving the Satisfiability problem (SAT), and CDCL solvers typically rely on various heuristics, which have a significant impact on their performance. Modern CDCL solvers, such as MiniSat and Kissat, commonly incorporate several heuristics and select among them according to simple rules, requiring significant time and expert effort to fine-tune in practice. The prevalence of Large Language Models (LLMs) offers a potential way to address this issue. However, generating a CDCL solver from scratch is not effective due to the complexity and context volume of SAT solvers. Instead, we propose AutoSAT, a framework that automatically optimizes heuristics in a pre-defined modular search space based on existing CDCL solvers. Unlike existing automated algorithm design approaches that focus on hyperparameter tuning and operator selection, AutoSAT can generate new efficient heuristics. In this first attempt at optimizing SAT solvers with LLMs, several search strategies, including a greedy hill climber and a (1+1) Evolutionary Algorithm, are employed to guide the LLMs toward better heuristics. Experimental results demonstrate that LLMs can generally enhance the performance of CDCL solvers. A realization of AutoSAT outperforms MiniSat on 9 out of 12 datasets and even surpasses the state-of-the-art hybrid solver Kissat on 4 datasets.
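To make the search loop concrete, below is a minimal sketch of the (1+1) Evolutionary Algorithm strategy the abstract mentions. The function names `llm_rewrite_heuristic` and `evaluate_solver` are illustrative placeholders, not interfaces from the AutoSAT paper or its code.

```python
# Minimal sketch of an LLM-guided (1+1) EA over a single heuristic, in the
# spirit of the search strategies described above. All names are illustrative
# placeholders, not interfaces from the AutoSAT paper or its code.

def llm_rewrite_heuristic(source_code: str) -> str:
    """Placeholder: prompt an LLM to return a modified version of the heuristic."""
    raise NotImplementedError("wire up an LLM client here")

def evaluate_solver(heuristic_code: str, instances: list) -> float:
    """Placeholder: rebuild the solver with this heuristic and return a score
    (e.g. PAR-2, lower is better) over a set of CNF instances."""
    raise NotImplementedError("wire up a benchmark harness here")

def one_plus_one_ea(seed_heuristic: str, instances: list, budget: int = 20) -> str:
    """(1+1) EA: keep one parent heuristic, ask the LLM for a mutated child,
    and accept the child only if it does not degrade benchmark performance."""
    parent = seed_heuristic
    parent_score = evaluate_solver(parent, instances)
    for _ in range(budget):
        child = llm_rewrite_heuristic(parent)
        child_score = evaluate_solver(child, instances)
        if child_score <= parent_score:  # accept ties to keep exploring
            parent, parent_score = child, child_score
    return parent
```

The greedy hill climber mentioned in the abstract fits the same propose-evaluate-accept skeleton with a different proposal and acceptance rule.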
Related papers
- Resource-Constrained Heuristic for Max-SAT [0.848762374187855]
We propose a resource-constrained heuristic for instances of Max-SAT that iteratively decomposes a larger problem into smaller subcomponents.
We analyze a set of variable selection methods, including a novel graph-based method that exploits the structure of a given SAT instance.
We demonstrate our results on a set of randomly generated Max-SAT instances as well as real world examples from the Max-SAT evaluation benchmarks.
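As a rough illustration of the graph-based variable selection idea above (not the authors' exact method), one can build a variable interaction graph from the clauses and grow a subcomponent around a high-degree seed variable:

```python
# Illustrative sketch only: pick a subcomponent of variables for decomposition
# by growing a neighborhood in the variable interaction graph (two variables
# are adjacent if they co-occur in a clause). Clauses are lists of signed
# ints in DIMACS style, e.g. [1, -2, 3].
from collections import defaultdict

def variable_graph(clauses):
    graph = defaultdict(set)
    for clause in clauses:
        vs = {abs(lit) for lit in clause}
        for v in vs:
            graph[v] |= vs - {v}
    return graph

def select_subcomponent(clauses, size):
    graph = variable_graph(clauses)
    seed = max(graph, key=lambda v: len(graph[v]))   # highest-degree variable
    chosen = {seed}
    frontier = set(graph[seed])
    while frontier and len(chosen) < size:
        # Greedily add the frontier variable most connected to the chosen set.
        v = max(frontier, key=lambda u: len(graph[u] & chosen))
        chosen.add(v)
        frontier |= graph[v]
        frontier -= chosen
    return chosen

print(select_subcomponent([[1, -2], [2, 3], [3, -4], [4, 5]], size=3))
```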
arXiv Detail & Related papers (2024-10-11T18:20:08Z) - Navigating the Labyrinth: Evaluating and Enhancing LLMs' Ability to Reason About Search Problems [59.72548591120689]
We introduce a new benchmark, SearchBench, containing 11 unique search problem types.
We show that even the most advanced LLMs fail to solve these problems end-to-end in text.
Instructing LLMs to generate code that solves the problem helps, but only slightly, e.g., GPT-4's performance rises to 11.7%.
arXiv Detail & Related papers (2024-06-18T00:44:58Z) - Machine Learning for SAT: Restricted Heuristics and New Graph Representations [0.8870188183999854]
SAT is a fundamental NP-complete problem with many applications, including automated scheduling.
To solve large instances, SAT solvers have to rely on heuristics, e.g., choosing a branching variable in DPLL and CDCL solvers.
We suggest a strategy of making a few initial steps with a trained ML model and then releasing control to classical heuristics.
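A hedged sketch of that hand-off pattern; the `HybridBrancher` class and the `best_variable` method below are invented for illustration and are not taken from the paper:

```python
# Illustrative only: a branching policy that trusts an ML model for the first
# `ml_steps` decisions and then hands control back to a classical
# activity-based (VSIDS-like) heuristic.
class HybridBrancher:
    def __init__(self, ml_model, activity_scores, ml_steps=50):
        self.ml_model = ml_model          # e.g. a trained GNN scoring variables
        self.activity = activity_scores   # dict: variable -> activity score
        self.ml_steps = ml_steps
        self.decisions = 0

    def pick_branching_variable(self, unassigned):
        self.decisions += 1
        if self.decisions <= self.ml_steps:
            return self.ml_model.best_variable(unassigned)  # hypothetical API
        return max(unassigned, key=lambda v: self.activity.get(v, 0.0))
```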
arXiv Detail & Related papers (2023-07-18T10:46:28Z) - SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SatLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
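The declarative paradigm can be illustrated with an off-the-shelf solver such as Z3 (PyPI package `z3-solver`); in SatLM the LLM would emit the constraint block itself, and the toy puzzle below is only a stand-in:

```python
# Illustration of the declarative paradigm: state constraints, let a solver
# derive the answer. Requires the `z3-solver` package. The toy puzzle below
# stands in for whatever specification an LLM would generate.
from z3 import Int, Solver, sat

alice, bob = Int("alice"), Int("bob")
s = Solver()
s.add(alice + bob == 30)   # "Alice and Bob have 30 apples together"
s.add(alice == bob + 4)    # "Alice has 4 more apples than Bob"

if s.check() == sat:
    m = s.model()
    print("alice =", m[alice], "bob =", m[bob])  # alice = 17, bob = 13
```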
arXiv Detail & Related papers (2023-05-16T17:55:51Z) - Towards Tackling MaxSAT by Combining Nested Monte Carlo with Local Search [10.70006528984961]
We introduce two algorithmic variations over UCTMAXSAT.
First, a nesting of the tree search inspired by the Nested Monte Carlo Search algorithm is effective on most instance types in the benchmark.
Second, we observe that when using a static flip limit in SLS, the ideal budget depends heavily on the instance size, so we propose to set it dynamically.
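A minimal sketch of the dynamic flip-limit idea; the linear scaling rule and the constants below are assumptions for illustration, not the authors' formula:

```python
# Illustrative only: choose the SLS flip budget per rollout as a function of
# instance size rather than as a global constant. The multiplier and cap are
# placeholders, not values from the paper.
def dynamic_flip_limit(num_variables: int, flips_per_var: int = 20,
                       max_flips: int = 1_000_000) -> int:
    return min(flips_per_var * num_variables, max_flips)

print(dynamic_flip_limit(500))      # 10000
print(dynamic_flip_limit(200_000))  # capped at 1000000
```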
arXiv Detail & Related papers (2023-02-26T03:37:26Z) - Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection [59.77647907277523]
Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks.
However, ACL needs tremendous running time to generate the adversarial variants of all training data.
This paper proposes a robustness-aware coreset selection (RCS) method to speed up ACL.
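A heavily simplified sketch of the coreset selection step; the per-example score is left abstract and the top-k rule is an assumption, since the paper's RCS criterion is more involved:

```python
# Illustrative only: keep the top fraction of examples by some precomputed
# robustness-related score (e.g. a loss gap between clean and perturbed
# views), so adversarial training touches fewer examples per epoch.
import numpy as np

def select_coreset(scores: np.ndarray, fraction: float = 0.3) -> np.ndarray:
    """Return indices of the highest-scoring `fraction` of the dataset."""
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[-k:]

scores = np.random.rand(1000)         # stand-in for per-example robustness scores
subset = select_coreset(scores, 0.3)  # indices of the 300 selected examples
print(len(subset))                    # 300
```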
arXiv Detail & Related papers (2023-02-08T03:20:14Z) - Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
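The equivalent-form detection step can be sketched with symbolic canonicalization; sympy is used here purely as a stand-in for the paper's own machinery:

```python
# Illustrative only: reject candidate update-rule expressions whose canonical
# (simplified) form has already been explored. Requires sympy.
import random
import sympy as sp

g, m = sp.symbols("g m")  # gradient and momentum symbols
CANDIDATE_OPS = [g, m, g + m, g * m, m * g, sp.sign(g), g + g, 2 * g]

seen = set()
accepted = []
for expr in random.sample(CANDIDATE_OPS, len(CANDIDATE_OPS)):
    canonical = sp.srepr(sp.simplify(expr))
    if canonical in seen:          # equivalent form already explored: reject
        continue
    seen.add(canonical)
    accepted.append(expr)

print(accepted)  # g*m/m*g and g+g/2*g each collapse to one canonical form
```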
arXiv Detail & Related papers (2022-09-27T17:51:31Z) - New Core-Guided and Hitting Set Algorithms for Multi-Objective Combinatorial Optimization [0.0]
We present two novel unsatisfiability-based algorithms for Multi-Objective Combinatorial Optimization (MOCO).
The first is a core-guided MOCO solver; the second is a hitting set-based MOCO solver.
Experimental results show that our new unsatisfiability-based algorithms can outperform state-of-the-art SAT-based algorithms for MOCO.
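A hedged sketch of the single-objective core-guided building block that such unsatisfiability-based solvers generalize to several objectives; the `sat_solve` interface is hypothetical, and real core-guided solvers relax cores with cardinality constraints rather than discarding literals as this greedy simplification does:

```python
# Illustrative only: a greedy simplification of the core-guided idea. Every
# unsat core forces at least one soft literal to be given up.
def sat_solve(hard_clauses, assumptions):
    """Hypothetical assumption-based SAT call: returns (True, model) or
    (False, core), where `core` is a subset of the assumption literals."""
    raise NotImplementedError("plug in an assumption-based SAT solver here")

def core_guided_sketch(hard_clauses, soft_literals):
    """Assumes the hard clauses alone are satisfiable."""
    active = set(soft_literals)
    dropped = set()
    while True:
        ok, result = sat_solve(hard_clauses, assumptions=sorted(active))
        if ok:
            return result, dropped       # model plus the softs we gave up
        victim = next(iter(result))      # pick one soft literal from the core
        active.discard(victim)
        dropped.add(victim)
```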
arXiv Detail & Related papers (2022-04-22T09:46:44Z) - Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z) - Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
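A toy sketch in that spirit, not the paper's architecture: each variable is described by simple occurrence features, a small Transformer encoder processes them in one shot, and a head predicts a truth-value probability per variable.

```python
# Illustrative only: a tiny one-shot predictor loosely in the spirit of
# Transformer-based MaxSAT models. Real models use richer CNF encodings.
import torch
import torch.nn as nn

class OneShotMaxSAT(nn.Module):
    def __init__(self, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # features: (#positive, #negative) occurrences
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, var_features):        # (batch, num_vars, 2)
        h = self.encoder(self.embed(var_features))
        return torch.sigmoid(self.head(h)).squeeze(-1)  # P(var = True)

def cnf_to_features(clauses, num_vars):
    feats = torch.zeros(num_vars, 2)
    for clause in clauses:
        for lit in clause:
            feats[abs(lit) - 1, 0 if lit > 0 else 1] += 1
    return feats.unsqueeze(0)               # add a batch dimension

model = OneShotMaxSAT()
probs = model(cnf_to_features([[1, -2], [2, 3], [-1, -3]], num_vars=3))
print(probs.shape)                           # torch.Size([1, 3])
```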
arXiv Detail & Related papers (2021-07-15T04:47:35Z)