AutoSAT: Automatically Optimize SAT Solvers via Large Language Models
- URL: http://arxiv.org/abs/2402.10705v2
- Date: Fri, 31 May 2024 11:38:00 GMT
- Title: AutoSAT: Automatically Optimize SAT Solvers via Large Language Models
- Authors: Yiwen Sun, Xianyin Zhang, Shiyu Huang, Shaowei Cai, BingZhen Zhang, Ke Wei
- Abstract summary: We present AutoSAT, a novel framework for automatically optimizing SAT solvers.
AutoSAT is based on Large Language Models (LLMs), which can autonomously generate code.
AutoSAT operates on a plug-and-play basis, eliminating the need for extensive manual effort and model training.
- Score: 10.87846542244079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heuristics are crucial in SAT solvers, but no heuristic rules suit all SAT problems, so it pays to refine specific heuristics for specific problems. In this context, we present AutoSAT, a novel framework for automatically optimizing heuristics in SAT solvers. AutoSAT is based on Large Language Models (LLMs), which can autonomously generate code, conduct evaluations, and then use the resulting feedback to further optimize heuristics, thereby reducing human intervention and enhancing solver capabilities. AutoSAT operates on a plug-and-play basis, eliminating the need for extensive manual effort and model training, and it relies on a multi-agent collaborative process with fault tolerance to ensure robust heuristic optimization. We implement AutoSAT on a lightweight Conflict-Driven Clause Learning (CDCL) solver, EasySAT, whose code base is about one-fiftieth the size of the state-of-the-art hybrid solver Kissat, and extensive experiments on seven datasets demonstrate its strong performance: AutoSAT outperforms Kissat on two of the seven test datasets and performs comparably on three others. Some heuristics generated by AutoSAT are counter-intuitive yet highly effective.
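To make the workflow concrete, the sketch below outlines the generate-evaluate-feedback loop the abstract describes. It is a minimal illustration, not AutoSAT's actual implementation: the names (build_prompt, query_llm, run_benchmark), the prompt wording, and the loop structure are assumptions, with the LLM and the solver benchmark supplied as caller-provided callables.

```python
# Minimal sketch of an LLM-driven heuristic-optimization loop in the spirit of
# AutoSAT. `query_llm` and `run_benchmark` are caller-supplied callables; the
# prompt wording and the loop structure are illustrative assumptions only.

def build_prompt(heuristic_code: str, feedback: str) -> str:
    """Combine the current heuristic source and evaluation feedback into one prompt."""
    return (
        "You are improving a heuristic inside a CDCL SAT solver.\n"
        f"Current implementation:\n{heuristic_code}\n"
        f"Evaluation feedback:\n{feedback}\n"
        "Return an improved implementation of the same function."
    )

def optimize_heuristic(initial_code, query_llm, run_benchmark, iterations=10):
    """query_llm(prompt) -> candidate source code.
    run_benchmark(code) -> number of instances solved (raises on broken code)."""
    best_code = initial_code
    best_score = run_benchmark(initial_code)
    feedback = f"Baseline solved {best_score} instances."
    for _ in range(iterations):
        candidate = query_llm(build_prompt(best_code, feedback))    # generate
        try:
            score = run_benchmark(candidate)                        # evaluate
        except Exception as err:                                    # fault tolerance:
            feedback = f"Candidate failed to compile or run: {err}" # keep the best code
            continue
        feedback = f"Candidate solved {score} instances (best so far: {best_score})."
        if score > best_score:                                      # keep improvements only
            best_code, best_score = candidate, score
    return best_code
```

The fault-tolerance branch mirrors the abstract's point that broken candidates should not derail the search: the best known heuristic is always retained while the failure is reported back as feedback.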
Related papers
- Automatically discovering heuristics in a complex SAT solver with large language models [9.96578394096009]
The satisfiability problem (SAT) is a cornerstone of computational complexity with broad industrial applications.
Modern SAT solvers rely on manually constrained search spaces and yield limited performance gains.
This work introduces a novel paradigm that effectively optimizes complex SAT solvers via Large Language Models (LLMs).
arXiv Detail & Related papers (2025-07-30T17:52:25Z)
- SolSearch: An LLM-Driven Framework for Efficient SAT-Solving Code Generation [13.056487325961688]
The Satisfiability (SAT) problem is a core challenge with significant applications in software engineering.
This paper presents SolSearch, a novel framework that harnesses large language models (LLMs) to discover and optimize SAT-solving strategies automatically.
arXiv Detail & Related papers (2025-02-20T07:25:21Z)
- Deeply Optimizing the SAT Solver for the IC3 Algorithm [0.23749905164931204]
We introduce several optimizations for the SAT solver in IC3 based on our observations.
We replace the binary heap with buckets to achieve constant-time operations.
We develop a novel lightweight CDCL SAT solver, GipSAT, which integrates these optimizations.
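As a hedged illustration of the heap-to-bucket swap mentioned above, the sketch below shows a generic bucket queue with O(1) insertion and amortized O(1) extract-max when priorities are small integers. It conveys the general idea only and is not GipSAT's actual data structure.

```python
# Generic bucket queue: one list per integer priority plus a pointer to the
# highest non-empty bucket. Insert is O(1); extract-max is amortized O(1),
# versus O(log n) for a binary heap. Illustrative only, not GipSAT's code.

class BucketQueue:
    def __init__(self, max_priority: int):
        self.buckets = [[] for _ in range(max_priority + 1)]
        self.top = -1  # index of the highest possibly non-empty bucket

    def push(self, item, priority: int) -> None:
        self.buckets[priority].append(item)
        if priority > self.top:
            self.top = priority

    def pop_max(self):
        while self.top >= 0 and not self.buckets[self.top]:
            self.top -= 1          # skip buckets emptied by earlier pops
        if self.top < 0:
            raise IndexError("pop from an empty BucketQueue")
        return self.buckets[self.top].pop()
```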
arXiv Detail & Related papers (2025-01-24T12:40:43Z)
- Resource-Constrained Heuristic for Max-SAT [0.848762374187855]
We propose a resource-constrained heuristic for instances of Max-SAT that iteratively decomposes a larger problem into smaller subcomponents.
We analyze a set of variable selection methods, including a novel graph-based method that exploits the structure of a given SAT instance.
We demonstrate our results on a set of randomly generated Max-SAT instances as well as real world examples from the Max-SAT evaluation benchmarks.
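As a rough, hypothetical illustration of graph-based variable selection (not the paper's exact method), the snippet below ranks variables by their degree in the variable-interaction graph built from a DIMACS-style clause list.

```python
# Hypothetical illustration: rank variables by how many distinct variables they
# co-occur with in clauses (their degree in the variable-interaction graph).

from collections import defaultdict
from itertools import combinations

def select_variables(clauses, k):
    """clauses: list of lists of non-zero ints (DIMACS-style literals)."""
    neighbours = defaultdict(set)
    for clause in clauses:
        for u, v in combinations({abs(lit) for lit in clause}, 2):
            neighbours[u].add(v)
            neighbours[v].add(u)
    return sorted(neighbours, key=lambda v: len(neighbours[v]), reverse=True)[:k]

# Variable 1 co-occurs with 2, 3 and 4, so it is selected first.
print(select_variables([[1, -2], [1, 3], [-1, 4], [2, 3]], k=2))  # -> [1, 2]
```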
arXiv Detail & Related papers (2024-10-11T18:20:08Z)
- Navigating the Labyrinth: Evaluating and Enhancing LLMs' Ability to Reason About Search Problems [59.72548591120689]
We introduce a new benchmark, SearchBench, containing 11 unique search problem types.
We show that even the most advanced LLMs fail to solve these problems end-to-end in text.
Instructing LLMs to generate code that solves the problem helps, but only slightly, e.g., GPT4's performance rises to 11.7%.
arXiv Detail & Related papers (2024-06-18T00:44:58Z)
- Machine Learning for SAT: Restricted Heuristics and New Graph Representations [0.8870188183999854]
SAT is a fundamental NP-complete problem with many applications, including automated scheduling.
To solve large instances, SAT solvers have to rely on heuristics, e.g., choosing a branching variable in DPLL and CDCL solvers.
We suggest a strategy of making a few initial steps with a trained ML model and then releasing control to classical heuristics.
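A schematic sketch of that hand-over strategy is shown below; the callables and the cut-off value are assumptions for illustration, not the paper's configuration.

```python
# Schematic hand-over: let a trained model pick the first few branching
# variables, then defer to a classical heuristic (e.g. VSIDS). Both pickers
# are caller-supplied callables; the cut-off of 10 is an arbitrary example.

def pick_branching_variable(model_pick, classic_pick, state, decisions_made, handover_at=10):
    if decisions_made < handover_at:
        return model_pick(state)    # ML-guided early decisions
    return classic_pick(state)      # classical heuristic afterwards
```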
arXiv Detail & Related papers (2023-07-18T10:46:28Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
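To show the flavour of the declarative approach, the toy example below states a word problem's constraints for an off-the-shelf solver (here Z3 via the z3-solver package) and lets it derive the answer rather than computing it imperatively. This is only an illustration of the idea, not SatLM's actual pipeline.

```python
# Toy declarative specification: "Alice is twice as old as Bob; in 5 years their
# ages will sum to 31; how old is Bob?"  The constraints are stated, not solved
# by hand; Z3 derives the answer.

from z3 import Int, Solver, sat

alice, bob = Int("alice"), Int("bob")
s = Solver()
s.add(alice == 2 * bob)               # Alice is twice as old as Bob
s.add((alice + 5) + (bob + 5) == 31)  # in 5 years the ages sum to 31

if s.check() == sat:
    print(s.model()[bob])             # prints 7
```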
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
- Towards Tackling MaxSAT by Combining Nested Monte Carlo with Local Search [10.70006528984961]
We introduce two algorithmic variations over UCTMAXSAT.
First, a nesting of the tree search inspired by the Nested Monte Carlo Search algorithm is effective on most instance types in the benchmark.
Second, we observe that with a static flip limit in SLS the ideal budget depends heavily on the instance size, so we propose to set the limit dynamically.
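A minimal sketch of such a dynamic budget is given below; the linear scaling rule and its coefficients are illustrative assumptions, not the tuned values from the paper.

```python
# Illustrative dynamic flip limit for an SLS run inside the tree search:
# scale the budget with instance size instead of using one fixed constant.
# The linear rule and the constants 50 and 1000 are arbitrary examples.

def flip_limit(num_variables: int, per_variable: int = 50, minimum: int = 1000) -> int:
    return max(minimum, per_variable * num_variables)

print(flip_limit(200))  # -> 10000
print(flip_limit(10))   # -> 1000 (floor keeps tiny instances from starving)
```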
arXiv Detail & Related papers (2023-02-26T03:37:26Z)
- Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection [59.77647907277523]
Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks.
ACL needs tremendous running time to generate the adversarial variants of all training data.
This paper proposes a robustness-aware coreset selection (RCS) method to speed up ACL.
arXiv Detail & Related papers (2023-02-08T03:20:14Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the search spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- New Core-Guided and Hitting Set Algorithms for Multi-Objective Combinatorial Optimization [0.0]
We present two novel unsatisfiability-based algorithms for Multi-Objective Combinatorial Optimization.
The first is a core-guided MOCO solver; the second is a hitting-set-based MOCO solver.
Experimental results show that our new unsatisfiability-based algorithms can outperform state-of-the-art SAT-based algorithms for MOCO.
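For the single-objective case, the skeleton below sketches the implicit-hitting-set pattern that hitting-set-based solvers build on; the oracle interfaces are hypothetical caller-supplied callables, and the multi-objective extension described in the paper adds machinery not shown here.

```python
# Skeleton of the implicit-hitting-set pattern behind hitting-set-based MaxSAT
# solvers: alternate a minimum hitting set over the cores found so far with a
# SAT call that either certifies optimality or yields a new core.

def implicit_hitting_set(min_hitting_set, sat_check):
    """min_hitting_set(cores) -> cheapest set of soft clauses hitting every core.
    sat_check(relaxed) -> (True, None) if satisfiable with only `relaxed` softs
    dropped, else (False, new_core)."""
    cores = []
    while True:
        hs = min_hitting_set(cores)  # cheapest relaxation consistent with all cores
        ok, core = sat_check(hs)
        if ok:
            return hs                # relaxing exactly `hs` is optimal
        cores.append(core)           # learn a new core and iterate
```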
arXiv Detail & Related papers (2022-04-22T09:46:44Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)