Learning Interpretable Heuristics for WalkSAT
- URL: http://arxiv.org/abs/2307.04608v1
- Date: Mon, 10 Jul 2023 14:52:14 GMT
- Title: Learning Interpretable Heuristics for WalkSAT
- Authors: Yannet Interian and Sara Bernardini
- Abstract summary: We present an approach for learning effective variable scoring functions and noise parameters by using reinforcement learning.
Our experimental results show improvements with respect to both a WalkSAT baseline and another local search learned heuristic.
- Score: 0.34265828682659694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Local search algorithms are well-known methods for solving large, hard
instances of the satisfiability problem (SAT). The performance of these
algorithms crucially depends on heuristics for setting noise parameters and
scoring variables. The optimal setting for these heuristics varies for
different instance distributions. In this paper, we present an approach for
learning effective variable scoring functions and noise parameters by using
reinforcement learning. We consider satisfiability problems from different
instance distributions and learn specialized heuristics for each of them. Our
experimental results show improvements with respect to both a WalkSAT baseline
and another local search learned heuristic.
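For context, the noise parameter and variable scoring function mentioned in the abstract are the two tunable components of the standard WalkSAT loop. The sketch below is a generic WalkSAT-style implementation, not the authors' code: `noise` and `score` mark where the learned heuristics would plug in, and the defaults shown are the classic hand-crafted choices (a fixed random-walk probability and negative break-count scoring); the learned forms are not given in the abstract.

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, noise=0.5, score=None):
    """Minimal WalkSAT-style local search (illustrative sketch only).

    Clauses are lists of non-zero ints: positive = variable, negative = negated.
    `noise` and `score` are the two heuristic components the paper proposes to
    learn with reinforcement learning; the defaults are classic hand-crafted ones.
    """
    if score is None:
        # Classic scoring: prefer the variable whose flip breaks the fewest clauses.
        score = lambda breaks: -breaks

    # Random initial assignment; index 0 is an unused placeholder (variables are 1-based).
    assign = [False] + [random.choice([False, True]) for _ in range(n_vars)]

    def lit_true(lit):
        return assign[abs(lit)] == (lit > 0)

    def break_count(var):
        # Number of clauses that become unsatisfied if `var` is flipped:
        # clauses currently satisfied only by `var`'s literal.
        cnt = 0
        for c in clauses:
            sat_lits = [l for l in c if lit_true(l)]
            if len(sat_lits) == 1 and abs(sat_lits[0]) == var:
                cnt += 1
        return cnt

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(lit_true(l) for l in c)]
        if not unsat:
            return assign[1:]  # satisfying assignment found
        clause = random.choice(unsat)
        candidates = [abs(l) for l in clause]
        if random.random() < noise:
            var = random.choice(candidates)  # noisy step: random variable in the clause
        else:
            var = max(candidates, key=lambda v: score(break_count(v)))  # greedy step
        assign[var] = not assign[var]
    return None  # flip budget exhausted without finding a solution
```

For example, `walksat([[1, -2], [2, 3], [-1, -3]], n_vars=3)` searches for a satisfying assignment of a small formula given as clause lists; the paper's contribution is replacing the default `score` and `noise` with RL-learned, distribution-specific versions.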
Related papers
- Noise to the Rescue: Escaping Local Minima in Neurosymbolic Local Search [50.24983453990065]
We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving.
We propose the Gödel Trick, which adds noise to the model's logits to escape local optima.
arXiv Detail & Related papers (2025-03-03T18:42:13Z)
- Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation [18.672184596814077]
We propose taking into account the variation in the ranking function output as a random variable representing a proxy performance metric.
During the search process, we strive to construct an ordering of the performance metrics to determine the best architecture.
Our experiments show that the proposed ordering can effectively boost performance of a search on standard benchmark search spaces.
arXiv Detail & Related papers (2025-02-27T01:01:22Z)
- Feature-based Evolutionary Diversity Optimization of Discriminating Instances for Chance-constrained Optimization Problems [9.617143859697322]
We evolve benchmarking instances for chance-constrained optimization problems that contain components characterized by their expected values and variances.
Our method successfully generates diverse instances based on different features while effectively distinguishing the performance between a pair of algorithms.
arXiv Detail & Related papers (2025-01-24T06:55:54Z)
- ANOVA-boosting for Random Fourier Features [0.0]
Our algorithms are able to find an index set of important input variables and variable interactions reliably.
Our algorithms have the advantage of interpretability, meaning that the influence of every input variable is known in the learned model.
arXiv Detail & Related papers (2024-04-03T20:34:18Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones, and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Winning Prize Comes from Losing Tickets: Improve Invariant Learning by Exploring Variant Parameters for Out-of-Distribution Generalization [76.27711056914168]
Out-of-Distribution (OOD) Generalization aims to learn robust models that generalize well to various environments without fitting to distribution-specific features.
Recent studies based on Lottery Ticket Hypothesis (LTH) address this problem by minimizing the learning target to find some of the parameters that are critical to the task.
We propose Exploring Variant parameters for Invariant Learning (EVIL) which also leverages the distribution knowledge to find the parameters that are sensitive to distribution shift.
arXiv Detail & Related papers (2023-10-25T06:10:57Z)
- Score-Based Methods for Discrete Optimization in Deep Learning [30.446056972242616]
We investigate a score-based approximation framework to solve such problems.
We experimentally demonstrate, in adversarial set classification tasks, that our method achieves a superior trade-off in terms of speed and solution quality compared to existing methods.
arXiv Detail & Related papers (2023-10-15T17:14:17Z)
- Using deep learning to construct stochastic local search SAT solvers with performance bounds [0.0]
We train oracles using Graph Neural Networks and evaluate them on two SLS solvers on random SAT instances of varying difficulty.
We find that access to GNN-based oracles significantly boosts the performance of both solvers.
arXiv Detail & Related papers (2023-09-20T16:27:52Z)
- Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in terms of both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z)
- Nonequilibrium Monte Carlo for unfreezing variables in hard combinatorial optimization [1.1783108699768]
We introduce a quantum-inspired family of nonlocal Nonequilibrium Monte Carlo (NMC) algorithms by developing an adaptive gradient-free strategy.
We observe significant speedup and robustness over both specialized solvers and generic solvers.
arXiv Detail & Related papers (2021-11-26T17:45:32Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)