Evidence for Long-Tails in SLS Algorithms
- URL: http://arxiv.org/abs/2107.00378v1
- Date: Thu, 1 Jul 2021 11:31:39 GMT
- Title: Evidence for Long-Tails in SLS Algorithms
- Authors: Florian Wörz and Jan-Hendrik Lorenz
- Abstract summary: Stochastic local search (SLS) is a successful paradigm for solving the satisfiability problem of propositional logic.
A recent development in this area involves solving not the original instance, but a modified, yet logically equivalent one.
This paper studies how this technique affects the runtimes of SLS solvers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic local search (SLS) is a successful paradigm for solving the
satisfiability problem of propositional logic. A recent development in this
area involves solving not the original instance, but a modified, yet logically
equivalent one. Empirically, this technique was found to be promising as it
improves the performance of state-of-the-art SLS solvers.
Currently, there is only a shallow understanding of how this modification
technique affects the runtimes of SLS solvers. Thus, we model this modification
process and conduct an empirical analysis of the hardness of logically
equivalent formulas. Our results are twofold. First, if the modification
process is treated as a random process, a lognormal distribution perfectly
characterizes the hardness, implying that the hardness is long-tailed. This
means that the modification technique can be further improved by implementing
an additional restart mechanism. Thus, as a second contribution, we
theoretically prove that all algorithms exhibiting this long-tail property can
be further improved by restarts. Consequently, all SAT solvers employing this
modification technique can be enhanced.
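The abstract's central claim, that long-tailed (e.g. lognormal) runtime distributions can be exploited by restarts, can be illustrated numerically. The sketch below is illustrative only: the lognormal parameters and the fixed-cutoff restart strategy are assumptions, not values or policies taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal runtime distribution; mu and sigma are
# illustrative choices, not fitted parameters from the paper.
mu, sigma = 2.0, 1.5
runtimes = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

def expected_restarted_runtime(samples, cutoff):
    """Expected total work under a fixed-cutoff restart strategy:
    each failed attempt costs `cutoff`, the successful one its own runtime."""
    p = np.mean(samples <= cutoff)              # chance one attempt finishes in time
    mean_success = samples[samples <= cutoff].mean()
    return (1 - p) / p * cutoff + mean_success  # E[#failures] * cutoff + success cost

no_restart = runtimes.mean()
with_restart = expected_restarted_runtime(runtimes, cutoff=np.quantile(runtimes, 0.5))
print(f"mean runtime without restarts: {no_restart:.1f}")
print(f"expected runtime with restarts: {with_restart:.1f}")
```

Because the lognormal's heavy right tail inflates the unrestarted mean, cutting off slow runs and retrying lowers the expected cost, which is the intuition behind the paper's restart result.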
Related papers
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
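The Deep Equilibrium framework mentioned above treats a network's output as the fixed point of an iteration map. A minimal sketch of that fixed-point computation follows; the contraction map `f` is a stand-in for illustration, not the paper's learned regularizer.

```python
import numpy as np

def f(z, x, W):
    """Stand-in iteration map; contractive when ||W|| < 1, since |tanh'| <= 1."""
    return np.tanh(W @ z + x)

def deq_forward(x, W, tol=1e-10, max_iter=1000):
    """Find z* with z* = f(z*, x) by plain fixed-point iteration."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x, W)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

rng = np.random.default_rng(1)
W = 0.4 * rng.standard_normal((5, 5)) / np.sqrt(5)  # small norm -> contraction
x = rng.standard_normal(5)
z_star = deq_forward(x, W)
# At convergence, z* satisfies the fixed-point equation up to the tolerance.
print(np.linalg.norm(z_star - f(z_star, x, W)))
```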
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Directed Acyclic Graphs With Tears [8.774590352292932]
DAGs with Tears is a new type of structure learning based on mixed-integer programming.
In this work, the reason for challenge 1) is analyzed theoretically, and a novel method named DAGs with Tears is proposed based on mixed-integer programming.
In addition, prior knowledge can be incorporated into the proposed method, making structure learning more practical and useful in industrial processes.
arXiv Detail & Related papers (2023-02-04T13:00:52Z) - Towards an Understanding of Long-Tailed Runtimes of SLS Algorithms [0.0]
The GapSAT solver demonstrated a successful way to improve the average performance of an SLS solver by learning additional information.
We propose a method for generating logically equivalent problem formulations.
We prove that the runtimes for Schöning's random walk algorithm are approximately Johnson SB distributed.
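Schöning's random walk algorithm, whose runtimes the paper above analyzes, is simple enough to sketch directly: pick a random assignment, then repeatedly flip a variable from an unsatisfied clause, restarting after 3n steps. The clause encoding and parameters below are illustrative.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def schoening(clauses, n_vars, max_tries=200):
    """Schöning's random walk for k-SAT. Clauses are tuples of DIMACS-style
    nonzero ints: positive literal i means variable i is True."""
    for _ in range(max_tries):
        assign = [random.random() < 0.5 for _ in range(n_vars)]
        for _ in range(3 * n_vars):
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit) - 1] for lit in c)]
            if not unsat:
                return assign  # all clauses satisfied
            # Flip a uniformly random variable of a random unsatisfied clause.
            lit = random.choice(random.choice(unsat))
            assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None  # no solution found within the try budget

# Toy satisfiable 3-CNF: (x1 v x2 v -x3) ^ (-x1 v x3 v x2) ^ (-x2 v x3 v x1)
clauses = [(1, 2, -3), (-1, 3, 2), (-2, 3, 1)]
model = schoening(clauses, n_vars=3)
print(model)
```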
arXiv Detail & Related papers (2022-10-24T12:22:25Z) - Linear Temporal Logic Modulo Theories over Finite Traces (Extended
Version) [72.38188258853155]
This paper studies an extension of Linear Temporal Logic over Finite Traces (LTLf) in which
proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z) - Bolstering Stochastic Gradient Descent with Model Building [0.0]
The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates.
We propose an alternative approach to line search by using a new algorithm based on forward step model building.
We show that the proposed algorithm achieves faster convergence and better generalization in well-known test problems.
arXiv Detail & Related papers (2021-11-13T06:54:36Z) - Learning to solve TV regularized problems with unrolled algorithms [18.241062505073234]
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals.
We develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures.
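The claim that Total Variation promotes piecewise-constant signals can be seen from the penalty itself: the sum of absolute differences between neighbouring samples. A minimal 1-D illustration (signal and noise level chosen arbitrarily):

```python
import numpy as np

def tv(x):
    """Anisotropic 1-D total variation: sum of absolute neighbour differences."""
    return np.sum(np.abs(np.diff(x)))

piecewise = np.concatenate([np.zeros(50), np.ones(50)])  # one unit jump
rng = np.random.default_rng(0)
noisy = piecewise + 0.1 * rng.standard_normal(100)

print(tv(piecewise))  # 1.0: only the single step contributes
print(tv(noisy))      # much larger: every sample-to-sample wiggle counts
```

A piecewise-constant signal pays only for its jumps, while noise pays at every sample, so minimizing a TV-regularized objective pushes solutions toward flat segments.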
arXiv Detail & Related papers (2020-10-19T14:19:02Z) - Investigating the Scalability and Biological Plausibility of the
Activation Relaxation Algorithm [62.997667081978825]
Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Second-Order Guarantees in Centralized, Federated and Decentralized
Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.