Too much information: CDCL solvers need to forget and perform restarts
- URL: http://arxiv.org/abs/2202.01030v1
- Date: Tue, 1 Feb 2022 10:16:04 GMT
- Title: Too much information: CDCL solvers need to forget and perform restarts
- Authors: Tom Krüger, Jan-Hendrik Lorenz, and Florian Wörz
- Abstract summary: Conflict-driven clause learning (CDCL) is a remarkably successful paradigm for solving the satisfiability problem of propositional logic.
This paper demonstrates, quite surprisingly, that clause learning can not only improve the runtime but can oftentimes dramatically deteriorate it.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conflict-driven clause learning (CDCL) is a remarkably successful paradigm
for solving the satisfiability problem of propositional logic. Instead of a
simple depth-first backtracking approach, this kind of solver learns the reason
behind occurring conflicts in the form of additional clauses. However, despite
the enormous success of CDCL solvers, there is still only a shallow
understanding of which factors influence the performance of these solvers, and in what way.
This paper will demonstrate, quite surprisingly, that clause learning
(without being able to get rid of some clauses) can not only improve the
runtime but can oftentimes dramatically deteriorate it. By conducting an extensive
empirical analysis, we find that the runtime distributions of CDCL solvers are
multimodal. This multimodality can be seen as a reason for the deterioration
phenomenon described above. Simultaneously, it also gives an indication of why
clause learning in combination with clause deletion and restarts is virtually
the de facto standard of SAT solving in spite of this phenomenon. As a final
contribution, we will show that Weibull mixture distributions can accurately
describe the multimodal distributions. Thus, adding new clauses to a base
instance has an inherent effect of making runtimes long-tailed. This insight
provides a theoretical explanation as to why the techniques of restarts and
clause deletion are useful in CDCL solvers.
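To make the long-tail argument above concrete, here is a minimal simulation sketch. It is not taken from the paper; the mixture weights, Weibull shapes and scales, and the restart cutoff are all hypothetical, chosen only to mimic a multimodal runtime distribution with one heavy-tailed mode. It compares the mean cost of running every instance to completion against a fixed-cutoff restart strategy.

```python
# Illustrative sketch only: all distributions and parameters below are
# hypothetical, standing in for the multimodal, heavy-tailed runtime
# distributions the paper observes empirically.
import numpy as np

rng = np.random.default_rng(0)

def sample_runtime(n=1):
    """Draw runtimes from a two-component Weibull mixture: a light-tailed
    'easy' mode (shape > 1) and a heavy-tailed 'hard' mode (shape < 1)."""
    easy = rng.weibull(2.0, n) * 50.0
    hard = rng.weibull(0.5, n) * 500.0
    return np.where(rng.random(n) < 0.7, easy, hard)

def restarted_runtime(cutoff, max_tries=10_000):
    """Total cost of a fixed-cutoff restart strategy: each abandoned run
    costs `cutoff`; the first run finishing within the cutoff ends the loop."""
    total = 0.0
    for _ in range(max_tries):
        t = float(sample_runtime(1)[0])
        if t <= cutoff:
            return total + t
        total += cutoff
    return total  # practically unreachable for these parameters

no_restart = sample_runtime(100_000)
with_restart = np.array([restarted_runtime(cutoff=100.0) for _ in range(2_000)])

print(f"mean runtime without restarts: {no_restart.mean():8.1f}")
print(f"mean runtime with restarts:    {with_restart.mean():8.1f}")
```

Under these assumptions the restarted mean comes out far smaller than the plain mean, mirroring the abstract's argument: once added clauses push the runtime distribution into the long-tailed regime, cutting runs short (restarts) and, analogously, discarding some of the accumulated clauses (clause deletion) pays off in expectation.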
Related papers
- ICL-TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models [103.45785408116146]
Continual learning (CL) aims to train a model that can solve multiple tasks presented sequentially.
Recent CL approaches have achieved strong performance by leveraging large pre-trained models that generalize well to downstream tasks.
However, such methods lack theoretical guarantees, making them prone to unexpected failures.
We bridge this gap by integrating an empirically strong approach into a principled framework, designed to prevent forgetting.
arXiv Detail & Related papers (2024-10-01T12:58:37Z)
- Larger Language Models Don't Care How You Think: Why Chain-of-Thought Prompting Fails in Subjective Tasks [25.562937159039038]
In-Context Learning (ICL) in Large Language Models (LLMs) has emerged as the dominant technique for performing natural language tasks.
We show that ICL relies mostly on the retrieval of task priors and less so on "learning" to perform tasks.
We find that, surprisingly, Chain-of-Thought (CoT) indeed suffers from the same posterior collapse as ICL for larger language models.
arXiv Detail & Related papers (2024-09-10T03:06:17Z)
- Many-Shot In-Context Learning [58.395589302800566]
Large language models (LLMs) excel at few-shot in-context learning (ICL).
We observe significant performance gains across a wide variety of generative and discriminative tasks.
Unlike few-shot learning, many-shot learning is effective at overriding pretraining biases.
arXiv Detail & Related papers (2024-04-17T02:49:26Z)
- RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z)
- Better patching using LLM prompting, via Self-Consistency [5.892272127970584]
Self-consistency (S-C) is an exciting, substantially better technique for generating explanations for problems.
This paper describes an application of the S-C approach to program repair, using the commit log on the fix as the explanation.
We achieve state-of-the-art results, beating previous approaches to prompting-based program repair on the MODIT dataset.
arXiv Detail & Related papers (2023-05-31T18:28:46Z)
- SCL(FOL) Can Simulate Non-Redundant Superposition Clause Learning [0.3867363075280543]
We show that SCL(FOL) can simulate the derivation of non-redundant clauses by superposition for first-order logic without equality.
Superposition-based reasoning is performed with respect to a fixed reduction ordering.
arXiv Detail & Related papers (2023-05-22T11:12:39Z)
- Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
arXiv Detail & Related papers (2020-09-30T16:24:21Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions are efficiently automated. Using models with more than 8000 variables, checking is computed in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
- On the Effect of Learned Clauses on Stochastic Local Search [0.0]
Conflict-driven clause learning (CDCL) and stochastic local search (SLS) are used in SAT solvers.
We experimentally demonstrate that clauses with a large number of correct literals are beneficial to the runtime of SLS.
We deduce the most beneficial strategies to add high-quality clauses as a preprocessing step.
arXiv Detail & Related papers (2020-05-07T13:33:16Z)
- Learning with Multiple Complementary Labels [94.8064553345801]
A complementary label (CL) simply indicates an incorrect class of an example, but learning with CLs results in multi-class classifiers.
We propose a novel problem setting that allows multiple complementary labels (MCLs) for each example, along with two ways for learning with MCLs.
arXiv Detail & Related papers (2019-12-30T13:50:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.