An Experimental Study of Permanently Stored Learned Clauses
- URL: http://arxiv.org/abs/2110.14187v1
- Date: Wed, 27 Oct 2021 05:36:16 GMT
- Title: An Experimental Study of Permanently Stored Learned Clauses
- Authors: Sima Jamali and David Mitchell
- Abstract summary: We study the permanent clause store in MapleLCMDistChronoBT.
We show that alternate size- and LBD-based criteria improve performance, while still having large permanent stores.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern CDCL SAT solvers learn clauses rapidly, and an important heuristic is
the clause deletion scheme. Most current solvers have two (or more) stores of
clauses. One has "valuable" clauses which are never deleted. Most learned
clauses are added to the other, with an aggressive deletion strategy to
restrict its size. Recent solvers in the MapleSAT family have a comparatively
complex deletion scheme and perform well. Many solvers store only binary
clauses permanently, but MapleLCMDistChronoBT stores clauses with small LBD
permanently. We report an experimental study of the permanent clause store in
MapleLCMDistChronoBT. We observe that this store can get quite large, but
several methods for limiting its size reduced performance. We also show that
alternate size- and LBD-based criteria improve performance, while still having
large permanent stores. In particular, saving clauses up to size 8, and adding
small numbers of high-centrality clauses, both improved performance, with the
best improvement using both methods.
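The retention rules described in the abstract can be read as a simple predicate over each learned clause. Below is a minimal sketch in that spirit; the field names, the LBD threshold, and the centrality cutoff and budget are illustrative assumptions, not values taken from MapleLCMDistChronoBT's source.

```python
from dataclasses import dataclass

@dataclass
class Clause:
    literals: list            # the clause's literals
    lbd: int                  # literal block distance at learning time
    centrality: float = 0.0   # e.g. betweenness centrality of its variables

# Hypothetical thresholds mirroring the criteria in the abstract (small-LBD
# tier, size up to 8, a small budget of high-centrality clauses); the actual
# values used by the solver and the paper's variants may differ.
LBD_LIMIT = 3
SIZE_LIMIT = 8
CENTRALITY_CUTOFF = 0.9
CENTRALITY_BUDGET = 1000

def keep_permanently(clause: Clause, n_central_kept: int) -> bool:
    """Decide whether a learned clause goes into the permanent store."""
    if clause.lbd <= LBD_LIMIT:
        return True                          # LBD-based criterion
    if len(clause.literals) <= SIZE_LIMIT:
        return True                          # size-based criterion (up to size 8)
    if clause.centrality >= CENTRALITY_CUTOFF and n_central_kept < CENTRALITY_BUDGET:
        return True                          # a few high-centrality clauses
    return False
```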
Related papers
- Training Large Language Models to Reason in a Continuous Latent Space [84.5618790930725]
We introduce a new paradigm Coconut (Chain of Continuous Thought) to explore the potential of large language models (LLMs) reasoning in an unrestricted latent space.
Experiments show that Coconut can effectively augment the LLM on several reasoning tasks.
These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
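As a rough, hedged illustration of latent-space reasoning (not the authors' implementation), the toy sketch below feeds the model's hidden state back in as the next input embedding instead of decoding a discrete token at each step; the GRU cell stands in for an LLM and all names are invented.

```python
import torch
import torch.nn as nn

dim = 32
cell = nn.GRUCell(dim, dim)         # toy stand-in for an LLM's forward pass
to_embedding = nn.Linear(dim, dim)  # maps a hidden state back to input space

def latent_reasoning(prompt_embeddings, num_thought_steps=4):
    """Run a few 'continuous thoughts' before any answer token is produced."""
    h = torch.zeros(1, dim)
    # Encode the prompt, embedding by embedding.
    for e in prompt_embeddings:
        h = cell(e, h)
    # Chain of continuous thought: feed the hidden state back as the next
    # input embedding instead of sampling a discrete token.
    thought = to_embedding(h)
    for _ in range(num_thought_steps):
        h = cell(thought, h)
        thought = to_embedding(h)
    return h  # this final state would then be decoded into answer tokens

answer_state = latent_reasoning([torch.randn(1, dim) for _ in range(5)])
print(answer_state.shape)
```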
arXiv Detail & Related papers (2024-12-09T18:55:56Z) - Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs [61.40047491337793]
We present Hierarchical cOntext MERging (HOMER), a new training-free scheme designed to overcome the context-length limit of large language models.
HOMER uses a divide-and-conquer algorithm, dividing long inputs into manageable chunks.
A token reduction technique precedes each merging, ensuring memory usage efficiency.
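A hedged sketch of the divide-and-conquer idea: split the long input into chunks, reduce tokens before each pairwise merge, and repeat until a single chunk remains. The scoring rule and names are placeholders, not HOMER's actual procedure.

```python
def reduce_tokens(chunk, keep_ratio=0.75):
    """Placeholder token reduction: keep a fixed fraction of tokens.
    HOMER scores tokens with model-internal signals; here we simply keep
    the longest ones, preserving their original order."""
    k = max(1, int(len(chunk) * keep_ratio))
    keep_idx = sorted(
        sorted(range(len(chunk)), key=lambda i: len(chunk[i]), reverse=True)[:k]
    )
    return [chunk[i] for i in keep_idx]

def hierarchical_merge(tokens, chunk_size=8):
    # Divide the long input into manageable chunks.
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    # Merge neighbouring chunks level by level, reducing tokens before each
    # merge so the working size stays roughly constant per level.
    while len(chunks) > 1:
        merged = []
        for i in range(0, len(chunks), 2):
            pair = [reduce_tokens(c) for c in chunks[i:i + 2]]
            merged.append([t for c in pair for t in c])
        chunks = merged
    return chunks[0]

print(hierarchical_merge("the quick brown fox jumps over the lazy dog again and again".split()))
```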
arXiv Detail & Related papers (2024-04-16T06:34:08Z) - Truly No-Regret Learning in Constrained MDPs [61.78619476991494]
We propose a model-based primal-dual algorithm to learn in an unknown CMDP.
We prove that our algorithm achieves sublinear regret without error cancellations.
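To make "error cancellations" concrete: with a cost constraint V_c <= b over K episodes, the usual cumulative violation lets an episode that over-satisfies the constraint offset one that violates it, while a cancellation-free measure keeps only the positive parts. This is the generic formulation in standard CMDP notation, assumed here for illustration rather than quoted from the paper.

```latex
% Cumulative constraint violation with cancellations:
\mathrm{Viol}_{\text{weak}}(K) \;=\; \sum_{k=1}^{K} \bigl( V_c^{\pi_k} - b \bigr)

% Cancellation-free ("strong") violation, keeping only the positive parts:
\mathrm{Viol}_{\text{strong}}(K) \;=\; \sum_{k=1}^{K} \bigl[\, V_c^{\pi_k} - b \,\bigr]_{+}
```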
arXiv Detail & Related papers (2024-02-24T09:47:46Z) - The Ups and Downs of Large Language Model Inference with Vocabulary Trimming by Language Heuristics [74.99898531299148]
This research examines vocabulary trimming (VT), which restricts embedding entries to the language of interest to improve time and memory efficiency.
We apply two trimming approaches - Unicode-based script filtering and corpus-based selection - to models of different language families and sizes.
It is found that VT reduces the memory usage of small models by nearly 50% and has an upper bound of 25% improvement in generation speed.
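A minimal sketch of the two trimming approaches on a toy vocabulary, assuming a Unicode-script filter and a corpus-frequency filter; the function names and thresholds are illustrative, and the paper works on full multilingual LLM vocabularies rather than plain string sets.

```python
import unicodedata
from collections import Counter

def script_filter(vocab, allowed_scripts=("LATIN",)):
    """Unicode-based trimming: keep tokens whose letters all belong to the
    allowed scripts (digits and punctuation are kept as-is)."""
    def ok(token):
        for ch in token:
            if ch.isalpha():
                name = unicodedata.name(ch, "")
                if not any(s in name for s in allowed_scripts):
                    return False
        return True
    return {t for t in vocab if ok(t)}

def corpus_filter(vocab, corpus_tokens, min_count=1):
    """Corpus-based trimming: keep tokens that actually occur in a corpus
    of the language of interest."""
    counts = Counter(corpus_tokens)
    return {t for t in vocab if counts[t] >= min_count}

vocab = {"hello", "мир", "world", "世界", "42"}
print(script_filter(vocab))                       # drops non-Latin entries
print(corpus_filter(vocab, ["hello", "world"]))   # keeps only observed tokens
```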
arXiv Detail & Related papers (2023-11-16T09:35:50Z) - Augmenting Language Models with Long-Term Memory [142.04940250657637]
Existing large language models (LLMs) can only afford fixed-size inputs due to the input length limit.
We propose a framework, Language Models Augmented with Long-Term Memory (LongMem), which enables LLMs to memorize long history.
arXiv Detail & Related papers (2023-06-12T15:13:39Z) - Explaining SAT Solving Using Causal Reasoning [30.469229388827443]
We introduce CausalSAT, which employs causal reasoning to gain insights into the functioning of modern SAT solvers.
We use CausalSAT to quantitatively verify hypotheses previously regarded as "rules of thumb" or empirical findings.
arXiv Detail & Related papers (2023-06-09T22:53:16Z) - SC-Block: Supervised Contrastive Blocking within Entity Resolution Pipelines [75.5113002732746]
This paper presents SC-Block, a blocking method that utilizes supervised contrastive learning for positioning records in the embedding space.
We benchmark SC-Block against eight state-of-the-art blocking methods.
For measuring the overall runtime, we determine candidate sets with 99.5% pair completeness and pass them to the matcher.
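A hedged sketch of the blocking step: embed records, then form the candidate set from nearest neighbours in the embedding space. The supervised contrastive training of the encoder is omitted, and the encoder below is a self-contained placeholder rather than SC-Block's actual model.

```python
import numpy as np

def embed(records):
    """Placeholder encoder. SC-Block would use a transformer fine-tuned with
    a supervised contrastive loss; here we hash characters so the example
    runs without any model."""
    dim = 64
    out = np.zeros((len(records), dim))
    for i, text in enumerate(records):
        for ch in text.lower():
            out[i, hash(ch) % dim] += 1.0
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-9)

def block(records_a, records_b, k=2):
    """Candidate pairs: for each record in A, its k nearest neighbours in B
    by cosine similarity in the embedding space."""
    ea, eb = embed(records_a), embed(records_b)
    sims = ea @ eb.T
    candidates = []
    for i in range(len(records_a)):
        for j in np.argsort(-sims[i])[:k]:
            candidates.append((i, int(j)))
    return candidates

print(block(["iPhone 13 Pro 128GB", "Galaxy S22"],
            ["Apple iPhone 13 Pro (128 GB)", "Samsung Galaxy S22", "Pixel 7"]))
```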
arXiv Detail & Related papers (2023-03-06T13:49:41Z) - Building Concise Logical Patterns by Constraining Tsetlin Machine Clause Size [11.43224924974832]
This paper introduces a novel variant of Tsetlin Machine (TM) learning: Clause Size Constrained TMs (CSC-TMs).
As soon as a clause includes more literals than the constraint allows, it starts expelling literals.
Our results show that CSC-TM maintains accuracy with up to 80 times fewer literals.
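A rough sketch of the size constraint: when learning would push a clause past the allowed number of literals, the clause expels a literal before including a new one. The strength bookkeeping and names below are invented for illustration; real Tsetlin Machines realize this through automaton state updates.

```python
def include_literal(clause, new_literal, max_size):
    """clause: dict mapping a literal to how strongly it has been reinforced.
    A freshly included literal starts with strength 1."""
    if new_literal in clause:
        clause[new_literal] += 1
        return clause
    if len(clause) >= max_size:
        # Size constraint reached: expel the most weakly reinforced literal.
        weakest = min(clause, key=clause.get)
        del clause[weakest]
    clause[new_literal] = 1
    return clause

# Toy usage: clause limited to 3 literals.
c = {"x1": 5, "~x2": 2, "x3": 4}
include_literal(c, "x4", max_size=3)
print(c)   # '~x2' (the weakest literal) was expelled in favour of 'x4'
```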
arXiv Detail & Related papers (2023-01-19T17:37:48Z) - Incorporating Multi-armed Bandit with Local Search for MaxSAT [16.371916145216737]
We consider two generalizations of the MaxSAT problem: Partial MaxSAT (PMS) and Weighted PMS (WPMS).
We propose a local search algorithm for these problems, called BandHS, which applies two multi-armed bandits to guide the search directions when escaping local optima.
These two bandits can improve the algorithm's search ability in both feasible and infeasible solution spaces.
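A minimal sketch of a bandit choosing among escape operators when local search is stuck, using the standard UCB1 rule; the operator names and the reward signal are placeholders, not BandHS's actual arms or reward definition.

```python
import math
import random

class UCB1:
    def __init__(self, arms):
        self.arms = arms
        self.counts = {a: 0 for a in arms}
        self.rewards = {a: 0.0 for a in arms}
        self.total = 0

    def select(self):
        # Play each arm once first, then pick the arm with the best UCB1 index.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        return max(self.arms, key=lambda a:
                   self.rewards[a] / self.counts[a]
                   + math.sqrt(2 * math.log(self.total) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.total += 1
        self.rewards[arm] += reward

# Hypothetical escape operators a local-search MaxSAT solver might choose from.
bandit = UCB1(["flip_random_in_unsat_clause", "flip_best_scoring_var", "random_walk"])
for step in range(100):
    op = bandit.select()
    improvement = random.random()   # stand-in for the cost improvement after escaping
    bandit.update(op, improvement)
print(bandit.counts)
```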
arXiv Detail & Related papers (2022-11-29T08:19:26Z) - Too much information: CDCL solvers need to forget and perform restarts [0.0]
Conflict-driven clause learning (CDCL) is a remarkably successful paradigm for solving the satisfiability problem of propositional logic.
This paper demonstrates, quite surprisingly, that clause learning can not only improve the runtime but can oftentimes also deteriorate it dramatically.
arXiv Detail & Related papers (2022-02-01T10:16:04Z) - On the Effect of Learned Clauses on Stochastic Local Search [0.0]
Conflict-driven clause learning (CDCL) and stochastic local search (SLS) are used in SAT solvers.
We experimentally demonstrate that clauses with a large number of correct literals are beneficial to the runtime of SLS.
We deduce the most beneficial strategies to add high-quality clauses as a preprocessing step.
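A small sketch of the clause-quality notion above: given a reference assignment, a learned clause's quality is the number of its literals satisfied by that assignment, and only high-quality clauses would be added before running SLS. The DIMACS-style encoding and threshold are illustrative assumptions.

```python
def correct_literals(clause, reference):
    """clause: iterable of ints in DIMACS style (positive/negative variable ids).
    reference: dict variable id -> bool (the reference assignment)."""
    return sum(1 for lit in clause
               if reference[abs(lit)] == (lit > 0))

def select_high_quality(learned_clauses, reference, min_correct=2):
    """Keep only clauses with at least `min_correct` satisfied literals,
    as a preprocessing step before handing the formula to an SLS solver."""
    return [c for c in learned_clauses
            if correct_literals(c, reference) >= min_correct]

reference = {1: True, 2: False, 3: True}
learned = [[1, -2, 3], [-1, 2], [2, 3]]
print(select_high_quality(learned, reference))   # [[1, -2, 3]]
```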
arXiv Detail & Related papers (2020-05-07T13:33:16Z)