Towards Projected and Incremental Pseudo-Boolean Model Counting
- URL: http://arxiv.org/abs/2412.14485v2
- Date: Fri, 20 Dec 2024 15:18:44 GMT
- Title: Towards Projected and Incremental Pseudo-Boolean Model Counting
- Authors: Suwei Yang, Kuldeep S. Meel
- Abstract summary: We introduce the first exact Pseudo-Boolean (PB) model counter with support for projected and incremental model counting.
In our evaluations, PBCount2 completed at least 1.40x the number of benchmarks of competing methods for projected model counting and at least 1.18x that of competing methods for incremental model counting.
- Abstract: Model counting is a fundamental task that involves determining the number of satisfying assignments to a logical formula, typically in conjunctive normal form (CNF). While CNF model counting has received extensive attention over recent decades, interest in Pseudo-Boolean (PB) model counting is only now emerging, partly due to the greater flexibility of PB formulas. As such, we observed feature gaps in existing PB counters, such as a lack of support for projected and incremental settings, which could hinder adoption. In this work, our main contribution is the introduction of the PB model counter PBCount2, the first exact PB model counter with support for projected and incremental model counting. Our counter, PBCount2, uses our Least Occurrence Weighted Min Degree (LOW-MD) computation ordering heuristic to support projected model counting and a cache mechanism to enable incremental model counting. In our evaluations, PBCount2 completed at least 1.40x the number of benchmarks of competing methods for projected model counting and at least 1.18x that of competing methods for incremental model counting.
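The underlying task can be made concrete with a small sketch. The brute-force counter below (an illustration only, not PBCount2's algorithm; all function and parameter names here are our own) enumerates 0/1 assignments against PB constraints of the form sum(coef * x) >= bound, and optionally performs projected counting by collecting distinct projections of the satisfying assignments onto a chosen variable set:

```python
from itertools import product

def count_models(constraints, num_vars, project_vars=None):
    """Brute-force PB model counter, for illustration only.

    Each constraint is (terms, bound), where terms is a list of
    (coef, var_index) pairs interpreted as sum(coef * x[var]) >= bound
    over 0/1 variables. With project_vars, counts distinct projections
    of satisfying assignments onto those variables instead.
    """
    projections = set()
    count = 0
    for assignment in product([0, 1], repeat=num_vars):
        if all(sum(c * assignment[v] for c, v in terms) >= bound
               for terms, bound in constraints):
            if project_vars is None:
                count += 1
            else:
                projections.add(tuple(assignment[v] for v in project_vars))
    return count if project_vars is None else len(projections)

# x0 + x1 + x2 >= 2 over three Boolean variables
constraints = [([(1, 0), (1, 1), (1, 2)], 2)]
print(count_models(constraints, 3))                    # 4 satisfying assignments
print(count_models(constraints, 3, project_vars=[0]))  # 2 distinct projections onto x0
```

Exhaustive enumeration is exponential in the number of variables; exact counters like PBCount2 avoid it via compilation and ordering heuristics, but the inputs and outputs are as shown here.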
Related papers
- Model Counting in the Wild [31.05707402954459]
We conduct a rigorous assessment of the scalability of model counters in the wild.
We evaluate six state-of-the-art model counters on these instances to assess scalability and runtime performance.
Our analysis highlights the challenges and opportunities for portfolio-based approaches in model counting.
arXiv Detail & Related papers (2024-08-13T17:49:46Z)
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP)
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- PBCounter: Weighted Model Counting on Pseudo-Boolean Formulas [18.02313966511695]
We implement a weighted PB counting tool, PBCounter.
We compare PBCounter with the state-of-the-art weighted model counters SharpSAT-TD, ExactMC, D4, and ADDMC.
The experiments on three domains of benchmarks show that PBCounter is superior to the model counters on CNF formulas.
arXiv Detail & Related papers (2023-12-26T04:03:23Z)
- Engineering an Exact Pseudo-Boolean Model Counter [38.901687092266094]
We propose the first exact Pseudo-Boolean model counter, PBCount, which relies on a knowledge compilation approach via algebraic decision diagrams.
Our empirical evaluation shows that PBCount can compute counts for 1513 instances while the current state-of-the-art approach could only handle 1013 instances.
arXiv Detail & Related papers (2023-12-19T17:14:06Z)
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z)
- Fast Converging Anytime Model Counting [34.295512073482186]
This paper designs a new anytime approach called PartialKC for approximate model counting.
Our empirical analysis demonstrates that PartialKC achieves significant gains in scalability and accuracy over prior state-of-the-art approximate counters.
Interestingly, the empirical results show that PartialKC reaches convergence for many instances and therefore provides exact model counting performance comparable to state-of-the-art exact counters.
arXiv Detail & Related papers (2022-12-19T12:01:28Z)
- CounTR: Transformer-based Generalised Visual Counting [94.54725247039441]
We develop a computational model for counting the number of objects from arbitrary semantic categories, using an arbitrary number of "exemplars".
We conduct thorough ablation studies on the large-scale counting benchmark, e.g. FSC-147, and demonstrate state-of-the-art performance in both zero-shot and few-shot settings.
arXiv Detail & Related papers (2022-08-29T17:02:45Z)
- Low-Rank Constraints for Fast Inference in Structured Models [110.38427965904266]
This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces.
arXiv Detail & Related papers (2022-01-08T00:47:50Z)
- Projected Model Counting: Beyond Independent Support [27.606526752377615]
A key idea used in modern counters is to count models projected on an independent support that is often a small subset of the projection set.
In this paper, we show that contrary to intuition, it can be beneficial to project on variables beyond the projection set.
arXiv Detail & Related papers (2021-10-18T10:40:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.