Projected Model Counting: Beyond Independent Support
- URL: http://arxiv.org/abs/2110.09171v1
- Date: Mon, 18 Oct 2021 10:40:22 GMT
- Title: Projected Model Counting: Beyond Independent Support
- Authors: Jiong Yang, Supratik Chakraborty, Kuldeep S. Meel
- Abstract summary: A key idea used in modern counters is to count models projected on an independent support that is often a small subset of the projection set.
In this paper, we show that contrary to intuition, it can be beneficial to project on variables beyond the projection set.
- Score: 27.606526752377615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The past decade has witnessed a surge of interest in practical techniques for
projected model counting. Despite significant advancements, however,
performance scaling remains the Achilles' heel of this field. A key idea used
in modern counters is to count models projected on an \emph{independent
support} that is often a small subset of the projection set, i.e., the original
set of variables on which we want to project. While this idea has been
effective in scaling performance, the question of whether it can be beneficial
to count models projected on variables beyond the projection set has not been
explored. In
this paper, we study this question and show that contrary to intuition, it can
be beneficial to project on variables beyond the projection set. In
applications such as verification of binarized neural networks, quantification
of information flow, reliability of power grids, etc., a good upper bound of the
projected model count often suffices. We show that in several such cases, we
can identify a set of variables, called upper bound support (UBS), that is not
necessarily a subset of the projection set, and yet counting models projected
on UBS guarantees an upper bound of the true projected model count.
Theoretically, a UBS can be exponentially smaller than the smallest independent
support. Our experiments show that even when this is not the case, UBS-based
projected counting can be more efficient than independent support-based
projected counting, while yielding bounds of very high quality. Based on
extensive experiments, we find
that UBS-based projected counting can solve many problem instances that are
beyond the reach of a state-of-the-art independent support-based projected
model counter.
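To make the definitions concrete, below is a minimal brute-force sketch in plain Python. It is not the paper's algorithm, nor any of the counters it evaluates; the formula, the projection sets, and the helper names are illustrative assumptions. It enumerates models of a tiny CNF to show that projecting on an independent support I ⊆ P preserves the exact projected count, while projecting on a UBS U, which here contains a variable outside P, is guaranteed only to upper-bound it.

```python
from itertools import product

def models(clauses, n):
    """Enumerate all satisfying assignments of a CNF over variables 1..n.
    A clause is a list of nonzero ints; a negative literal negates its variable."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            yield bits

def projected_count(clauses, n, proj):
    """Number of distinct restrictions of the models to the variables in proj."""
    return len({tuple(m[v - 1] for v in proj) for m in models(clauses, n)})

# Toy formula over x1..x4 (a hypothetical example, not from the paper):
#   x4 <-> (x1 AND x2), together with (x1 OR x2 OR x3).
F = [[-4, 1], [-4, 2], [-1, -2, 4],  # x4 <-> (x1 & x2)
     [1, 2, 3]]
n = 4

P = [1, 2, 4]  # projection set
# I = {x1, x2} is an independent support of P: any two models that agree
# on I also agree on x4 (a function of x1, x2), so the count stays exact.
I = [1, 2]
# U = {x1, x2, x3} contains x3, which lies OUTSIDE the projection set,
# yet models agreeing on U agree on P, so U is a UBS for P.
U = [1, 2, 3]

exact = projected_count(F, n, P)
assert projected_count(F, n, I) == exact   # independent support: exact count
assert projected_count(F, n, U) >= exact   # UBS: upper bound only
print(exact, projected_count(F, n, U))     # prints: 4 7
```

In this toy case the UBS overcounts (7 vs. 4) and is larger than the independent support; the paper's point is that on structured instances, such as BNN verification, a UBS can be far smaller than any independent support, which is what makes the upper bound cheap to compute.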
Related papers
- Towards Projected and Incremental Pseudo-Boolean Model Counting [32.92936885170422]
We introduce the first exact Pseudo-Boolean (PB) model counter with support for projected and incremental model counting.
In our evaluations, PBCount2 completed at least 1.40x as many benchmarks as competing methods for projected model counting and at least 1.18x as many for incremental model counting.
arXiv Detail & Related papers (2024-12-19T03:11:33Z) - FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects [55.77542145604758]
FoundationPose is a unified foundation model for 6D object pose estimation and tracking.
Our approach can be instantly applied at test-time to a novel object without fine-tuning.
arXiv Detail & Related papers (2023-12-13T18:28:09Z) - How Predictable Are Large Language Model Capabilities? A Case Study on
BIG-bench [52.11481619456093]
We study the performance prediction problem on experiment records from BIG-bench.
An $R^2$ score greater than 95% indicates the presence of learnable patterns within the experiment records.
We find a subset as informative as BIG-bench Hard for evaluating new model families, while being $3\times$ smaller.
arXiv Detail & Related papers (2023-05-24T09:35:34Z) - Fast Converging Anytime Model Counting [34.295512073482186]
This paper designs a new anytime approach called PartialKC for approximate model counting.
Our empirical analysis demonstrates that PartialKC achieves significant scalability and accuracy over prior state-of-the-art approximate counters.
Interestingly, the empirical results show that PartialKC reaches convergence for many instances and therefore provides exact model counting performance comparable to state-of-the-art exact counters.
arXiv Detail & Related papers (2022-12-19T12:01:28Z) - Multi-Target XGBoostLSS Regression [91.3755431537592]
We present an extension of XGBoostLSS that models multiple targets and their dependencies in a probabilistic regression setting.
Our approach outperforms existing GBMs with respect to runtime and compares well in terms of accuracy.
arXiv Detail & Related papers (2022-10-13T08:26:14Z) - Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z) - Deconfounding Scores: Feature Representations for Causal Effect
Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z) - MASSIVE: Tractable and Robust Bayesian Learning of Many-Dimensional
Instrumental Variable Models [8.271859911016719]
We propose a general and efficient causal inference algorithm that accounts for model uncertainty.
We show that, as long as some of the candidates are (close to) valid, without knowing a priori which ones, they collectively still pose enough restrictions on the target interaction to obtain a reliable causal effect estimate.
arXiv Detail & Related papers (2020-12-18T10:06:55Z) - Learning Ising models from one or multiple samples [26.00403702328348]
We provide guarantees for one-sample estimation, quantifying the estimation error in terms of the metric entropy of a family of interaction matrices.
Our technical approach benefits from sparsifying a model's interaction network, conditioning on subsets of variables that make the dependencies in the resulting conditional distribution sufficiently weak.
arXiv Detail & Related papers (2020-04-20T15:17:05Z) - Particle-Gibbs Sampling For Bayesian Feature Allocation Models [77.57285768500225]
Most widely used MCMC strategies rely on an element-wise Gibbs update of the feature allocation matrix.
We have developed a Gibbs sampler that can update an entire row of the feature allocation matrix in a single move.
This sampler is impractical for models with a large number of features as the computational complexity scales exponentially in the number of features.
We develop a Particle Gibbs sampler that targets the same distribution as the row-wise Gibbs updates, but has computational complexity that only grows linearly in the number of features.
arXiv Detail & Related papers (2020-01-25T22:11:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.