DPMC: Weighted Model Counting by Dynamic Programming on Project-Join Trees
- URL: http://arxiv.org/abs/2008.08748v1
- Date: Thu, 20 Aug 2020 03:09:09 GMT
- Title: DPMC: Weighted Model Counting by Dynamic Programming on Project-Join Trees
- Authors: Jeffrey M. Dudek, Vu H. N. Phan, Moshe Y. Vardi
- Abstract summary: We propose a unifying dynamic-programming framework to compute exact literal-weighted model counts of formulas in conjunctive normal form.
At the center of our framework are project-join trees, which specify efficient project-join orders.
We show that our dynamic-programming model-counting framework DPMC is competitive with the state-of-the-art exact weighted model counters cachet, c2d, d4, and miniC2D.
- Score: 26.25724674611307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a unifying dynamic-programming framework to compute exact
literal-weighted model counts of formulas in conjunctive normal form. At the
center of our framework are project-join trees, which specify efficient
project-join orders to apply additive projections (variable eliminations) and
joins (clause multiplications). In this framework, model counting is performed
in two phases. First, the planning phase constructs a project-join tree from a
formula. Second, the execution phase computes the model count of the formula,
employing dynamic programming as guided by the project-join tree. We
empirically evaluate various methods for the planning phase and compare
constraint-satisfaction heuristics with tree-decomposition tools. We also
investigate the performance of different data structures for the execution
phase and compare algebraic decision diagrams with tensors. We show that our
dynamic-programming model-counting framework DPMC is competitive with the
state-of-the-art exact weighted model counters cachet, c2d, d4, and miniC2D.
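The two-phase framework can be made concrete with a minimal Python sketch of the execution phase. This is not the DPMC implementation (which uses algebraic decision diagrams or tensors as its data structures); the dict-based factor representation, the `execute` function, and the hand-built project-join tree below are illustrative assumptions only:

```python
from itertools import product

def clause_factor(clause):
    """Truth table of one clause: weight 1.0 on satisfying assignments, else 0."""
    vs = sorted({abs(l) for l in clause})
    table = {}
    for vals in product([False, True], repeat=len(vs)):
        a = dict(zip(vs, vals))
        sat = any(a[abs(l)] == (l > 0) for l in clause)
        table[tuple(sorted(a.items()))] = 1.0 if sat else 0.0
    return table

def join(f, g):
    """Join (clause multiplication): multiply factors on consistent assignments."""
    out = {}
    for a, wa in f.items():
        for b, wb in g.items():
            da, db = dict(a), dict(b)
            if all(da[v] == db[v] for v in da.keys() & db.keys()):
                out[tuple(sorted({**da, **db}.items()))] = wa * wb
    return out

def project(f, v, weight):
    """Additive projection: sum out variable v, applying its literal weights."""
    out = {}
    for a, w in f.items():
        d = dict(a)
        val = d.pop(v)
        k = tuple(sorted(d.items()))
        out[k] = out.get(k, 0.0) + w * weight[(v, val)]
    return out

def execute(node, clauses, weight):
    """Dynamic programming guided by a project-join tree.

    A leaf is a clause index; an internal node is (vars_to_project, children).
    """
    if isinstance(node, int):
        return clause_factor(clauses[node])
    to_project, children = node
    f = {(): 1.0}
    for child in children:
        f = join(f, execute(child, clauses, weight))
    for v in to_project:
        f = project(f, v, weight)
    return f

# (x1 or x2) and (not x2 or x3), all literal weights 0.5
clauses = [[1, 2], [-2, 3]]
weight = {(v, b): 0.5 for v in (1, 2, 3) for b in (True, False)}
# x1 and x3 are eliminated just above their only clause; x2 at the root.
tree = ({2}, [({1}, [0]), ({3}, [1])])
count = execute(tree, clauses, weight)[()]
print(count)  # 4 satisfying assignments x 0.5^3 = 0.5
```

The project-join tree encodes the key invariant: a variable is eliminated only after every clause mentioning it has been joined, so intermediate factors stay small. Finding a tree that minimizes the largest intermediate factor is exactly what the planning phase (via constraint-satisfaction heuristics or tree decompositions) optimizes.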
Related papers
- Terminating Differentiable Tree Experts [77.2443883991608]
We propose a neuro-symbolic Differentiable Tree Machine that learns tree operations using a combination of transformers and Representation Products.
We first replace the series of different transformer layers used at every step by introducing a mixture of experts.
We additionally propose a new termination algorithm that lets the model automatically choose how many steps to take.
arXiv Detail & Related papers (2024-07-02T08:45:38Z)
- Decomposing and Editing Predictions by Modeling Model Computation [75.37535202884463]
We introduce a task called component modeling.
The goal of component modeling is to decompose an ML model's prediction in terms of its components.
We present COAR, a scalable algorithm for estimating component attributions.
arXiv Detail & Related papers (2024-04-17T16:28:08Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time, with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- Tree-Planner: Efficient Close-loop Task Planning with Large Language Models [63.06270302774049]
Tree-Planner reframes task planning with Large Language Models into three distinct phases.
Tree-Planner achieves state-of-the-art performance while maintaining high efficiency.
arXiv Detail & Related papers (2023-10-12T17:59:50Z)
- Single-Stage Visual Relationship Learning using Conditional Queries [60.90880759475021]
TraCQ is a new formulation for scene graph generation that avoids the multi-task learning problem and the entity pair distribution.
We employ a DETR-based encoder-decoder with conditional queries to significantly reduce the entity label space as well.
Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods, it also beats many state-of-the-art two-stage methods on the Visual Genome dataset.
arXiv Detail & Related papers (2023-06-09T06:02:01Z)
- Simulation-based inference of Bayesian hierarchical models while checking for model misspecification [0.0]
This paper presents recent methodological advances to perform simulation-based inference (SBI) of a general class of Bayesian hierarchical models (BHMs).
Our approach is based on a two-step framework. First, the latent function that appears as the second layer of the BHM is inferred and used to diagnose possible model misspecification.
Second, target parameters of the trusted model are inferred via SBI. As a proof of concept, we apply our framework to a prey-predator model built upon the Lotka-Volterra equations and involving complex observational processes.
arXiv Detail & Related papers (2022-09-22T14:51:54Z)
- Learning deep autoregressive models for hierarchical data [0.6445605125467573]
We propose a model for hierarchically structured data as an extension of the stochastic temporal convolutional network (STCN).
We evaluate the proposed model on two different types of sequential data: speech and handwritten text.
arXiv Detail & Related papers (2021-04-28T15:58:45Z)
- SeqROCTM: A Matlab toolbox for the analysis of Sequence of Random Objects driven by Context Tree Models [0.0]
A new class of processes, namely sequences of random objects driven by context tree models, has been introduced to model such relations in the context of auditory statistical learning.
This paper introduces a freely available Matlab toolbox (SeqROCTM) that implements this new class of processes and three model selection procedures to make inference on it.
arXiv Detail & Related papers (2020-09-08T15:28:32Z)
- CoSE: Compositional Stroke Embeddings [52.529172734044664]
We present a generative model for complex free-form structures such as stroke-based drawing tasks.
Our approach is suitable for interactive use cases such as auto-completing diagrams.
arXiv Detail & Related papers (2020-06-17T15:22:54Z)
- Tree-AMP: Compositional Inference with Tree Approximate Message Passing [23.509275850721778]
Tree-AMP is a Python package for compositional inference in high-dimensional tree-structured models.
The package provides a unifying framework to study several approximate message passing algorithms.
arXiv Detail & Related papers (2020-04-03T13:51:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.