On the Approximability of Weighted Model Integration on DNF Structures
- URL: http://arxiv.org/abs/2002.06726v3
- Date: Mon, 13 Jul 2020 09:27:12 GMT
- Title: On the Approximability of Weighted Model Integration on DNF Structures
- Authors: Ralph Abboud, İsmail İlkan Ceylan, Radoslav Dimitrov
- Abstract summary: We show that weighted model integration on DNF structures can indeed be approximated for a class of weight functions.
Our approximation algorithm is based on three subroutines, each of which can be a weak (i.e., approximate) or a strong (i.e., exact) oracle.
- Score: 13.986963122264632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Weighted model counting (WMC) consists of computing the weighted sum of all
satisfying assignments of a propositional formula. WMC is well-known to be
#P-hard for exact solving, but admits a fully polynomial randomized
approximation scheme (FPRAS) when restricted to DNF structures. In this work,
we study weighted model integration, a generalization of weighted model
counting which involves real variables in addition to propositional variables,
and pose the following question: Does weighted model integration on DNF
structures admit an FPRAS? Building on classical results from approximate
volume computation and approximate weighted model counting, we show that
weighted model integration on DNF structures can indeed be approximated for a
class of weight functions. Our approximation algorithm is based on three
subroutines, each of which can be a weak (i.e., approximate), or a strong
(i.e., exact) oracle, and in all cases, comes along with accuracy guarantees.
We experimentally verify our approach over randomly generated DNF instances of
varying sizes, and show that our algorithm scales to large problem instances,
involving up to 1K variables, which are currently out of reach for existing,
general-purpose weighted model integration solvers.
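The FPRAS for model counting on DNF structures that this work builds on is the classical Karp-Luby estimator. As a minimal illustrative sketch of that underlying idea (unweighted, propositional-only case; this is not the authors' WMI algorithm, and the clause encoding is an assumption for illustration):

```python
import random

def karp_luby_dnf_count(clauses, n_vars, samples=100_000, seed=0):
    """Monte Carlo (Karp-Luby) estimate of the number of satisfying
    assignments of a DNF formula over n_vars Boolean variables.

    Each clause is a dict {var_index: required_bool_value}; a clause
    with k literals is satisfied by 2**(n_vars - k) assignments.
    """
    rng = random.Random(seed)
    # |S_i| = number of assignments satisfying clause i alone.
    sizes = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(sizes)  # counts assignments in clause overlaps multiple times
    hits = 0
    for _ in range(samples):
        # Sample a clause with probability proportional to |S_i| ...
        i = rng.choices(range(len(clauses)), weights=sizes)[0]
        # ... then a uniform assignment satisfying that clause.
        x = {v: rng.random() < 0.5 for v in range(n_vars)}
        x.update(clauses[i])
        # Count the sample only if i is the *first* clause x satisfies,
        # so every satisfying assignment is counted exactly once.
        first = next(j for j, c in enumerate(clauses)
                     if all(x[v] == b for v, b in c.items()))
        if first == i:
            hits += 1
    return total * hits / samples
```

For example, the formula x0 ∨ x1 over two variables has three satisfying assignments, and the estimator converges to 3. The variance of `hits / samples` is bounded because each satisfying assignment belongs to at most `len(clauses)` sampled sets, which is what makes the scheme fully polynomial.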
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z) - Lifted Algorithms for Symmetric Weighted First-Order Model Sampling [7.007419073974192]
We prove the domain-liftability under sampling for the two-variable fragment of first-order logic with counting quantifiers.
We then show that this result continues to hold even in the presence of cardinality constraints.
Our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin.
arXiv Detail & Related papers (2023-08-17T07:40:47Z) - Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z) - Bayesian Learning of Coupled Biogeochemical-Physical Models [28.269731698116257]
Predictive models for marine ecosystems are used for a variety of needs.
Due to sparse measurements and limited understanding of the myriad of ocean processes, there is significant uncertainty.
We develop a Bayesian model learning methodology that allows handling in the space of candidate models and discovery of new models.
arXiv Detail & Related papers (2022-11-12T17:49:18Z) - Git Re-Basin: Merging Models modulo Permutation Symmetries [3.5450828190071655]
We show how simple algorithms can be used to fit large networks in practice.
We provide the first (to our knowledge) demonstration of zero mode connectivity between independently trained models.
We also discuss shortcomings in the linear mode connectivity hypothesis.
arXiv Detail & Related papers (2022-09-11T10:44:27Z) - Partial Counterfactual Identification from Observational and Experimental Data [83.798237968683]
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Collaborative Nonstationary Multivariate Gaussian Process Model [2.362467745272567]
We propose a novel model called the collaborative nonstationary Gaussian process model (CNMGP).
CNMGP allows us to model data in which outputs do not share a common input set, with a computational complexity independent of the size of the inputs and outputs.
We show that our model generally provides better predictive performance than the state-of-the-art, and also provides estimates of time-varying correlations that differ across outputs.
arXiv Detail & Related papers (2021-06-01T18:25:22Z) - Measure Theoretic Weighted Model Integration [4.324021238526106]
Weighted model counting (WMC) is a popular framework to perform probabilistic inference with discrete random variables.
Recently, WMC has been extended to weighted model integration (WMI) in order to additionally handle continuous variables.
We propose a theoretically sound measure theoretic formulation of weighted model integration, which naturally reduces to weighted model counting in the absence of continuous variables.
arXiv Detail & Related papers (2021-03-25T15:11:11Z) - Generalized Matrix Factorization: efficient algorithms for fitting generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize such factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.