Measure Theoretic Weighted Model Integration
- URL: http://arxiv.org/abs/2103.13901v1
- Date: Thu, 25 Mar 2021 15:11:11 GMT
- Title: Measure Theoretic Weighted Model Integration
- Authors: Ivan Miosic, Pedro Zuidberg Dos Martires
- Abstract summary: Weighted model counting (WMC) is a popular framework to perform probabilistic inference with discrete random variables.
Recently, WMC has been extended to weighted model integration (WMI) in order to additionally handle continuous variables.
We propose a theoretically sound measure theoretic formulation of weighted model integration, which naturally reduces to weighted model counting in the absence of continuous variables.
- Score: 4.324021238526106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Weighted model counting (WMC) is a popular framework to perform probabilistic
inference with discrete random variables. Recently, WMC has been extended to
weighted model integration (WMI) in order to additionally handle continuous
variables. At their core, WMI problems consist of computing integrals and sums
over weighted logical formulas. From a theoretical standpoint, WMI has been
formulated by patching the sum over weighted formulas, which is already present
in WMC, with Riemann integration. A more principled approach to integration,
which is rooted in measure theory, is Lebesgue integration. Lebesgue
integration allows one to treat discrete and continuous variables on equal
footing in a principled fashion. We propose a theoretically sound measure
theoretic formulation of weighted model integration, which naturally reduces to
weighted model counting in the absence of continuous variables. Instead of
regarding weighted model integration as an extension of weighted model
counting, WMC emerges as a special case of WMI in our formulation.
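To make the WMC core concrete (the discrete special case that the paper's measure-theoretic WMI formulation recovers): WMC sums the weights of all satisfying assignments of a logical formula, with a model's weight typically the product of per-literal weights. The sketch below is an illustrative brute-force enumeration, not the paper's formulation; the function names and the probabilistic literal-weight scheme are assumptions.

```python
# Minimal brute-force WMC sketch (illustrative; names and the
# per-literal weight scheme are assumptions, not from the paper).
from itertools import product

def wmc(formula, variables, weight):
    """Sum the weights of all satisfying assignments of `formula`.

    `weight[v]` is the weight of the positive literal v; the negative
    literal gets 1 - weight[v], so weights act like probabilities.
    """
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            # Weight of a model = product of its literal weights.
            w = 1.0
            for v, val in assignment.items():
                w *= weight[v] if val else (1.0 - weight[v])
            total += w
    return total

# Example: P(a or b) with independent P(a)=0.3, P(b)=0.6.
p = wmc(lambda m: m["a"] or m["b"], ["a", "b"], {"a": 0.3, "b": 0.6})
# 0.3*0.6 + 0.3*0.4 + 0.7*0.6 = 0.72
```

WMI replaces this finite sum with an integral over the continuous variables; the paper's point is that, under a Lebesgue formulation, the sum and the integral are instances of one integral with respect to a suitable measure.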
Related papers
- Model aggregation: minimizing empirical variance outperforms minimizing
empirical error [0.29008108937701327]
We propose a data-driven framework that aggregates predictions from diverse models into a single, more accurate output.
It is non-intrusive - treating models as black-box functions - model-agnostic, requires minimal assumptions, and can combine outputs from a wide range of models.
We show how it successfully integrates traditional solvers with machine learning models to improve both robustness and accuracy.
arXiv Detail & Related papers (2024-09-25T18:33:21Z) - EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) shows outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z) - Tempered Calculus for ML: Application to Hyperbolic Model Embedding [70.61101116794549]
Most mathematical distortions used in ML are fundamentally integral in nature.
In this paper, we unveil a grounded theory and tools which can help improve these distortions to better cope with ML requirements.
We show how to apply it to a problem that has recently gained traction in ML: hyperbolic embeddings with a "cheap" and accurate encoding.
arXiv Detail & Related papers (2024-02-06T17:21:06Z) - Distributional Learning of Variational AutoEncoder: Application to
Synthetic Data Generation [0.7614628596146602]
We propose a new approach that expands the model capacity without sacrificing the computational advantages of the VAE framework.
Our VAE model's decoder is composed of an infinite mixture of asymmetric Laplace distribution.
We apply the proposed model to synthetic data generation, and particularly, our model demonstrates superiority in easily adjusting the level of data privacy.
arXiv Detail & Related papers (2023-02-22T11:26:50Z) - Super-model ecosystem: A domain-adaptation perspective [101.76769818069072]
This paper attempts to establish the theoretical foundation for the emerging super-model paradigm via domain adaptation.
Super-model paradigms help reduce computational and data costs and carbon emissions, which is critical to the AI industry.
arXiv Detail & Related papers (2022-08-30T09:09:43Z) - ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely, the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z) - Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma
Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in a principled manner for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z) - Eigenstate entanglement in integrable collective spin models [0.0]
The average entanglement entropy (EE) of the energy eigenstates in non-vanishing partitions has been recently proposed as a diagnostic of integrability in quantum many-body systems.
We numerically demonstrate that the aforementioned average EE in the thermodynamic limit is universal for all parameter values of the LMG model.
arXiv Detail & Related papers (2021-08-22T23:00:04Z) - Bayesian Quadrature on Riemannian Data Manifolds [79.71142807798284]
Riemannian manifolds provide a principled way to model nonlinear geometric structure inherent in data.
However, these operations are typically computationally demanding.
In particular, we focus on Bayesian quadrature (BQ) to numerically compute integrals over normal laws.
We show that by leveraging both prior knowledge and an active exploration scheme, BQ significantly reduces the number of required evaluations.
arXiv Detail & Related papers (2021-02-12T17:38:04Z) - On the Approximability of Weighted Model Integration on DNF Structures [13.986963122264632]
We show that weighted model integration on DNF structures can indeed be approximated for a class of weight functions.
Our approximation algorithm is based on three subroutines, each of which can be a weak (i.e., approximate) or a strong (i.e., exact) oracle.
arXiv Detail & Related papers (2020-02-17T00:29:41Z) - Monte Carlo Anti-Differentiation for Approximate Weighted Model
Integration [13.14502456511936]
We introduce Monte Carlo anti-differentiation (MCAD), which computes Monte Carlo approximations of anti-derivatives.
Our experiments show that equipping existing WMI solvers with MCAD yields a fast yet reliable approximate inference scheme.
arXiv Detail & Related papers (2020-01-13T23:45:10Z)
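The MCAD entry above rests on a simple idea: an anti-derivative F(x) = \int_a^x f(t) dt can be approximated by averaging function samples instead of integrating symbolically. The sketch below shows only that generic Monte Carlo estimator, not the paper's actual MCAD algorithm; all names are hypothetical.

```python
# Generic Monte Carlo estimate of an anti-derivative (illustrative
# sketch of the idea behind MCAD, not the paper's algorithm).
import random

def mc_antiderivative(f, a, x, n=100_000, seed=0):
    """Estimate F(x) = integral of f from a to x via uniform sampling."""
    rng = random.Random(seed)
    width = x - a
    # Mean of f over n uniform samples, scaled by interval width.
    total = sum(f(a + width * rng.random()) for _ in range(n))
    return width * total / n

# Example: f(t) = 2t has exact anti-derivative x**2 on [0, x].
est = mc_antiderivative(lambda t: 2 * t, 0.0, 1.0)
```

The estimate converges at the usual Monte Carlo rate of O(1/sqrt(n)), which is why such schemes trade exactness for speed and robustness in WMI solvers.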
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.