Monte Carlo Anti-Differentiation for Approximate Weighted Model
Integration
- URL: http://arxiv.org/abs/2001.04566v1
- Date: Mon, 13 Jan 2020 23:45:10 GMT
- Title: Monte Carlo Anti-Differentiation for Approximate Weighted Model
Integration
- Authors: Pedro Zuidberg Dos Martires, Samuel Kolb
- Abstract summary: We introduce Monte Carlo anti-differentiation (MCAD), which computes MC approximations of anti-derivatives.
Our experiments show that equipping existing WMI solvers with MCAD yields a fast yet reliable approximate inference scheme.
- Score: 13.14502456511936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic inference in the hybrid domain, i.e. inference over
discrete-continuous domains, requires tackling two well-known #P-hard problems:
1) weighted model counting (WMC) over discrete variables and 2) integration
over continuous variables. For both of these problems inference techniques have
been developed separately in order to manage their #P-hardness, such as
knowledge compilation for WMC and Monte Carlo (MC) methods for (approximate)
integration in the continuous domain. Weighted model integration (WMI), the
extension of WMC to the hybrid domain, has been proposed as a formalism to
study probabilistic inference over discrete and continuous variables alike.
Recently developed WMI solvers have focused on exploiting structure in WMI
problems, for which they rely on symbolic integration to find the primitive of
an integrand, i.e. to perform anti-differentiation. To combine these advances
with state-of-the-art Monte Carlo integration techniques, we introduce
Monte Carlo anti-differentiation (MCAD), which computes MC
approximations of anti-derivatives. In our empirical evaluation we substitute
the exact symbolic integration backend in an existing WMI solver with an MCAD
backend. Our experiments show that equipping existing WMI solvers with
MCAD yields a fast yet reliable approximate inference scheme.
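The core idea can be sketched in a few lines: instead of symbolically finding a primitive of the integrand, one approximates the anti-derivative F(x) = ∫_a^x f(t) dt with a plain Monte Carlo estimate over the interval. The sketch below is an illustrative minimal variant, not the paper's implementation (which plugs into a WMI solver's integration backend); the function name `mc_antiderivative` and the uniform sampling scheme are our assumptions.

```python
import random

def mc_antiderivative(f, a, x, n_samples=100_000):
    """Plain Monte Carlo estimate of the anti-derivative
    F(x) = integral of f(t) dt over [a, x].

    Draws uniform samples on [a, x], averages f over them,
    and scales by the interval width.
    """
    width = x - a
    total = sum(f(a + random.random() * width) for _ in range(n_samples))
    return width * total / n_samples

# Example: for f(t) = t the exact primitive is F(x) = x**2 / 2,
# so the estimate of F(1) should be close to 0.5.
random.seed(0)
estimate = mc_antiderivative(lambda t: t, 0.0, 1.0)
```

As with any MC estimator, the error shrinks as O(1/sqrt(n_samples)); trading exactness for this kind of cheap approximation is what makes the scheme fast yet (probabilistically) reliable.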
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-sphere algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- Enhancing SMT-based Weighted Model Integration by Structure Awareness [10.812681884889697]
Weighted Model Integration (WMI) emerged as a unifying formalism for probabilistic inference in hybrid domains.
We develop an algorithm that combines SMT-based enumeration, an efficient technique in formal verification, with an effective encoding of the problem structure.
arXiv Detail & Related papers (2023-02-13T08:55:12Z)
- SMT-based Weighted Model Integration with Structure Awareness [18.615397594541665]
We develop an algorithm that combines SMT-based enumeration, an efficient technique in formal verification, with an effective encoding of the problem structure.
This allows our algorithm to avoid generating redundant models, resulting in substantial computational savings.
arXiv Detail & Related papers (2022-06-28T09:46:17Z)
- Learning Multimodal VAEs through Mutual Supervision [72.77685889312889]
MEME combines information between modalities implicitly through mutual supervision.
We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes.
arXiv Detail & Related papers (2021-06-23T17:54:35Z)
- Measure Theoretic Weighted Model Integration [4.324021238526106]
Weighted model counting (WMC) is a popular framework to perform probabilistic inference with discrete random variables.
Recently, WMC has been extended to weighted model integration (WMI) in order to additionally handle continuous variables.
We propose a theoretically sound measure theoretic formulation of weighted model integration, which naturally reduces to weighted model counting in the absence of continuous variables.
arXiv Detail & Related papers (2021-03-25T15:11:11Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic Constraints via Message Passing [38.559697064390015]
Weighted model integration makes it possible to express the complex dependencies of real-world problems.
Existing WMI solvers are not ready to scale to these problems.
We devise a scalable WMI solver based on message passing, MP-WMI.
arXiv Detail & Related papers (2020-02-28T23:51:45Z)
- Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning [63.64636047748605]
We develop a new theoretical framework to provide convergence guarantee for the general multi-step MAML algorithm.
In particular, our results suggest that the inner-stage stepsize needs to be chosen inversely proportional to the number $N$ of inner-stage steps in order for $N$-step MAML to have guaranteed convergence.
arXiv Detail & Related papers (2020-02-18T19:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.