First-Order Context-Specific Likelihood Weighting in Hybrid
Probabilistic Logic Programs
- URL: http://arxiv.org/abs/2201.11165v1
- Date: Wed, 26 Jan 2022 20:06:02 GMT
- Title: First-Order Context-Specific Likelihood Weighting in Hybrid
Probabilistic Logic Programs
- Authors: Nitesh Kumar, Ondrej Kuzelka, Luc De Raedt
- Abstract summary: Three types of independencies are important to represent and exploit for scalable inference in hybrid models.
This paper introduces a hybrid probabilistic logic programming language, DC#, which integrates the syntax and semantics of distributional clauses with the principles of Bayesian logic programs.
We also introduce the scalable inference algorithm FO-CS-LW for DC#.
- Score: 24.503581751619787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Statistical relational AI and probabilistic logic programming have so far
mostly focused on discrete probabilistic models. The reason for this is that
one needs constructs to succinctly model the independencies in such
models, and one also needs efficient inference.
Three types of independencies are important to represent and exploit for
scalable inference in hybrid models: conditional independencies elegantly
modeled in Bayesian networks, context-specific independencies naturally
represented by logical rules, and independencies amongst attributes of related
objects in relational models succinctly expressed by combining rules.
This paper introduces a hybrid probabilistic logic programming language, DC#,
which integrates the syntax and semantics of distributional clauses with the
principles of Bayesian logic programs. It represents the three types of independencies
qualitatively. More importantly, we also introduce the scalable inference
algorithm FO-CS-LW for DC#. FO-CS-LW is a first-order extension of the
context-specific likelihood weighting algorithm (CS-LW), a novel sampling
method that exploits conditional independencies and context-specific
independencies in ground models. FO-CS-LW upgrades CS-LW to the first-order
case with unification and combining rules.
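To make the core mechanism concrete, here is a minimal sketch of likelihood weighting that also exploits one context-specific independence, in the spirit of CS-LW. The sprinkler network below is a toy example invented for illustration; it is not from the paper, and the probabilities are arbitrary.

```python
import random

# Toy Bayesian network (illustrative only, not from the paper):
#   cloudy ~ Bernoulli(0.5)
#   rain      | cloudy:  P = 0.8 if cloudy else 0.2
#   sprinkler | cloudy:  P = 0.1 if cloudy else 0.5
#   wet_grass | rain, sprinkler:
#       P = 0.95 if rain else (0.9 if sprinkler else 0.05)
# Context-specific independence: given rain = True, wet_grass is
# independent of sprinkler, so sprinkler need not be sampled.

def cs_likelihood_weighting(n_samples, evidence_wet=True, seed=0):
    """Estimate P(rain | wet_grass = evidence_wet) by likelihood weighting,
    skipping sprinkler in contexts where it is irrelevant (the CS-LW idea)."""
    rng = random.Random(seed)
    weighted_rain, total = 0.0, 0.0
    for _ in range(n_samples):
        cloudy = rng.random() < 0.5
        rain = rng.random() < (0.8 if cloudy else 0.2)
        if rain:
            # In this context sprinkler is never read, so it is not sampled.
            p_wet = 0.95
        else:
            sprinkler = rng.random() < (0.1 if cloudy else 0.5)
            p_wet = 0.9 if sprinkler else 0.05
        # Evidence variables are not sampled; they contribute a weight.
        w = p_wet if evidence_wet else (1.0 - p_wet)
        weighted_rain += w * rain
        total += w
    return weighted_rain / total

estimate = cs_likelihood_weighting(100_000)
print(f"P(rain | wet_grass) ~ {estimate:.3f}")
```

Exact enumeration of this toy network gives P(rain | wet_grass) = 0.475 / 0.6785 ≈ 0.70, which the weighted estimate approaches. FO-CS-LW goes further by performing this kind of pruning at the first-order level, before grounding, using unification and combining rules.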
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Semirings for Probabilistic and Neuro-Symbolic Logic Programming [15.747744148181829]
We show that many extensions of probabilistic logic programming can be cast within a common algebraic logic programming framework.
This holds not only for the PLP variations themselves but also for the underlying execution mechanism, which is based on (algebraic) model counting.
arXiv Detail & Related papers (2024-02-21T13:06:52Z)
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time, with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
- Declarative Probabilistic Logic Programming in Discrete-Continuous Domains [16.153683223016973]
We contribute the measure semantics together with the hybrid PLP language DC-ProbLog and its inference engine, infinitesimal algebraic likelihood weighting (IALW).
We generalize the state of the art of PLP towards hybrid PLP in three different aspects: semantics, language and inference.
IALW is the first inference algorithm for hybrid probabilistic programming based on knowledge compilation.
arXiv Detail & Related papers (2023-02-21T13:50:38Z)
- Hybrid Probabilistic Logic Programming: Inference and Learning [1.14219428942199]
This thesis focuses on advancing probabilistic logic programming (PLP), which combines probability theory for uncertainty and logic programming for relations.
The first contribution is the introduction of context-specific likelihood weighting (CS-LW), a new sampling algorithm that exploits context-specific independencies for computational gains.
Next, a new hybrid PLP, DC#, is introduced, which integrates the syntax of Distributional Clauses with Bayesian logic programs and represents three types of independencies.
The scalable inference algorithm FO-CS-LW is introduced for DC#.
arXiv Detail & Related papers (2023-02-01T15:07:36Z)
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
- Conditional independence by typing [30.194205448457385]
A central goal of probabilistic programming languages (PPLs) is to separate modelling from inference.
Conditional independence (CI) relationships among parameters are a crucial aspect of probabilistic models.
We show that for a well-typed program in our system, the distribution it implements is guaranteed to have certain CI-relationships.
arXiv Detail & Related papers (2020-10-22T17:27:22Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving cases that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Struct-MMSB: Mixed Membership Stochastic Blockmodels with Interpretable Structured Priors [13.712395104755783]
Mixed membership blockmodel (MMSB) is a popular framework for community detection and network generation.
We present a flexible MMSB model, Struct-MMSB, that uses a recently developed statistical relational learning model, hinge-loss Markov random fields (HL-MRFs).
Our model is capable of learning latent characteristics in real-world networks via meaningful latent variables encoded as a complex combination of observed features and membership distributions.
arXiv Detail & Related papers (2020-02-21T19:32:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.