Complex Markov Logic Networks: Expressivity and Liftability
- URL: http://arxiv.org/abs/2002.10259v2
- Date: Thu, 16 Jul 2020 13:04:58 GMT
- Title: Complex Markov Logic Networks: Expressivity and Liftability
- Authors: Ondrej Kuzelka
- Abstract summary: We study the expressivity of Markov logic networks (MLNs).
We introduce complex MLNs, which use complex-valued weights.
We show that, unlike standard MLNs with real-valued weights, complex MLNs are fully expressive.
- Score: 10.635097939284751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study expressivity of Markov logic networks (MLNs). We introduce complex
MLNs, which use complex-valued weights, and we show that, unlike standard MLNs
with real-valued weights, complex MLNs are fully expressive. We then observe
that discrete Fourier transform can be computed using weighted first order
model counting (WFOMC) with complex weights and use this observation to design
an algorithm for computing relational marginal polytopes which needs
substantially fewer calls to a WFOMC oracle than a recent algorithm.
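The abstract's key observation is that the discrete Fourier transform is itself a complex-weighted sum, the same algebraic shape as a weighted model count with complex weights. A minimal sketch of that shape (a toy illustration only, not the paper's WFOMC algorithm): each "model" n contributes its value f(n) times a root-of-unity weight, and summing over all n yields a Fourier coefficient.

```python
import numpy as np

# Toy illustration: the k-th DFT coefficient of f on {0, ..., N-1} is a
# complex-weighted sum over "models" n, with weight w^n for w a root of unity.
N = 8
rng = np.random.default_rng(0)
f = rng.random(N)  # a real-valued function on {0, ..., N-1}

def dft_coefficient(f, k):
    """k-th Fourier coefficient as a complex-weighted sum over n."""
    w = np.exp(-2j * np.pi * k / len(f))  # complex weight: N-th root of unity
    return sum(f[n] * w ** n for n in range(len(f)))

coeffs = np.array([dft_coefficient(f, k) for k in range(N)])
assert np.allclose(coeffs, np.fft.fft(f))  # agrees with the standard DFT
```

In the paper's setting, the role of the sum over n is played by a sum over models of a first-order theory, which is exactly what a WFOMC oracle evaluates when given the complex weights.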
Related papers
- Generalization of Modular Spread Complexity for Non-Hermitian Density Matrices [0.0]
In this work we generalize the concept of modular spread complexity to the cases where the reduced density matrix is non-Hermitian.
We define the quantity pseudo-capacity which generalizes capacity of entanglement, and corresponds to the early modular-time measure of pseudo-modular complexity.
We show some analytical calculations for 2-level systems and 4-qubit models and then do numerical investigations on the quantum phase transition of transverse field Ising model.
arXiv Detail & Related papers (2024-10-07T17:59:16Z) - Model-Based RL for Mean-Field Games is not Statistically Harder than Single-Agent RL [57.745700271150454]
We study the sample complexity of reinforcement learning in Mean-Field Games (MFGs) with model-based function approximation.
We introduce the Partial Model-Based Eluder Dimension (P-MBED), a more effective notion to characterize the model class complexity.
arXiv Detail & Related papers (2024-02-08T14:54:47Z) - A General Framework for Learning from Weak Supervision [93.89870459388185]
This paper introduces a general framework for learning from weak supervision (GLWS) with a novel algorithm.
Central to GLWS is an Expectation-Maximization (EM) formulation, adeptly accommodating various weak supervision sources.
We also present an advanced algorithm that significantly simplifies the EM computational demands.
arXiv Detail & Related papers (2024-02-02T21:48:50Z) - On the Representational Capacity of Recurrent Neural Language Models [56.19166912044362]
We show that a rationally weighted RLM with computation time can simulate any deterministic probabilistic Turing machine (PTM) with rationally weighted transitions.
We also provide a lower bound by showing that under the restriction to real-time computation, such models can simulate deterministic real-time rational PTMs.
arXiv Detail & Related papers (2023-10-19T17:39:47Z) - Exact and general decoupled solutions of the LMC Multitask Gaussian Process model [28.32223907511862]
The Linear Model of Co-regionalization (LMC) is a very general multitask Gaussian process model for regression or classification.
Recent work has shown that under some conditions the latent processes of the model can be decoupled, leading to a complexity that is only linear in the number of said processes.
Here we extend these results, showing under the most general assumptions that the only condition necessary for an efficient exact computation of the LMC is a mild hypothesis on the noise model.
arXiv Detail & Related papers (2023-10-18T15:16:24Z) - Deep Stochastic Processes via Functional Markov Transition Operators [59.55961312230447]
We introduce a new class of Stochastic Processes (SPs) constructed by stacking sequences of neural parameterised Markov transition operators in function space.
We prove that these Markov transition operators can preserve the exchangeability and consistency of SPs.
arXiv Detail & Related papers (2023-05-24T21:15:23Z) - On the Equivalence of the Weighted Tsetlin Machine and the Perceptron [12.48513712803069]
The Tsetlin Machine (TM) has been gaining popularity as an inherently interpretable machine learning method.
Although possessing favorable properties, TM has not been the go-to method for AI applications.
arXiv Detail & Related papers (2022-12-27T22:38:59Z) - Approximate Message Passing for Multi-Layer Estimation in Rotationally Invariant Models [15.605031496980775]
We present a new class of approximate message passing (AMP) algorithms and give a state evolution recursion.
Our results show that this complexity gain comes at little to no cost in the performance of the algorithm.
arXiv Detail & Related papers (2022-12-03T08:10:35Z) - Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL [154.13105285663656]
A cooperative Multi-Agent Reinforcement Learning (MARL) framework with permutation-invariant agents has achieved tremendous empirical success in real-world applications.
Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of the relational reasoning in existing works.
We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of and logarithmic in the number of agents respectively, which mitigates the curse of many agents.
arXiv Detail & Related papers (2022-09-20T16:42:59Z) - Learning Neural Network Quantum States with the Linear Method [0.0]
We show that the linear method can be used successfully for the optimization of complex valued neural network quantum states.
We compare the LM to the state-of-the-art SR algorithm and find that the LM requires up to an order of magnitude fewer iterations for convergence.
arXiv Detail & Related papers (2021-04-22T12:18:33Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.