On Projectivity in Markov Logic Networks
- URL: http://arxiv.org/abs/2204.04009v1
- Date: Fri, 8 Apr 2022 11:37:53 GMT
- Title: On Projectivity in Markov Logic Networks
- Authors: Sagar Malhotra and Luciano Serafini
- Abstract summary: Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes.
Projective models potentially allow efficient and consistent parameter learning from sub-sampled domains.
- Score: 7.766921168069532
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Markov Logic Networks (MLNs) define a probability distribution on relational
structures over varying domain sizes. Many works have noticed that MLNs, like
many other relational models, do not admit consistent marginal inference over
varying domain sizes. Furthermore, MLNs learnt on a certain domain do not
generalize to new domains of varied sizes. In recent works, connections have
emerged between domain size dependence, lifted inference and learning from
sub-sampled domains. The central idea in these works is the notion of
projectivity. The probability distributions ascribed by projective models
render the marginal probabilities of sub-structures independent of the domain
cardinality. Hence, projective models admit efficient marginal inference,
removing any dependence on the domain size. Furthermore, projective models
potentially allow efficient and consistent parameter learning from sub-sampled
domains. In this paper, we characterize the necessary and sufficient conditions
for a two-variable MLN to be projective. We then isolate a special model in
this class of MLNs, namely Relational Block Model (RBM). We show that, in terms
of data likelihood maximization, RBM is the best possible projective MLN in the
two-variable fragment. Finally, we show that RBMs also admit consistent
parameter learning over sub-sampled domains.
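To make the central notion concrete, the following is a minimal sketch of the projectivity condition in the sense used above; the notation is illustrative and not quoted from the paper. A family of distributions {P^(n)}, one per domain size n, is projective when restricting the size-n distribution to any fixed k domain elements recovers the size-k distribution:

```latex
% Sketch of the projectivity condition (illustrative notation, not quoted from the paper).
% P^{(n)} is the distribution the model induces over relational structures on a domain
% of size n; \omega|_{\{1,...,k\}} is the sub-structure that \omega induces on the
% first k domain elements.
\[
  P^{(n)}\bigl(\omega\big|_{\{1,\dots,k\}} = \omega_k\bigr)
  \;=\; P^{(k)}(\omega_k)
  \qquad \text{for all } k \le n \text{ and all size-}k \text{ structures } \omega_k .
\]
```

This condition is exactly what makes the marginal probability of a sub-structure independent of the domain cardinality, and hence what enables the domain-size-independent marginal inference and sub-sampled parameter learning discussed in the abstract.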
Related papers
- Understanding Domain-Size Generalization in Markov Logic Networks [1.8434042562191815]
We study the generalization behavior of Markov Logic Networks (MLNs) across relational structures of different sizes.
We quantify this inconsistency and bound it in terms of the variance of the MLN parameters.
We observe that solutions known to decrease the variance of the MLN parameters, like regularization and Domain-Size Aware MLNs, increase the internal consistency of the MLNs.
arXiv Detail & Related papers (2024-03-23T21:16:56Z) - Virtual Classification: Modulating Domain-Specific Knowledge for Multidomain Crowd Counting [67.38137379297717]
Multidomain crowd counting aims to learn a general model for multiple diverse datasets.
Deep networks prefer modeling distributions of the dominant domains instead of all domains, which is known as domain bias.
We propose a Modulating Domain-specific Knowledge Network (MDKNet) to handle the domain bias issue in multidomain crowd counting.
arXiv Detail & Related papers (2024-02-06T06:49:04Z) - Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z) - Super-model ecosystem: A domain-adaptation perspective [101.76769818069072]
This paper attempts to establish the theoretical foundation for the emerging super-model paradigm via domain adaptation.
Super-model paradigms help reduce computational and data costs and carbon emissions, which is critical to the AI industry.
arXiv Detail & Related papers (2022-08-30T09:09:43Z) - Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
Another method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z) - Projectivity revisited [0.0]
We extend the notion of projectivity from families of distributions indexed by domain size to functors taking extensional data from a database.
This makes projectivity available for the large range of applications taking structured input.
arXiv Detail & Related papers (2022-07-01T18:54:36Z) - Generalization Properties of Optimal Transport GANs with Latent Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z) - Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves two essential facts: 1) minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only verify the theoretical results but also demonstrate that our approach can substantially outperform comparative state-of-the-art methods. (A minimal sketch of the MMD estimator itself is given after this list.)
arXiv Detail & Related papers (2020-07-01T18:25:10Z) - A Complete Characterization of Projectivity for Statistical Relational Models [20.833623839057097]
We introduce a class of directed graphical latent variable models that precisely correspond to the class of projective relational models.
We also obtain a characterization for when a given distribution over size-$k$ structures is the statistical frequency distribution of size-$k$ sub-structures in much larger size-$n$ structures.
arXiv Detail & Related papers (2020-04-23T05:58:27Z) - GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
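As a side note on the "Rethink Maximum Mean Discrepancy for Domain Adaptation" entry above, the following is a minimal NumPy sketch of the standard unbiased squared-MMD estimator with an RBF kernel. It only illustrates the quantity that paper analyzes and is not the authors' implementation; the function names, kernel choice, and hyperparameters are ours.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = (
        np.sum(x ** 2, axis=1)[:, None]
        + np.sum(y ** 2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2_unbiased(x, y, gamma=1.0):
    """Unbiased estimate of the squared MMD between samples x ~ P and y ~ Q."""
    m, n = len(x), len(y)
    k_xx = rbf_kernel(x, x, gamma)
    k_yy = rbf_kernel(y, y, gamma)
    k_xy = rbf_kernel(x, y, gamma)
    # Drop the diagonal terms so the x-x and y-y averages are U-statistics.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return term_xx + term_yy - 2.0 * k_xy.mean()

# Toy usage: features from two slightly shifted Gaussians (hypothetical source/target domains).
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 8))
target = rng.normal(0.5, 1.0, size=(200, 8))
print("squared MMD estimate:", mmd2_unbiased(source, target, gamma=0.5))
```

In domain adaptation, minimizing this quantity between source and target features is the operation that the entry above argues can also degrade feature discriminability.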
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.