A Note on Bayesian Networks with Latent Root Variables
- URL: http://arxiv.org/abs/2402.17087v1
- Date: Mon, 26 Feb 2024 23:53:34 GMT
- Title: A Note on Bayesian Networks with Latent Root Variables
- Authors: Marco Zaffalon and Alessandro Antonucci
- Abstract summary: We show that the marginal distribution over the remaining, manifest, variables also factorises as a Bayesian network, which we call empirical.
A dataset of observations of the manifest variables allows us to quantify the parameters of the empirical Bayesian net.
- Score: 56.86503578982023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We characterise the likelihood function computed from a Bayesian network with
latent variables as root nodes. We show that the marginal distribution over the
remaining, manifest, variables also factorises as a Bayesian network, which we
call empirical. A dataset of observations of the manifest variables allows us
to quantify the parameters of the empirical Bayesian net. We prove that (i) the
likelihood of such a dataset from the original Bayesian network is dominated by
the global maximum of the likelihood from the empirical one; and that (ii) such
a maximum is attained if and only if the parameters of the Bayesian network are
consistent with those of the empirical model.
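To make the construction concrete, below is a minimal numerical sketch on a toy chain L -> X -> Y, where L is a latent root and X, Y are manifest binary variables; the topology and all parameter values are illustrative assumptions, not taken from the paper. Marginalising out L yields the empirical Bayesian net P(X)P(Y|X), whose parameters are then quantified from relative frequencies in a dataset of manifest observations.

import numpy as np

# Toy network L -> X -> Y: latent root L, manifest X and Y, all binary.
p_L = np.array([0.3, 0.7])                   # P(L)
p_X_given_L = np.array([[0.9, 0.1],          # P(X | L=0)
                        [0.2, 0.8]])         # P(X | L=1)
p_Y_given_X = np.array([[0.6, 0.4],          # P(Y | X=0)
                        [0.1, 0.9]])         # P(Y | X=1)

# Marginalise out the latent root: P(X) = sum_l P(L=l) P(X | L=l).
p_X = p_L @ p_X_given_L

# The marginal over the manifest variables factorises as the
# "empirical" Bayesian net P(X, Y) = P(X) P(Y | X).
p_XY = p_X[:, None] * p_Y_given_X

# Sanity check against brute-force marginalisation of the full joint.
joint = (p_L[:, None, None] * p_X_given_L[:, :, None]
         * p_Y_given_X[None, :, :])          # indexed [l, x, y]
assert np.allclose(p_XY, joint.sum(axis=0))

# A dataset of manifest observations quantifies the empirical net by
# relative frequencies (the maximum-likelihood estimates).
rng = np.random.default_rng(0)
flat = rng.choice(4, size=10_000, p=p_XY.ravel())
counts = np.bincount(flat, minlength=4).reshape(2, 2)   # counts[x, y]
p_X_hat = counts.sum(axis=1) / counts.sum()
p_Y_given_X_hat = counts / counts.sum(axis=1, keepdims=True)
print(p_X_hat)            # approximately P(X)
print(p_Y_given_X_hat)    # approximately P(Y | X)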
Related papers
- Nonparametric Bayesian networks are typically faithful in the total variation metric [12.27570686178551]
We show that for a given DAG $G$, among all observational distributions of Bayesian networks over $G$ with arbitrary outcome spaces, the faithful distributions are 'typical'.
As a consequence, the set of faithful distributions is non-empty, and the unfaithful distributions are nowhere dense.
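For reference (the standard definition, not specific to this paper): a distribution $P$ that factorises over a DAG $G$ is faithful to $G$ when the converse of the Markov property also holds, that is, $X \perp\!\!\!\perp_P Y \mid \mathbf{Z}$ implies that $\mathbf{Z}$ d-separates $X$ and $Y$ in $G$, so that the conditional independences of $P$ are exactly those readable from $G$ by d-separation.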
arXiv Detail & Related papers (2024-10-21T13:38:04Z)
- The diameter of a stochastic matrix: A new measure for sensitivity analysis in Bayesian networks [1.2699007098398807]
We argue that robustness methods based on the familiar total variation distance provide simple and more valuable bounds on robustness to misspecification.
We introduce a novel measure of dependence in conditional probability tables called the diameter to derive such bounds.
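The abstract does not spell out the definition, but a natural reading consistent with the total-variation framing is the largest total variation distance between any two rows (parent configurations) of a conditional probability table. A minimal sketch under that assumption (the function name and example table are hypothetical):

import numpy as np
from itertools import combinations

def tv_diameter(cpt: np.ndarray) -> float:
    # Largest total variation distance between any two rows of a
    # row-stochastic matrix (each row a distribution over the child).
    return max(0.5 * np.abs(cpt[i] - cpt[j]).sum()
               for i, j in combinations(range(cpt.shape[0]), 2))

# Rows: parent configurations; columns: child states.
cpt = np.array([[0.9, 0.1],
                [0.5, 0.5],
                [0.2, 0.8]])
print(tv_diameter(cpt))  # 0.7, attained by rows 0 and 2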
arXiv Detail & Related papers (2024-07-05T17:22:12Z)
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- Amortised Inference in Neural Networks for Small-Scale Probabilistic Meta-Learning [41.85464593920907]
A global inducing point variational approximation for BNNs is based on using a set of inducing inputs to construct a series of conditional distributions.
Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint.
By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs.
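In symbols, the construction sketched above gives a variational posterior of the form $q(\theta) \propto p(\theta) \prod_{n=1}^{N} t_n(\theta)$, where $p(\theta)$ is the prior over the task-specific BNN weights and each factor $t_n$ is the approximate likelihood contributed by datapoint $n$ (notation introduced here for illustration; it is not fixed by the abstract).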
arXiv Detail & Related papers (2023-10-24T12:34:25Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our results already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Likelihoods and Parameter Priors for Bayesian Networks [7.005458308454871]
We introduce several assumptions that permit the construction of likelihoods and parameter priors for a large number of Bayesian-network structures.
We present a method for directly computing the marginal likelihood of a random sample with no missing observations.
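For complete multinomial data with independent Dirichlet parameter priors, this marginal likelihood has a well-known closed form (the Bayesian-Dirichlet score, stated here for context with notation introduced for illustration): $P(D \mid G) = \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}$, where $N_{ijk}$ counts the cases with $X_i$ in state $k$ and its parents in configuration $j$, $N_{ij} = \sum_k N_{ijk}$, and the $\alpha_{ijk}$ are Dirichlet hyperparameters with $\alpha_{ij} = \sum_k \alpha_{ijk}$.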
arXiv Detail & Related papers (2021-05-13T12:45:44Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Uncertainty Reasoning for Probabilistic Petri Nets via Bayesian Networks [1.471992435706872]
We exploit extended Bayesian networks for uncertainty reasoning on Petri nets.
In particular, Bayesian networks are used as symbolic representations of probability distributions.
We show how to derive information from a modular Bayesian net.
arXiv Detail & Related papers (2020-09-30T17:40:54Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
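The subadditivity in the title is, schematically, the statement that a probability divergence between two distributions sharing the same conditional independence graph is upper-bounded (up to constants depending on the divergence) by a sum of divergences over local neighborhoods, $d(P, Q) \lesssim \sum_i d(P_{S_i}, Q_{S_i})$, with $S_i$ the neighborhoods of the Bayes-net/MRF and $P_{S_i}$ the marginal on $S_i$ (notation ours); this is what justifies one simple discriminator per neighborhood in place of a single global one.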
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
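As a minimal illustration of the first claim (a sketch, not the paper's code): exact Bayesian marginalization computes $p(y \mid x, D) = \int p(y \mid x, \theta)\, p(\theta \mid D)\, d\theta$, and a deep ensemble approximates this integral by averaging the predictive distributions of its independently trained members, treating each as a representative posterior mode.

import numpy as np

def ensemble_predict(member_probs: np.ndarray) -> np.ndarray:
    # Average the members' predictive distributions: a crude Monte
    # Carlo approximation to the Bayesian model average above.
    return member_probs.mean(axis=0)

# Hypothetical softmax outputs of 3 ensemble members over 4 classes.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.5, 0.3, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
print(ensemble_predict(probs))  # [0.6, 0.2, 0.1, 0.1]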
arXiv Detail & Related papers (2020-02-20T15:13:27Z)