Reconsidering Dependency Networks from an Information Geometry Perspective
- URL: http://arxiv.org/abs/2107.00871v1
- Date: Fri, 2 Jul 2021 07:05:11 GMT
- Title: Reconsidering Dependency Networks from an Information Geometry Perspective
- Authors: Kazuya Takabatake, Shotaro Akaho
- Abstract summary: Dependency networks are potential probabilistic graphical models for systems comprising a large number of variables.
The structure of a dependency network is represented by a directed graph, and each node has a conditional probability table.
We show that the dependency network and the Bayesian network have roughly the same performance in terms of the accuracy of their learned distributions.
- Score: 2.6778110563115542
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Dependency networks (Heckerman et al., 2000) are potential probabilistic
graphical models for systems comprising a large number of variables. Like
Bayesian networks, the structure of a dependency network is represented by a
directed graph, and each node has a conditional probability table. Learning and
inference are realized locally on individual nodes; therefore, computation
remains tractable even with a large number of variables. However, the
dependency network's learned distribution is the stationary distribution of a
Markov chain called pseudo-Gibbs sampling and has no closed-form expressions.
This technical disadvantage has impeded the development of dependency networks.
In this paper, we consider a certain manifold for each node. Then, we can
interpret pseudo-Gibbs sampling as iterative m-projections onto these
manifolds. This interpretation provides a theoretical bound for the location
where the stationary distribution of pseudo-Gibbs sampling exists in
distribution space. Furthermore, this interpretation leads to structure and
parameter learning algorithms formulated as optimization problems. In addition, we compare
dependency and Bayesian networks experimentally. The results demonstrate that
the dependency network and the Bayesian network have roughly the same
performance in terms of the accuracy of their learned distributions. The
results also show that the dependency network can learn much faster than the
Bayesian network.
Related papers
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Backward and Forward Inference in Interacting Independent-Cascade Processes: A Scalable and Convergent Message-Passing Approach [1.1470070927586018]
We study the problems of estimating the past and future evolutions of two diffusion processes that spread concurrently on a network.
We derive the exact joint probability of the initial state of the network and the observation-snapshot $\mathcal{O}_n$.
arXiv Detail & Related papers (2023-10-29T20:03:38Z)
- ResolvNet: A Graph Convolutional Network with multi-scale Consistency [47.98039061491647]
We introduce the concept of multi-scale consistency.
At the graph-level, multi-scale consistency refers to the fact that distinct graphs describing the same object at different resolutions should be assigned similar feature vectors.
We introduce ResolvNet, a flexible graph neural network based on the mathematical concept of resolvents.
arXiv Detail & Related papers (2023-09-30T16:46:45Z)
- Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts [43.40315460712298]
In node-level problems of graph learning, distributional shifts can be especially complex.
We propose a general approach for inducing diverse distributional shifts based on graph structure.
We show that simple models often outperform more sophisticated methods on the considered structural shifts.
arXiv Detail & Related papers (2023-02-27T15:25:21Z)
- Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy [52.40331776572531]
We show that learning depth-$3$ ReLU networks under the Gaussian input distribution is hard even in the smoothed-analysis framework.
Our results are under a well-studied assumption on the existence of local pseudorandom generators.
arXiv Detail & Related papers (2023-02-15T02:00:26Z)
- Bayesian Detection of Mesoscale Structures in Pathway Data on Graphs [0.0]
Mesoscale structures are an integral part of the abstraction and analysis of complex systems.
They can represent communities in social or citation networks, roles in corporate interactions, or core-periphery structures in transportation networks.
We derive a Bayesian approach that simultaneously models the optimal partitioning of nodes in groups and the optimal higher-order network dynamics.
arXiv Detail & Related papers (2023-01-16T12:45:33Z)
- Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z)
- Multi-Source Data Fusion Outage Location in Distribution Systems via Probabilistic Graph Models [1.7205106391379026]
We propose a multi-source data fusion approach to locate outage events in partially observable distribution systems.
A novel aspect of the proposed approach is that it takes multi-source evidence and the complex structure of distribution systems into account.
Our method can radically reduce the computational complexity of outage location inference in high-dimensional spaces.
arXiv Detail & Related papers (2020-12-04T22:34:20Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves model accuracy in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.