Probabilistic Dependency Graphs
- URL: http://arxiv.org/abs/2012.10800v1
- Date: Sat, 19 Dec 2020 22:29:49 GMT
- Title: Probabilistic Dependency Graphs
- Authors: Oliver Richardson, Joseph Y Halpern
- Abstract summary: We introduce Probabilistic Dependency Graphs (PDGs).
PDGs can capture inconsistent beliefs in a natural way.
We show how PDGs are an especially natural modeling tool.
- Score: 14.505867475659274
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Probabilistic Dependency Graphs (PDGs), a new class of directed
graphical models. PDGs can capture inconsistent beliefs in a natural way and
are more modular than Bayesian Networks (BNs), in that they make it easier to
incorporate new information and restructure the representation. We show by
example how PDGs are an especially natural modeling tool. We provide three
semantics for PDGs, each of which can be derived from a scoring function (on
joint distributions over the variables in the network) that can be viewed as
representing a distribution's incompatibility with the PDG. For the PDG
corresponding to a BN, this function is uniquely minimized by the distribution
the BN represents, showing that PDG semantics extend BN semantics. We show
further that factor graphs and their exponential families can also be
faithfully represented as PDGs, while there are significant barriers to
modeling a PDG with a factor graph.
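To make the scoring-function idea concrete, here is a minimal Python sketch. It is an assumption-laden toy, not the paper's exact definitions: the paper's scores carry per-edge weights plus an entropy term and operate on networks of conditional distributions, while this sketch uses a plain weighted-KL score over a single binary variable Y. Two edges assert conflicting beliefs about Y; a candidate distribution is scored by the weighted KL divergence to each belief, and the minimum achievable score is strictly positive, which is the sense in which a PDG "captures inconsistent beliefs".

```python
import numpy as np

# Toy sketch (not the paper's exact definitions): a PDG over one binary
# variable Y holds two conflicting beliefs about it, each a distribution
# over Y with a confidence weight beta. A candidate distribution mu is
# scored by the beta-weighted sum of KL divergences to each belief; the
# minimum over mu is strictly positive, quantifying the inconsistency.

def kl(mu, p):
    mu, p = np.asarray(mu, float), np.asarray(p, float)
    mask = mu > 0
    return float(np.sum(mu[mask] * np.log(mu[mask] / p[mask])))

beliefs = [((0.1, 0.9), 1.0),  # belief 1: P(Y=1) = 0.9, weight beta = 1
           ((0.8, 0.2), 1.0)]  # belief 2: P(Y=1) = 0.2, weight beta = 1

def score(mu):
    return sum(beta * kl(mu, p) for p, beta in beliefs)

# Brute-force the most-compatible distribution; with equal weights it lands
# at the normalized geometric mean of the two beliefs, not at either one.
candidates = [(1 - q, q) for q in np.linspace(0.001, 0.999, 999)]
best = min(candidates, key=score)
print(f"most-compatible P(Y=1) = {best[1]:.3f}, inconsistency = {score(best):.4f}")
```

No single distribution satisfies both beliefs, so the minimum score stays positive (here the most-compatible choice is P(Y=1) ≈ 0.6); that residual is the kind of incompatibility measure the abstract describes.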
Related papers
- Inference for Probabilistic Dependency Graphs [42.03917543423699]
Probabilistic dependency graphs (PDGs) are a flexible class of probabilistic models.
We present the first tractable inference algorithm for PDGs with discrete variables.
arXiv Detail & Related papers (2023-11-09T18:40:12Z)
- Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z)
- Graph Neural Networks Intersect Probabilistic Graphical Models: A Survey [0.0]
We study the intersection of Graph Neural Networks (GNNs) and Probabilistic Graphical Models (PGMs).
We cover how GNNs can benefit from learning structured representations in PGMs, how GNNs can generate explainable predictions with PGMs, and how PGMs can infer object relationships.
We summarize the benchmark datasets used in recent studies and discuss promising future directions.
arXiv Detail & Related papers (2022-05-24T03:36:25Z)
- Loss as the Inconsistency of a Probabilistic Dependency Graph: Choose Your Model, Not Your Loss Function [0.0]
We show that many standard loss functions arise as the inconsistency of a natural PDG describing the appropriate scenario.
We also show that the PDG inconsistency captures a large class of statistical divergences.
We observe that inconsistency becomes the log partition function (free energy) in the setting where PDGs are factor graphs.
arXiv Detail & Related papers (2022-02-24T01:51:21Z)
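A toy illustration of the claim in the entry above (a sketch under assumptions, not the paper's construction): take a two-edge PDG in which a hard "data" belief pins Y to the observed label and a soft "model" belief supplies a predictive distribution q. The only distribution compatible with the data edge is one-hot at the label, and its remaining incompatibility with the model edge is exactly the standard log loss.

```python
import numpy as np

# Hypothetical two-belief PDG: the data belief forces all mass onto the
# observed label y, so the only compatible distribution is one_hot(y).
# Its KL divergence to the model belief q collapses to -log q[y],
# i.e., the standard log loss / cross-entropy on that example.

def inconsistency(q, y):
    return -float(np.log(q[y]))  # KL(one_hot(y) || q) = -log q[y]

q = np.array([0.7, 0.2, 0.1])  # model's predictive distribution, 3 classes
print(inconsistency(q, y=0))   # ~0.357: log loss for predicting class 0
```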
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Crime Prediction with Graph Neural Networks and Multivariate Normal Distributions [18.640610803366876]
We tackle the sparsity problem in high resolution by leveraging the flexible structure of graph convolutional networks (GCNs).
We build our model with Graph Convolutional Gated Recurrent Units (Graph-ConvGRU) to learn spatial, temporal, and categorical relations.
We show that our model is not only generative but also precise.
arXiv Detail & Related papers (2021-11-29T17:37:01Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Existing Graph Neural Networks (GNNs) are typically proposed without considering the distribution shift between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
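For intuition about the quantity in the entry above, here is a brute-force sketch (assumptions: an arbitrary two-variable model and a mean-field q, with everything enumerated explicitly; the paper's contribution is computing the expectation term analytically via probabilistic circuits rather than by sampling or enumeration).

```python
import itertools
import numpy as np

# Tiny discrete factor model over two binary variables, given by its
# unnormalized log-density; brute-force the log partition function and
# the ELBO of a mean-field q, and observe that ELBO <= log Z.

def log_p_tilde(x1, x2):
    return 0.5 * x1 + 0.3 * x2 + 1.2 * x1 * x2  # arbitrary log-potentials

states = list(itertools.product([0, 1], repeat=2))
log_Z = np.log(sum(np.exp(log_p_tilde(*s)) for s in states))

def elbo(q1, q2):
    # Fully factorized q(x1, x2) with q1 = q(x1=1), q2 = q(x2=1).
    q = {(x1, x2): (q1 if x1 else 1 - q1) * (q2 if x2 else 1 - q2)
         for x1, x2 in states}
    expected_log_p = sum(p * log_p_tilde(*s) for s, p in q.items())
    entropy = -sum(p * np.log(p) for p in q.values() if p > 0)
    return expected_log_p + entropy

print(f"log Z = {log_Z:.4f}, ELBO(q) = {elbo(0.6, 0.55):.4f}")  # ELBO <= log Z
```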
- PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks [27.427529601958334]
We propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for Graph Neural Networks (GNNs).
Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction.
Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers in many benchmark tasks.
arXiv Detail & Related papers (2020-10-12T15:33:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.