Constructing a Chain Event Graph from a Staged Tree
- URL: http://arxiv.org/abs/2006.15857v2
- Date: Thu, 16 Dec 2021 11:26:03 GMT
- Title: Constructing a Chain Event Graph from a Staged Tree
- Authors: Aditi Shenvi and Jim Q. Smith
- Abstract summary: Chain Event Graphs (CEGs) are a recent family of probabilistic graphical models.
No general algorithm has yet been devised that automatically transforms any staged tree into a CEG representation.
We show that no information is lost from transforming a staged tree into a CEG.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chain Event Graphs (CEGs) are a recent family of probabilistic graphical
models - a generalisation of Bayesian Networks - providing an explicit
representation of structural zeros, structural missing values and
context-specific conditional independences within their graph topology. A CEG
is constructed from an event tree through a sequence of transformations
beginning with the colouring of the vertices of the event tree to identify
one-step transition symmetries. This coloured event tree, also known as a
staged tree, is the output of the learning algorithms used for this family.
Surprisingly, no general algorithm has yet been devised that automatically
transforms any staged tree into a CEG representation. In this paper we provide
a simple iterative backward algorithm for this transformation. Additionally, we
show that no information is lost from transforming a staged tree into a CEG.
Finally, we demonstrate that with an optimal stopping criterion, our algorithm
is more efficient than the generalisation of a special case presented in
Silander and Leong (2013). We also provide Python code using this algorithm to
obtain a CEG from any staged tree along with the functionality to add edges
with sampling zeros.
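The backward idea described in the abstract can be hinted at with a short recursive sketch. This is an illustrative simplification, not the authors' published algorithm or the cegpy implementation: all leaves collapse into a single sink position, and two internal vertices fall into the same CEG position when they are in the same stage and their labelled edges lead into the same child positions. The `tree`/`stage` dictionary encoding and the `"w_inf"` sink name are assumptions made for this sketch.

```python
def ceg_positions(tree, stage, root):
    """Merge staged-tree vertices into CEG positions, working backwards
    from the leaves (a simplified sketch of the backward pass).

    tree:  dict mapping each vertex to a list of (edge_label, child) pairs;
           leaves map to an empty list.
    stage: dict mapping each internal vertex to its stage (colour).
    Returns a dict mapping each visited vertex to its position identifier;
    vertices sharing an identifier merge into one CEG position.
    """
    position = {}

    def visit(v):
        if v in position:
            return position[v]
        children = tree[v]
        if not children:
            # Every leaf collapses into the single sink position.
            position[v] = "w_inf"
            return "w_inf"
        # A vertex's position is its stage together with the sorted multiset
        # of labelled edges into its children's positions (key=repr keeps the
        # sort well-defined when child positions are nested tuples).
        key = (stage[v],
               tuple(sorted(((lbl, visit(c)) for lbl, c in children),
                            key=repr)))
        position[v] = key
        return key

    visit(root)
    return position
```

For example, two subtrees that are in the same stage and have identical futures receive the same position identifier, so they would be drawn as a single vertex in the CEG.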
Related papers
- More on greedy construction heuristics for the MAX-CUT problem [8.148355685823521]
We show that this framework helps to classify the main greedy heuristics for the maximum cut problem.
All versions of the Sahni-Gonzalez (SG) algorithms can be classified into the Prim class.
Various Edge-Contraction(EC) algorithms are of the Kruskal class.
arXiv Detail & Related papers (2023-12-18T02:52:04Z)
- Compositional Generalization without Trees using Multiset Tagging and Latent Permutations [121.37328648951993]
We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens.
Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations.
Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks.
arXiv Detail & Related papers (2023-05-26T14:09:35Z)
- cegpy: Modelling with Chain Event Graphs in Python [0.0]
Chain event graphs (CEGs) are a recent family of probabilistic graphical models that generalise the popular Bayesian networks (BNs) family.
This paper introduces cegpy, the first Python package for learning and analysing complex processes using CEGs.
arXiv Detail & Related papers (2022-11-21T11:32:36Z) - Structural Learning of Simple Staged Trees [2.3572498744567127]
We introduce the first structural learning algorithms for the class of simple staged trees.
We show that data-learned simple staged trees often outperform Bayesian networks in model fit.
arXiv Detail & Related papers (2022-03-08T20:50:39Z)
- Active-LATHE: An Active Learning Algorithm for Boosting the Error Exponent for Learning Homogeneous Ising Trees [75.93186954061943]
We design and analyze an algorithm that boosts the error exponent by at least 40% when $\rho$ is at least $0.8$.
Our analysis hinges on judiciously exploiting the minute but detectable statistical variation of the samples to allocate more data to parts of the graph.
arXiv Detail & Related papers (2021-10-27T10:45:21Z)
- Structural Optimization Makes Graph Classification Simpler and Better [5.770986723520119]
We investigate the feasibility of improving graph classification performance while simplifying the model learning process.
Inspired by progress in structural information assessment, we transform the given data samples from graphs into encoding trees.
We present an implementation of the scheme in a tree kernel and a convolutional network to perform graph classification.
arXiv Detail & Related papers (2021-09-05T08:54:38Z)
- GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [51.82434518719011]
GNNAutoScale (GAS) is a framework for scaling arbitrary message-passing GNNs to large graphs.
GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations.
GAS reaches state-of-the-art performance on large-scale graphs.
arXiv Detail & Related papers (2021-06-10T09:26:56Z)
- Robustifying Algorithms of Learning Latent Trees with Vector Variables [92.18777020401484]
We present the sample complexities of Recursive Grouping (RG) and Chow-Liu Recursive Grouping (CLRG).
We robustify RG, CLRG, Neighbor Joining (NJ) and Spectral NJ (SNJ) by using the truncated inner product.
We derive the first known instance-dependent impossibility result for structure learning of latent trees.
arXiv Detail & Related papers (2021-06-02T01:37:52Z)
- Neural Trees for Learning on Graphs [19.05038106825347]
Graph Neural Networks (GNNs) have emerged as a flexible and powerful approach for learning over graphs.
We propose a new GNN architecture -- the Neural Tree.
We show that the neural tree architecture can approximate any smooth probability distribution function over an undirected graph.
arXiv Detail & Related papers (2021-05-15T17:08:20Z)
- SGA: A Robust Algorithm for Partial Recovery of Tree-Structured Graphical Models with Noisy Samples [75.32013242448151]
We consider learning Ising tree models when the observations from the nodes are corrupted by independent but non-identically distributed noise.
Katiyar et al. (2020) showed that although the exact tree structure cannot be recovered, one can recover a partial tree structure.
We propose Symmetrized Geometric Averaging (SGA), a more statistically robust algorithm for partial tree recovery.
arXiv Detail & Related papers (2021-01-22T01:57:35Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.