On the construction of graph models realizing given entropy vectors
- URL: http://arxiv.org/abs/2512.18702v1
- Date: Sun, 21 Dec 2025 11:38:17 GMT
- Title: On the construction of graph models realizing given entropy vectors
- Authors: Veronika E. Hubeny, Massimiliano Rota
- Abstract summary: We present an efficient algorithm for the construction of a holographic simple tree graph model that realizes a given entropy vector. We develop the toolkit of the correlation hypergraph, particularly in relation to coarse-graining and fine-graining of subsystems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an efficient algorithm for the construction of a holographic simple tree graph model that realizes a given entropy vector, subject to a specific ``chordality'' condition first introduced in arXiv:2412.18018. We further develop the toolkit of the correlation hypergraph, particularly in relation to coarse-graining and fine-graining of subsystems. We then use these techniques to take the first steps towards the generalization of this new algorithm to arbitrary (not necessarily simple) holographic tree graph models, and the ``detection'' of unrealizability of an entropy vector independently from the knowledge of holographic entropy inequalities.
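The notion of a graph model "realizing" an entropy vector can be illustrated with a minimal sketch (this is not the paper's algorithm): in such models, the entropy of a boundary subsystem is the weight of the minimum cut separating it from the rest of the boundary, minimized over assignments of the internal vertices. The star graph and edge weights below are hypothetical choices for illustration only.

```python
from itertools import combinations

# A tiny "star" tree graph model: one internal vertex 'o' connected to
# boundary parties A, B and the purifier P. Weights are hypothetical.
edges = {('A', 'o'): 2.0, ('B', 'o'): 3.0, ('P', 'o'): 4.0}
internal = ['o']

def cut_weight(side):
    """Total weight of edges crossing the partition (side vs. rest)."""
    return sum(w for (u, v), w in edges.items() if (u in side) != (v in side))

def entropy(subsystem):
    """S(subsystem) = min-cut weight separating it from its complement,
    minimizing over which side each internal vertex joins."""
    best = float('inf')
    for k in range(len(internal) + 1):
        for extra in combinations(internal, k):
            best = min(best, cut_weight(set(subsystem) | set(extra)))
    return best

vec = {s: entropy(s) for s in [('A',), ('B',), ('A', 'B')]}
print(vec)  # S(A)=2.0, S(B)=3.0, S(AB)=min(2+3, 4)=4.0
```

Note that subadditivity holds automatically here: S(A) + S(B) = 5.0 >= S(AB) = 4.0, as expected for a min-cut prescription.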
Related papers
- Necessary and sufficient conditions for entropy vector realizability by holographic simple tree graph models [0.0]
We prove that the algorithm introduced in arXiv:2512.18702 for constructing a simple tree graph model realization of a given entropy vector always succeeds. We highlight how techniques originally developed in holography can provide broad insights into entanglement and information theory. The essential data encoding the structure of the holographic entropy cone for an arbitrary number of parties is the set of ``chordal'' extreme rays of the subadditivity cone.
arXiv Detail & Related papers (2025-12-30T22:19:15Z)
- What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding [67.59552859593985]
Graph Transformers, which incorporate self-attention and positional encoding, have emerged as a powerful architecture for various graph learning tasks.
This paper introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised classification.
arXiv Detail & Related papers (2024-06-04T05:30:16Z)
- Beyond the Holographic Entropy Cone via Cycle Flows [0.0]
We introduce a new prescription for computing entropy vectors outside the holographic entropy cone.
We prove that the maximum cycle flow obeys both subadditivity and strong subadditivity.
We conjecture that our model similarly generalizes the entropy vectors arising from hypergraphs.
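Subadditivity and strong subadditivity are straightforward to verify numerically for a candidate entropy vector. The sketch below uses a hand-picked three-party vector (not taken from the paper) and checks strong subadditivity in its lattice form, S(I) + S(J) >= S(I ∪ J) + S(I ∩ J).

```python
# Hypothetical 3-party entropy vector (values chosen by hand, not from
# the paper), indexed by frozensets of party labels.
S = {frozenset(): 0.0,
     frozenset('A'): 1.0, frozenset('B'): 1.0, frozenset('C'): 1.0,
     frozenset('AB'): 2.0, frozenset('AC'): 2.0, frozenset('BC'): 2.0,
     frozenset('ABC'): 1.0}

def subadditive(S):
    # S(X) + S(Y) >= S(XY) for all disjoint non-empty subsystems X, Y
    keys = [k for k in S if k]
    return all(S[x] + S[y] >= S[x | y] - 1e-9
               for x in keys for y in keys
               if not (x & y) and (x | y) in S)

def strongly_subadditive(S):
    # Lattice form: S(I) + S(J) >= S(I u J) + S(I n J) for all subsystems
    return all(S[i] + S[j] >= S[i | j] + S[i & j] - 1e-9
               for i in S for j in S)

print(subadditive(S), strongly_subadditive(S))  # True True
```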
arXiv Detail & Related papers (2023-12-15T19:00:00Z)
- Sparse Training of Discrete Diffusion Models for Graph Generation [45.103518022696996]
We introduce SparseDiff, a novel diffusion model based on the observation that almost all large graphs are sparse.
By selecting a subset of edges, SparseDiff effectively leverages sparse graph representations both during the noising process and within the denoising network.
Our model demonstrates state-of-the-art performance across multiple metrics on both small and large datasets.
arXiv Detail & Related papers (2023-11-03T16:50:26Z)
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Scalable Bayesian Structure Learning for Gaussian Graphical Models Using Marginal Pseudo-likelihood [2.312692134587988]
We develop continuous-time (birth-death) and discrete-time (reversible jump) Markov chain Monte Carlo (MCMC) algorithms that efficiently explore the posterior over graph space. The algorithms scale to large graph spaces, enabling parallel exploration for graphs with over 1,000 nodes.
arXiv Detail & Related papers (2023-06-30T20:37:40Z)
- Discrete Graph Auto-Encoder [52.50288418639075]
We introduce a new framework named Discrete Graph Auto-Encoder (DGAE).
We first use a permutation-equivariant auto-encoder to convert graphs into sets of discrete latent node representations.
In the second step, we sort the sets of discrete latent representations and learn their distribution with a specifically designed auto-regressive model.
arXiv Detail & Related papers (2023-06-13T12:40:39Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it completes the edge features conditionally on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Learning Linear Non-Gaussian Polytree Models [2.4493299476776778]
We propose new algorithms to efficiently learn graphs that are polytrees.
Our approach combines the Chow--Liu algorithm, which first learns the undirected tree structure, with novel schemes to orient the edges.
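The Chow--Liu step mentioned above, learning an undirected tree skeleton as a maximum-weight spanning tree over pairwise mutual information, can be sketched as follows. The Gaussian toy data and closed-form MI estimator are illustrative assumptions, not the paper's full method (which additionally orients the edges).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear chain X0 -> X1 -> X2, so the undirected skeleton the
# Chow-Liu step should recover is 0-1-2.
n = 5000
x0 = rng.normal(size=n)
x1 = 0.9 * x0 + 0.3 * rng.normal(size=n)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=n)
X = np.stack([x0, x1, x2], axis=1)

def gaussian_mi(a, b):
    # Mutual information of a bivariate Gaussian: -0.5 * log(1 - rho^2)
    rho = np.corrcoef(a, b)[0, 1]
    return -0.5 * np.log(1.0 - rho**2)

d = X.shape[1]
weights = sorted(((gaussian_mi(X[:, i], X[:, j]), i, j)
                  for i in range(d) for j in range(i + 1, d)), reverse=True)

# Kruskal's algorithm: greedily add the highest-MI edges without cycles.
parent = list(range(d))
def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

tree = []
for w, i, j in weights:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j))

print(sorted(tree))  # recovers the chain skeleton: [(0, 1), (1, 2)]
```

The direct 0-2 correlation is the product of the two edge correlations, so its mutual information is strictly smaller and Kruskal's greedy selection discards that edge.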
arXiv Detail & Related papers (2022-08-13T18:20:10Z)
- Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z)
- Regularization of Mixture Models for Robust Principal Graph Learning [0.0]
A regularized version of Mixture Models is proposed to learn a principal graph from a distribution of $D$-dimensional data points.
Parameters of the model are iteratively estimated through an Expectation-Maximization procedure.
arXiv Detail & Related papers (2021-06-16T18:00:02Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of ``resolution dilemmas''.
We propose ``SLIM'', an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.