Simultaneous variances of Pauli strings, weighted independence numbers, and a new kind of perfection of graphs
- URL: http://arxiv.org/abs/2511.13531v1
- Date: Mon, 17 Nov 2025 16:05:28 GMT
- Title: Simultaneous variances of Pauli strings, weighted independence numbers, and a new kind of perfection of graphs
- Authors: Zhen-Peng Xu, Jie Wang, Qi Ye, Gereon Koßmann, René Schwonnek, Andreas Winter
- Abstract summary: A graph that encodes the commutativity structure of a set of Pauli strings, i.e., its frustration graph, provides a natural interface between graph theory and quantum information. We call this class $\hbar$-perfect, as it extends the class of perfect and $h$-perfect graphs. We find efficient schemes for entanglement detection, a connection to the complexity of shadow tomography, tight uncertainty relations, and a construction for computing good lower bounds on ground state energies.
- Score: 13.60873698530933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A set of Pauli strings is well characterized by the graph that encodes its commutativity structure, i.e., by its frustration graph. This graph provides a natural interface between graph theory and quantum information, which we explore in this work. We investigate all aspects of this interface for a special class of graphs that bears tight connections between the ground-state structure of a spin system and the topological structure of a graph. We call this class $\hbar$-perfect, as it extends the class of perfect and $h$-perfect graphs. Having an $\hbar$-perfect graph opens up several applications: we find efficient schemes for entanglement detection, a connection to the complexity of shadow tomography, tight uncertainty relations, and a construction for computing good lower bounds on ground state energies. Conversely, this also induces quantum algorithms for computing the independence number. Although those algorithms do not immediately promise an advantage in runtime, we show that an approximate Hamiltonian encoding of the independence number can be achieved with a number of qubits that typically scales logarithmically in the number of vertices. We also determine the behavior of $\hbar$-perfectness under basic graph operations and evaluate its prevalence among all graphs.
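To make the central object concrete: the frustration graph of a set of Pauli strings puts an edge between every anticommuting pair, and two Pauli strings anticommute iff they differ on an odd number of sites where both act non-trivially. The sketch below (helper names are our own, not from the paper) builds this graph for a small example.

```python
from itertools import combinations

def anticommute(p: str, q: str) -> bool:
    """Two Pauli strings anticommute iff they differ on an odd number
    of positions where both act non-trivially (i.e., neither is 'I')."""
    diff = sum(1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b)
    return diff % 2 == 1

def frustration_graph(paulis):
    """Edge list of the frustration graph: edges join anticommuting pairs."""
    return [(p, q) for p, q in combinations(paulis, 2) if anticommute(p, q)]

# XX and ZZ differ on two non-identity sites, so they commute;
# every other pair below differs on exactly one, so they anticommute.
paulis = ["XX", "ZZ", "XZ", "IY"]
print(frustration_graph(paulis))
# → [('XX', 'XZ'), ('XX', 'IY'), ('ZZ', 'XZ'), ('ZZ', 'IY'), ('XZ', 'IY')]
```

An independent set in this graph is then a mutually commuting subset of the Pauli strings, which is the bridge to independence numbers that the abstract exploits.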
Related papers
- SWING: Unlocking Implicit Graph Representations for Graph Random Features [57.956136773668476]
We propose SWING: Space Walks for Implicit Network Graphs, a new class of algorithms for computations involving Graph Random Features on graphs. We provide a detailed analysis of SWING and complement it with thorough experiments on different classes of i-graphs.
arXiv Detail & Related papers (2026-02-13T08:12:38Z) - Multi-View Graph Learning with Graph-Tuple [13.991938982402807]
We introduce a multi-view graph-tuple framework for efficient Graph Neural Networks (GNNs). Instead of a single graph, our graph-tuple framework partitions the graph into disjoint subgraphs, capturing primary local interactions and weaker, long-range connections. We instantiate our framework on two scientific domains: molecular property prediction from feature-scarce Coulomb matrices and cosmological parameter inference from geometric point clouds.
arXiv Detail & Related papers (2025-10-11T20:57:03Z) - Data-Adaptive Graph Framelets with Generalized Vanishing Moments for Graph Machine Learning [4.498623132806496]
We construct tight framelet systems on graphs with localized supports based on partition trees. We exploit the generality of our proposed graph framelet systems for heterophilous graph learning. We derive a specific system of graph framelets and propose a method to select framelets as features for neural network input.
arXiv Detail & Related papers (2023-09-07T07:49:43Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z) - Learning Graphon Autoencoders for Generative Graph Modeling [91.32624399902755]
Graphon is a nonparametric model that generates graphs with arbitrary sizes and can be induced from graphs easily.
We propose a novel framework called graphon autoencoder to build an interpretable and scalable graph generative model.
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons.
arXiv Detail & Related papers (2021-05-29T08:11:40Z) - A Thorough View of Exact Inference in Graphs from the Degree-4 Sum-of-Squares Hierarchy [37.34153902687548]
We tackle the problem of exactly recovering an unknown ground-truth binary labeling of the nodes from a single corrupted observation of each edge.
We apply a hierarchy of relaxations known as the sum-of-squares hierarchy, to the problem.
We show that the solution of the dual of the relaxed problem is related to finding edge weights of the Johnson and Kneser graphs.
arXiv Detail & Related papers (2021-02-16T08:36:19Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z) - Faster Graph Embeddings via Coarsening [25.37181684580123]
Graph embeddings are a ubiquitous tool for machine learning tasks, such as node classification and link prediction, on graph-structured data.
However, computing the embeddings for large-scale graphs is prohibitively inefficient, even if we are interested only in a small subset of relevant vertices.
We present an efficient graph coarsening approach, based on Schur complements, for computing the embedding of the relevant vertices.
arXiv Detail & Related papers (2020-07-06T15:22:25Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.