Asymptotic robustness of entanglement in noisy quantum networks and graph connectivity
- URL: http://arxiv.org/abs/2411.12548v1
- Date: Tue, 19 Nov 2024 15:01:50 GMT
- Title: Asymptotic robustness of entanglement in noisy quantum networks and graph connectivity
- Authors: Fernando Lledó, Carlos Palazuelos, Julio I. de Vicente
- Abstract summary: We show that when the links are noisy, two drastically different behaviors can occur regarding the global entanglement properties of the network.
While in certain configurations the network displays genuine multipartite entanglement (GME) for any system size provided the noise level is below a certain threshold, in others GME is washed out, for any fixed non-zero level of noise, once the system size is large enough.
- Score: 46.44827993583994
- Abstract: Quantum networks are promising venues for quantum information processing. This motivates the study of the entanglement properties of the particular multipartite quantum states that underpin these structures. In particular, it has recently been shown that when the links are noisy, two drastically different behaviors can occur regarding the global entanglement properties of the network. While in certain configurations the network displays genuine multipartite entanglement (GME) for any system size provided the noise level is below a certain threshold, in others GME is washed out, for any fixed non-zero level of noise, once the system size is large enough. However, this difference has only been established for the two extreme cases of maximally and minimally connected networks (i.e. complete graphs versus trees, respectively). In this article we investigate this question in much more depth and relate this behavior to the growth of several graph-theoretic parameters that measure the connectivity of the graph sequence that encodes the structure of the network as the number of parties increases. The strongest conditions are obtained when considering the degree growth. Our main results are that sufficiently fast degree growth (i.e. $\Omega(N)$, where $N$ is the size of the network) is sufficient for asymptotic robustness of GME, while sufficiently slow degree growth (i.e. $o(\log N)$) renders the network asymptotically biseparable. We also present several explicit constructions related to the optimality of these results.
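To make the two regimes concrete, the following minimal sketch (illustrative only; the use of networkx and the particular graph families are assumptions here, not code from the paper) contrasts a sequence of complete graphs, whose minimum degree $N-1$ grows as $\Omega(N)$, with a sequence of paths (trees), whose minimum degree is constant and hence $o(\log N)$:

```python
import math
import networkx as nx

def min_degree(graph):
    """Minimum degree -- one of the connectivity parameters the paper tracks."""
    return min(d for _, d in graph.degree())

# Two graph sequences: maximally connected vs. a tree.
for N in (8, 64, 512):
    complete = nx.complete_graph(N)  # min degree N - 1: Omega(N) growth
    path = nx.path_graph(N)          # a tree, min degree 1: o(log N) growth
    print(f"N={N:4d}  complete graph: {min_degree(complete):4d}  "
          f"path graph: {min_degree(path)}  (log N ~ {math.log(N):.1f})")
```

By the paper's results, the first sequence retains GME for every $N$ whenever the noise is below a threshold, while the second becomes biseparable at any fixed non-zero noise level once $N$ is large enough.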
Related papers
- Sharp Bounds for Poly-GNNs and the Effect of Graph Noise [12.108529628556944]
We investigate the classification performance of graph neural networks with graph-polynomial features, poly-GNNs.
Our analysis highlights and quantifies the impact of "graph noise" in deep GNNs.
Our analysis also reveals subtle differences between even and odd-layered GNNs in how the feature noise propagates.
arXiv Detail & Related papers (2024-07-28T19:23:56Z)
- Understanding Heterophily for Graph Neural Networks [42.640057865981156]
We present a theoretical understanding of the impact of different heterophily patterns on Graph Neural Networks (GNNs).
We show that the separability gains are determined by the normalized distance of the $l$-powered neighborhood distributions (see the sketch below).
Experiments on both synthetic and real-world data verify the effectiveness of our theory.
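As a purely illustrative reading of that quantity (the toy graph, labels, and row normalization below are assumptions, not the paper's exact definitions), one can row-normalize the adjacency matrix, raise it to the $l$-th power, and measure the distance between class-conditional mean neighborhood distributions:

```python
import numpy as np

# Toy 4-node graph with two classes; definitions chosen for illustration.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
labels = np.array([0, 0, 1, 1])

P = A / A.sum(axis=1, keepdims=True)    # row-normalized adjacency

for l in (1, 2, 3):
    Pl = np.linalg.matrix_power(P, l)   # l-powered neighborhood distributions
    mu0 = Pl[labels == 0].mean(axis=0)  # class-0 mean distribution
    mu1 = Pl[labels == 1].mean(axis=0)  # class-1 mean distribution
    print(f"l={l}: distance between class distributions = "
          f"{np.linalg.norm(mu0 - mu1):.3f}")
```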
arXiv Detail & Related papers (2024-01-17T11:01:28Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (a sketch of the standard operator follows below).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
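For context, here is a minimal sketch of the standard (non-kernelized) Gumbel-Softmax operator; NodeFormer's kernelized variant is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Standard Gumbel-Softmax: a differentiable relaxation of categorical
    sampling. NodeFormer kernelizes this step; that variant is not shown."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())   # numerically stable softmax
    return y / y.sum()

# Soft, nearly one-hot weights over five candidate edges.
print(gumbel_softmax(np.array([2.0, 1.0, 0.5, 0.1, -1.0])))
```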
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- On the quantum simulation of complex networks [0.0]
Continuous-time quantum walk algorithms assume that we can simulate the dynamics of quantum systems whose Hamiltonian is given by the adjacency matrix of the graph (see the sketch below).
We extend the state-of-the-art results on quantum simulation to graphs that contain a small number of hubs, but that are otherwise sparse.
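A toy dense simulation of such a walk (illustrative only; this uses scipy's generic matrix exponential, not the paper's efficient method):

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time quantum walk on a 4-cycle: the Hamiltonian is the adjacency
# matrix A, and the state evolves as |psi(t)> = exp(-i A t) |psi(0)>.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0  # walker starts at vertex 0

for t in (0.5, 1.0, 2.0):
    psi_t = expm(-1j * t * A) @ psi0  # unitary evolution under H = A
    print(f"t={t}: site probabilities {np.round(np.abs(psi_t)**2, 3)}")
```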
arXiv Detail & Related papers (2022-12-12T18:55:31Z)
- Recovering the Graph Underlying Networked Dynamical Systems under Partial Observability: A Deep Learning Approach [7.209528581296429]
We study the problem of graph structure identification, i.e., of recovering the graph of dependencies among time series.
We devise a new feature vector computed from the observed time series and prove that these features are linearly separable.
We use these features to train Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2022-08-08T20:32:28Z)
- Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) have powerful capability to embed rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth and detailed study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN).
arXiv Detail & Related papers (2022-07-06T10:01:46Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence between the NTKs of a fully-connected neural network and its randomly pruned version.
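For reference, a minimal sketch of the empirical NTK itself, $\Theta(x, x') = \langle \nabla_\theta f(x), \nabla_\theta f(x') \rangle$, for a toy one-hidden-layer network (the architecture and widths are arbitrary choices here, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical NTK Theta(x, x') = <grad_theta f(x), grad_theta f(x')> for a toy
# one-hidden-layer network f(x) = v . tanh(W x); gradients written by hand.
d, width = 3, 256
W = rng.normal(size=(width, d)) / np.sqrt(d)
v = rng.normal(size=width) / np.sqrt(width)

def param_grads(x):
    h = np.tanh(W @ x)
    dv = h                            # df/dv
    dW = np.outer(v * (1 - h**2), x)  # df/dW via the chain rule
    return np.concatenate([dW.ravel(), dv])

x1, x2 = rng.normal(size=d), rng.normal(size=d)
print("Theta(x1, x2) =", param_grads(x1) @ param_grads(x2))
```

The paper's result concerns how this kernel behaves when the weights are randomly pruned.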
arXiv Detail & Related papers (2022-03-27T15:22:19Z)
- Emergent complex quantum networks in continuous-variables non-Gaussian states [0.0]
We study a class of continuous-variable quantum states that present both multipartite entanglement and non-Gaussian statistics.
In particular, the states are built from an initial imprinted cluster state created via Gaussian entangling operations.
arXiv Detail & Related papers (2020-12-31T13:58:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.