Lower bounds on bipartite entanglement in noisy graph states
- URL: http://arxiv.org/abs/2404.09014v1
- Date: Sat, 13 Apr 2024 14:01:45 GMT
- Title: Lower bounds on bipartite entanglement in noisy graph states
- Authors: Aqil Sajjad, Eneet Kaur, Kenneth Goodenough, Don Towsley, Saikat Guha
- Abstract summary: We consider a noise model where the initial qubits undergo depolarizing noise before the application of the CZ operations.
We find a family of graph states that maintain a strictly positive coherent information for any amount of (non-maximal) depolarizing noise.
- Score: 8.59730790789283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph states are a key resource for a number of applications in quantum information theory. Due to the inherent noise in noisy intermediate-scale quantum (NISQ) era devices, it is important to understand the effects noise has on the usefulness of graph states. We consider a noise model where the initial qubits undergo depolarizing noise before the application of the CZ operations that generate edges between qubits situated at the nodes of the resulting graph state. For this model we develop a method for calculating the coherent information -- a lower bound on the rate at which entanglement can be distilled -- across a bipartition of the graph state. We also identify some patterns in how adding more nodes or edges affects the bipartite distillable entanglement. As an application, we find a family of graph states that maintains a strictly positive coherent information for any amount of (non-maximal) depolarizing noise.
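To make the quantity concrete, here is a minimal NumPy sketch (not the authors' code) that follows the noise model stated above: each qubit starts in |+>, is passed through a single-qubit depolarizing channel with parameter p, CZ gates are applied along the graph edges, and the coherent information I(A>B) = S(rho_B) - S(rho_AB) is evaluated across a chosen bipartition. The graph, bipartition, and value of p below are illustrative choices.

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: rho -> (1 - p) * rho + p * I / 2."""
    return (1 - p) * rho + p * np.eye(2) / 2

def cz_unitary(n, i, j):
    """Controlled-Z between qubits i and j of an n-qubit register (qubit 0 = most significant bit)."""
    U = np.eye(2 ** n, dtype=complex)
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            U[b, b] = -1.0
    return U

def partial_trace(rho, keep, n):
    """Reduced density matrix on the qubits listed in `keep`."""
    t = rho.reshape([2] * (2 * n))
    cur = n
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + cur)
        cur -= 1
    d = 2 ** len(keep)
    return t.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

def noisy_graph_state(n, edges, p):
    """Depolarize each |+> qubit, then apply CZ along every edge of the graph."""
    plus = np.full((2, 2), 0.5, dtype=complex)   # |+><+|
    rho = np.array([[1.0 + 0j]])
    for _ in range(n):
        rho = np.kron(rho, depolarize(plus, p))
    for i, j in edges:
        U = cz_unitary(n, i, j)
        rho = U @ rho @ U.conj().T
    return rho

def coherent_information(rho, part_b, n):
    """I(A>B) = S(rho_B) - S(rho_AB): hashing-type lower bound on distillable entanglement."""
    return entropy(partial_trace(rho, part_b, n)) - entropy(rho)

# Example: 3-qubit path graph 0-1-2, bipartition A = {0} vs B = {1, 2}, p = 0.05.
rho = noisy_graph_state(3, [(0, 1), (1, 2)], p=0.05)
print(coherent_information(rho, part_b=[1, 2], n=3))
```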
Related papers
- Random Geometric Graph Alignment with Graph Neural Networks [8.08963638000146]
We show that a graph neural network can recover an unknown one-to-one mapping between the vertices of two graphs.
We also prove that our conditions on the noise level are tight up to logarithmic factors.
We demonstrate that when the noise level is at least constant this direct matching fails to have perfect recovery while the graph neural network can tolerate noise level growing as fast as a power of the size of the graph.
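As a hedged illustration of the "direct matching" baseline contrasted with the graph neural network above (my own toy construction, not the paper's setup; requires SciPy), the sketch below aligns the vertices of two noisy observations of the same random geometric positions by solving a linear assignment problem on pairwise feature distances.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, sigma = 50, 0.05          # illustrative: number of vertices and feature-noise level

# Shared latent positions; each graph observes an independently perturbed copy.
positions = rng.uniform(size=(n, 2))
features_a = positions + sigma * rng.standard_normal((n, 2))
features_b = positions + sigma * rng.standard_normal((n, 2))

# Direct matching: one-to-one assignment minimizing total feature distance.
cost = np.linalg.norm(features_a[:, None, :] - features_b[None, :, :], axis=-1)
row, col = linear_sum_assignment(cost)
print("fraction of vertices aligned correctly:", np.mean(col == row))
```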
arXiv Detail & Related papers (2024-02-12T00:18:25Z)
- Useful entanglement can be extracted from noisy graph states [0.0]
Cluster states, and graph states more generally, are usefully described within the stabilizer formalism.
We leverage both properties to design feasible families of states that can be used as robust building blocks of quantum computation.
We show that robust entanglement can be extracted by proper design of the linear graph with only a minimal overhead of the physical qubits.
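Tying this back to the main paper's noise model, a short usage sketch (reusing the noisy_graph_state and coherent_information helpers defined in the first code block above; the graph size, cut, and noise values are illustrative) evaluates how the coherent information of a linear graph state across its middle cut degrades with depolarizing noise.

```python
# Reuses noisy_graph_state and coherent_information from the earlier sketch.
n = 4
edges = [(i, i + 1) for i in range(n - 1)]        # linear (path) graph
cut_b = list(range(n // 2, n))                    # bipartition across the middle
for p in (0.0, 0.05, 0.1, 0.2):
    rho = noisy_graph_state(n, edges, p)
    print(f"p = {p:.2f}: coherent information = {coherent_information(rho, cut_b, n):.4f}")
```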
arXiv Detail & Related papers (2024-02-01T19:00:05Z)
- Causal Layering via Conditional Entropy [85.01590667411956]
Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates.
We provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle.
arXiv Detail & Related papers (2024-01-19T05:18:28Z)
- Root Cause Explanation of Outliers under Noisy Mechanisms [50.59446568076628]
Causal processes are often modelled as graphs, with entities as nodes and their paths/interconnections as edges.
Existing work only considers the contribution of nodes in the generative process.
We consider both the individual edges and nodes of each mechanism when identifying the root causes.
arXiv Detail & Related papers (2023-12-19T03:24:26Z)
- Combating Bilateral Edge Noise for Robust Link Prediction [56.43882298843564]
We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs with diverse noisy scenarios verify the effectiveness of our RGIB instantiations.
arXiv Detail & Related papers (2023-11-02T12:47:49Z)
- Multipartite Entanglement Distribution in Quantum Networks using Subgraph Complementations [9.32782060570252]
We propose a novel approach for distributing graph states across a quantum network.
We show that the distribution of graph states can be characterized by a system of subgraph complementations.
We find a close to optimal sequence of subgraph complementation operations to distribute an arbitrary graph state.
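A subgraph complementation toggles every edge inside a chosen vertex subset while leaving the rest of the graph untouched. The sketch below is a minimal adjacency-matrix version of that single operation (not code from the paper, and not the distribution protocol itself).

```python
import numpy as np

def subgraph_complement(adj, subset):
    """Complement the induced subgraph on `subset`: edges inside the subset are
    removed, non-edges inside it are added; the diagonal and all edges with at
    least one endpoint outside the subset are left untouched."""
    adj = adj.copy()
    idx = np.array(sorted(subset))
    flipped = 1 - adj[np.ix_(idx, idx)]
    np.fill_diagonal(flipped, 0)                  # keep the graph simple (no self-loops)
    adj[np.ix_(idx, idx)] = flipped
    return adj

# Example: path 0-1-2-3; complementing {0, 1, 2} removes 0-1 and 1-2 and adds 0-2.
adj = np.zeros((4, 4), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
print(subgraph_complement(adj, {0, 1, 2}))
```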
arXiv Detail & Related papers (2023-08-25T23:03:25Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain.
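The basic transform referred to above -- treating a user's ratings as a signal on the vertices of an item-item graph and moving to the graph Fourier domain -- can be sketched with a Laplacian eigendecomposition. This is a generic graph-signal-processing illustration, not the paper's BGS-IMC implementation; the toy graph, signal, and cutoff frequency are assumptions.

```python
import numpy as np

# Toy item-item graph (adjacency matrix) and one user's ratings as a vertex signal.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
signal = np.array([1.0, 0.0, 1.0, 0.0])           # e.g. one-bit ratings on four items

# Graph Fourier basis: eigenvectors of the combinatorial Laplacian L = D - A.
laplacian = np.diag(adj.sum(axis=1)) - adj
eigvals, eigvecs = np.linalg.eigh(laplacian)

spectrum = eigvecs.T @ signal                      # graph Fourier transform of the ratings
low_pass = eigvals < 1.5                           # keep only smooth (low-frequency) components
smoothed = eigvecs @ (spectrum * low_pass)         # inverse transform after filtering
print(np.round(smoothed, 3))
```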
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- DiGress: Discrete Denoising diffusion for graph generation [79.13904438217592]
DiGress is a discrete denoising diffusion model for generating graphs with categorical node and edge attributes.
It achieves state-of-the-art performance on molecular and non-molecular datasets, with up to 3x validity improvement.
It is also the first model to scale to the large GuacaMol dataset containing 1.3M drug-like molecules.
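The forward (noising) half of a discrete denoising diffusion model can be sketched as repeatedly applying a categorical transition matrix to node labels. The uniform-transition matrix, flip probability, and label values below are illustrative assumptions, not DiGress's actual noise schedule or its graph-transformer denoiser.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, beta = 3, 0.1    # node-label categories and per-step resampling probability

# Uniform-transition noise: with probability beta, resample the label uniformly.
Q = (1 - beta) * np.eye(num_classes) + beta * np.ones((num_classes, num_classes)) / num_classes

def noise_step(labels, Q):
    """Sample x_t ~ Categorical(Q[x_{t-1}]) independently for each node."""
    probs = Q[labels]                      # one transition distribution per node
    u = rng.random((len(labels), 1))
    return (u > probs.cumsum(axis=1)).sum(axis=1)   # inverse-CDF sampling

labels = np.array([0, 0, 1, 2, 2])         # categorical node attributes of a small graph
for _ in range(5):                         # five forward noising steps
    labels = noise_step(labels, Q)
print(labels)
```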
arXiv Detail & Related papers (2022-09-29T12:55:03Z)
- How Powerful is Implicit Denoising in Graph Neural Networks [33.01155523195073]
We conduct a comprehensive theoretical study and analyze when and why the implicit denoising happens in GNNs.
Our theoretical analysis suggests that the implicit denoising largely depends on the connectivity, the graph size, and GNN architectures.
We derive a robust graph convolution, where the smoothness of the node representations and the implicit denoising effect can be enhanced.
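The implicit denoising discussed above comes from the smoothing built into message passing. Below is a sketch of one mean-aggregation propagation step (a common GNN smoothing rule used purely to illustrate the mechanism; it is not the robust graph convolution derived in the paper), showing how averaging over neighbourhoods suppresses i.i.d. feature noise.

```python
import numpy as np

def smooth(adj, features):
    """One mean-aggregation step X' = D^{-1} (A + I) X: each node averages its
    own and its neighbours' features, which averages out i.i.d. feature noise."""
    a_hat = adj + np.eye(adj.shape[0])
    return (a_hat / a_hat.sum(axis=1, keepdims=True)) @ features

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
clean = np.ones((4, 2))                            # identical ground-truth features
noisy = clean + 0.5 * rng.standard_normal((4, 2))  # observed noisy features
print("error before:", np.abs(noisy - clean).mean())
print("error after: ", np.abs(smooth(adj, noisy) - clean).mean())
```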
arXiv Detail & Related papers (2022-09-29T02:19:39Z)
- Reasoning Graph Networks for Kinship Verification: from Star-shaped to Hierarchical [85.0376670244522]
We investigate the problem of facial kinship verification by learning hierarchical reasoning graph networks.
We develop a Star-shaped Reasoning Graph Network (S-RGN) and a Hierarchical Reasoning Graph Network (H-RGN) to exploit more powerful and flexible reasoning capacity.
arXiv Detail & Related papers (2021-09-06T03:16:56Z)
- Learning Node Representations from Noisy Graph Structures [38.32421350245066]
Noise is prevalent in real-world networks and compromises them to a large extent.
We propose a novel framework to learn noise-free node representations and eliminate noises simultaneously.
arXiv Detail & Related papers (2020-12-04T07:18:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.