Locally distinguishing a maximally entangled basis using shared entanglement
- URL: http://arxiv.org/abs/2406.13430v2
- Date: Wed, 09 Oct 2024 06:16:22 GMT
- Title: Locally distinguishing a maximally entangled basis using shared entanglement
- Authors: Somshubhro Bandyopadhyay, Vincent Russo
- Abstract summary: We derive an exact formula for the optimum success probability for distinguishing the elements of a bipartite maximally entangled orthonormal basis.
We also present lower and upper bounds on the success probability for distinguishing the elements of an incomplete orthonormal maximally entangled basis.
- Score: 15.699822139827916
- License:
- Abstract: We consider the problem of distinguishing between the elements of a bipartite maximally entangled orthonormal basis using local operations and classical communication (LOCC) and a partially entangled state acting as a resource. We derive an exact formula for the optimum success probability and find that it corresponds to the fully entangled fraction of the resource state. The derivation consists of two steps: First, we consider a relaxation of the problem by replacing LOCC with positive-partial-transpose (PPT) measurements and establish an upper bound on the success probability as the solution of a semidefinite program, and then show that this upper bound is achieved by a teleportation-based LOCC protocol. This further implies that separable and PPT measurements provide no advantage over LOCC for this task. We also present lower and upper bounds on the success probability for distinguishing the elements of an incomplete orthonormal maximally entangled basis in the same setup.
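The derivation sketched in the abstract (a PPT relaxation solved as a semidefinite program) can be illustrated numerically. The following is a minimal, hypothetical sketch under stated assumptions rather than the authors' code: it discriminates the two-qubit Bell basis aided by a pure partially entangled resource with made-up Schmidt coefficients, and it assumes numpy plus a cvxpy version that provides cp.partial_transpose and ships an SDP-capable solver. Per the abstract, the SDP optimum should coincide with the fully entangled fraction of the resource, which for a pure resource state is given by the standard expression (sum_i sqrt(lambda_i))^2 / d.

```python
# Minimal numerical sketch (an assumption-laden illustration, not the authors' code)
# of the PPT relaxation described in the abstract, for the two-qubit Bell basis and
# a pure partially entangled resource state.  Requires numpy and a cvxpy version
# that provides cp.partial_transpose, plus an SDP-capable solver.
import numpy as np
import cvxpy as cp

d = 2                                  # local dimension of the basis states
lam = np.array([0.8, 0.2])             # hypothetical Schmidt coefficients of the resource

# The four Bell states (a maximally entangled orthonormal basis on C^2 x C^2).
bell = [
    np.array([1, 0, 0, 1]) / np.sqrt(2),
    np.array([1, 0, 0, -1]) / np.sqrt(2),
    np.array([0, 1, 1, 0]) / np.sqrt(2),
    np.array([0, 1, -1, 0]) / np.sqrt(2),
]

# Partially entangled pure resource |psi_R> = sum_i sqrt(lam_i) |ii> on A'B'.
psi_R = np.zeros(d * d)
for i in range(d):
    psi_R[i * d + i] = np.sqrt(lam[i])

def order_alice_bob(vec_AB, vec_ApBp):
    """Tensor |x>_{AB} with |y>_{A'B'} and reorder systems as (A A' | B B')."""
    t = np.outer(vec_AB, vec_ApBp).reshape(d, d, d, d)  # indices (a, b, a', b')
    return t.transpose(0, 2, 1, 3).reshape(-1)          # -> (a, a', b, b')

# States to be discriminated: each basis element tensored with the resource.
rhos = [np.outer(v, v) for v in (order_alice_bob(b, psi_R) for b in bell)]

dim = (d * d) ** 2                     # Alice holds AA', Bob holds BB' (4 x 4 = 16)
M = [cp.Variable((dim, dim), hermitian=True) for _ in rhos]

# PPT-relaxed discrimination SDP:
#   maximize (1/N) sum_i Tr(M_i rho_i)
#   s.t.     M_i >= 0,  sum_i M_i = I,  and each M_i remains positive under
#            partial transposition across the Alice|Bob cut.
constraints = [sum(M) == np.eye(dim)]
for Mi in M:
    constraints.append(Mi >> 0)
    constraints.append(cp.partial_transpose(Mi, dims=[d * d, d * d], axis=1) >> 0)

objective = cp.Maximize(
    cp.real(sum(cp.trace(Mi @ rho) for Mi, rho in zip(M, rhos))) / len(rhos)
)
p_ppt = cp.Problem(objective, constraints).solve()

# Per the abstract, the LOCC optimum equals the fully entangled fraction of the
# resource; for a pure resource with Schmidt coefficients lam this is the
# standard expression (sum_i sqrt(lam_i))^2 / d.
f_ent = np.sum(np.sqrt(lam)) ** 2 / d
print(p_ppt, f_ent)                    # expected to agree up to solver tolerance
```

The reordering step places both of Alice's systems (A A') before both of Bob's (B B'), so the partial-transpose constraint acts across the cut that actually separates the two parties in the LOCC task.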
Related papers
- Optimal Second-Order Rates for Quantum Information Decoupling [14.932939960009605]
We consider the standard quantum information decoupling task, in which Alice aims to decouple her system from the environment by local operations and by discarding some of her systems.
We also find an achievability bound for entanglement distillation, where the objective is for Alice and Bob to transform their quantum state into a maximally entangled state of the largest possible dimension.
arXiv Detail & Related papers (2024-03-21T12:06:30Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Typical bipartite steerability and generalized local quantum measurements [0.0]
Recently proposed correlation-matrix-based sufficient conditions for bipartite steerability from Alice to Bob are applied.
It is shown that these sufficient conditions exhibit a peculiar scaling property.
Results are compared with a recently proposed method which reduces the determination of bipartite steerability from Alice's qubit to Bob's arbitrary-dimensional quantum system.
arXiv Detail & Related papers (2023-05-29T09:48:12Z) - Provable Offline Preference-Based Reinforcement Learning [95.00042541409901]
We investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback.
We consider the general reward setting where the reward can be defined over the whole trajectory.
We introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability.
arXiv Detail & Related papers (2023-05-24T07:11:26Z) - Coherence measure of ensembles with nonlocality without entanglement [0.0]
We introduce a measure, based on the l1 norm and the relative entropy of coherence, whose lower values capture more quantumness in ensembles.
In particular, it reaches its maximum for product ensembles that are two-way distinguishable in the minimum number of rounds.
arXiv Detail & Related papers (2021-12-08T17:41:27Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - Exact Recovery in the General Hypergraph Stochastic Block Model [92.28929858529679]
This paper investigates the fundamental limits of exact recovery in the general d-uniform hypergraph stochastic block model (d-HSBM).
We show that there exists a sharp threshold such that exact recovery is achievable above the threshold and impossible below it.
arXiv Detail & Related papers (2021-05-11T03:39:08Z) - On the Optimality of the Oja's Algorithm for Online PCA [1.3934770330948278]
It is proved that, with high probability, the algorithm achieves an efficient, gap-free, global convergence rate for approximating a principal component subspace of any sub-Gaussian distribution.
This is the first result showing that the convergence rate, namely the upper bound on the approximation error, exactly matches the lower bound for approximation by offline/classical PCA up to a constant factor.
arXiv Detail & Related papers (2021-03-31T15:02:54Z) - Discrimination of quantum states under locality constraints in the many-copy setting [18.79968161594709]
We prove that the optimal average error probability always decays exponentially in the number of copies.
We show an infinite separation between separable (SEP) and PPT operations by providing a pair of states constructed from an unextendible product basis (UPB).
On the technical side, we prove this result by providing a quantitative version of the well-known statement that the tensor product of UPBs is a UPB.
arXiv Detail & Related papers (2020-11-25T23:26:33Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.