A class of Bell diagonal entanglement witnesses in $\mathbb{C}^4 \otimes
\mathbb{C}^4$: optimization and the spanning property
- URL: http://arxiv.org/abs/2112.15183v1
- Date: Thu, 30 Dec 2021 19:22:58 GMT
- Title: A class of Bell diagonal entanglement witnesses in $\mathbb{C}^4 \otimes
\mathbb{C}^4$: optimization and the spanning property
- Authors: Anindita Bera, Filip A. Wudarski, Gniewomir Sarbicki, Dariusz
Chruściński
- Abstract summary: Two classes of Bell diagonal indecomposable entanglement witnesses in $\mathbb{C}^4 \otimes \mathbb{C}^4$ are considered.
Within the first class, we find a generalization of the well-known Choi witness from $\mathbb{C}^3 \otimes \mathbb{C}^3$, while the second one contains the reduction map.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Two classes of Bell diagonal indecomposable entanglement witnesses in
$\mathbb{C}^4 \otimes \mathbb{C}^4$ are considered. Within the first class, we
find a generalization of the well-known Choi witness from $\mathbb{C}^3 \otimes
\mathbb{C}^3$, while the second one contains the reduction map. Interestingly,
contrary to the $\mathbb{C}^3 \otimes \mathbb{C}^3$ case, the generalized Choi
witnesses are no longer optimal. We perform an optimization procedure based on
finding spanning vectors, which eventually gives rise to optimal witnesses.
Operators from the second class turn out to be optimal, however without the
spanning property. This analysis sheds new light on the intricate structure
of optimal entanglement witnesses.
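As a concrete illustration of how such a witness detects entanglement, here is a minimal NumPy sketch. It uses the standard reduction-map witness $W = \mathbb{1}/d - |\phi^+\rangle\langle\phi^+|$ in $\mathbb{C}^4 \otimes \mathbb{C}^4$ (a textbook example, not the Bell diagonal operators constructed in the paper): $\mathrm{Tr}(W\sigma) \ge 0$ for every separable $\sigma$, so a negative expectation value certifies entanglement.

```python
import numpy as np

d = 4
# Maximally entangled state |phi+> = (1/sqrt(d)) * sum_i |ii>
phi = np.zeros(d * d)
phi[::d + 1] = 1.0 / np.sqrt(d)
P = np.outer(phi, phi)

# Witness obtained from the reduction map: W = I/d - |phi+><phi+|
W = np.eye(d * d) / d - P

# Negative expectation value flags entanglement of |phi+>
val_ent = np.trace(W @ P)                # 1/d - 1 = -0.75
# A product state |0><0| x |1><1| stays non-negative
e0 = np.diag([1.0, 0, 0, 0])
e1 = np.diag([0, 1.0, 0, 0])
val_sep = np.trace(W @ np.kron(e0, e1))  # 1/d = 0.25
```

The same $\mathrm{Tr}(W\rho)$ test applies to any witness; optimality, as studied in the paper, concerns how tightly a given $W$ wraps the set of separable states.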
Related papers
- Fast UCB-type algorithms for stochastic bandits with heavy and super
heavy symmetric noise [45.60098988395789]
We propose a new approach to constructing UCB-type algorithms for multi-armed bandits.
We show that in the case of symmetric noise in the reward, we can achieve an $O(\log T\sqrt{KT\log T})$ regret bound instead of $O(\ldots)$.
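For orientation, a minimal sketch of the classical UCB1 rule on rewards with symmetric (Gaussian) noise; the paper's heavy-tailed construction differs, and the arm means below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 3, 5000
means = np.array([0.2, 0.5, 0.8])   # hypothetical arm means
counts = np.zeros(K)
sums = np.zeros(K)

for t in range(T):
    if t < K:
        a = t                        # play every arm once first
    else:
        # Empirical mean plus an exploration bonus (UCB1)
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = rng.normal(means[a], 1.0)    # symmetric reward noise
    counts[a] += 1
    sums[a] += r
```

After $T$ rounds the best arm dominates the pull counts, since each suboptimal arm is pulled only $O(\log T)$ times.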
arXiv Detail & Related papers (2024-02-10T22:38:21Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - A Newton-CG based barrier-augmented Lagrangian method for general
nonconvex conic optimization [77.8485863487028]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, minimizing a twice differentiable function subject to a convex conic constraint.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Mutually unbiased maximally entangled bases from difference matrices [0.0]
Based on maximally entangled states, we explore the constructions of mutually unbiased bases in bipartite quantum systems.
We establish $q$ mutually unbiased bases with $q-1$ maximally entangled bases and one product basis in $\mathbb{C}^q \otimes \mathbb{C}^q$ for arbitrary prime power $q$.
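The defining property behind such constructions is easy to check numerically: two orthonormal bases $\{e_i\}, \{f_j\}$ of $\mathbb{C}^q$ are mutually unbiased when $|\langle e_i|f_j\rangle|^2 = 1/q$ for all $i, j$. A minimal sketch with the computational and Fourier bases (a standard unbiased pair, not the paper's maximally entangled construction):

```python
import numpy as np

q = 5  # toy prime dimension
idx = np.arange(q)
# Columns of F form the Fourier basis; columns of E the computational basis.
F = np.exp(2j * np.pi * np.outer(idx, idx) / q) / np.sqrt(q)
E = np.eye(q)

# Every cross-overlap has squared modulus exactly 1/q
overlaps = np.abs(E.conj().T @ F) ** 2
unbiased = np.allclose(overlaps, 1.0 / q)   # True
```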
arXiv Detail & Related papers (2022-10-04T10:45:22Z) - Novel Constructions of Mutually Unbiased Tripartite Absolutely Maximally
Entangled Bases [1.8065361710947974]
We first explore the tripartite absolutely maximally entangled bases and mutually unbiased bases in $\mathbb{C}^d \otimes \mathbb{C}^d$.
We then generalize the approach to the case of $\mathbb{C}^{d_1} \otimes \mathbb{C}^{d_2} \otimes \mathbb{C}^{d_1 d_2}$ by mutually weak Latin squares.
The concise direct constructions of mutually unbiased tripartite absolutely maximally entangled bases are
arXiv Detail & Related papers (2022-09-18T03:42:20Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\epsilon$ with high probability.
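A minimal sketch of the underlying setup: gradient descent on a single ReLU neuron with benign Gaussian label noise. The paper's adversarial-noise guarantees are far stronger, and every quantity below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
w_star = rng.normal(size=d)                  # hypothetical target neuron
X = rng.normal(size=(n, d))
y = np.maximum(X @ w_star, 0.0) + 0.1 * rng.normal(size=n)

def loss(w):
    return np.mean((np.maximum(X @ w, 0.0) - y) ** 2)

w = 0.1 * rng.normal(size=d)                 # small random init avoids a dead ReLU
loss0 = loss(w)
for _ in range(500):
    pred = np.maximum(X @ w, 0.0)
    # Gradient of the squared loss through the ReLU (up to a constant factor)
    grad = X.T @ ((pred - y) * (X @ w > 0)) / n
    w -= 0.1 * grad
```

With clean (non-adversarial) noise, plain gradient descent drives the empirical loss down toward the noise floor.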
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Classification of four qubit states and their stabilisers under SLOCC
operations [0.0]
We classify the orbits of the group $\mathrm{SL}(2,\mathbb{C})^{4}$ on the Hilbert space $\mathcal{H}_4$.
We also present a complete and irredundant classification of elements and stabilisers up to the action of $\mathrm{Sym}_4 \ltimes \mathrm{SL}(2,\mathbb{C})^{4}$.
arXiv Detail & Related papers (2021-11-10T02:20:52Z) - On a class of $k$-entanglement witnesses [0.0]
Recently, Yang et al. showed that each 2-positive map acting from $\mathcal{M}_3(\mathbb{C})$ into itself is decomposable.
We construct positive maps between matrix algebras whose $k$-positivity properties can be easily controlled.
arXiv Detail & Related papers (2021-04-29T00:46:58Z) - On the state space structure of tripartite quantum systems [0.22741525908374005]
It has been shown that the set of states separable across all three bipartitions [say $\mathcal{B}^{int}(ABC)$] is a strict subset of the set of states having positive partial transposition (PPT) across the three bipartite cuts [say $\mathcal{P}^{int}(ABC)$].
The claim is proved by constructing a state belonging to the set $\mathcal{P}^{int}(ABC)$ but not to $\mathcal{B}^{int}(ABC)$.
arXiv Detail & Related papers (2021-04-14T16:06:58Z) - Signed Graph Metric Learning via Gershgorin Disc Perfect Alignment [46.145969174332485]
We propose a fast general metric learning framework that is entirely projection-free.
We replace the PD cone constraint in the metric learning problem with per-distance linear constraints.
Experiments show that our graph metric optimization is significantly faster than cone-projection schemes.
arXiv Detail & Related papers (2020-06-15T23:15:12Z) - Graph Metric Learning via Gershgorin Disc Alignment [46.145969174332485]
We propose a fast general projection-free metric learning framework, where the objective is minimized over metric matrices $\mathbf{M} \in \mathcal{S}$ and is a convex differentiable function of $\mathbf{M}$.
We prove that the Gershgorin discs can be aligned perfectly using the first eigenvector $\mathbf{v}$ of $\mathbf{M}$.
Experiments show that our efficiently computed graph metric matrices outperform metrics learned using competing methods in terms of classification tasks.
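The alignment claim can be sketched numerically. Assuming a generalized graph Laplacian $\mathbf{M}$ with nonpositive off-diagonals (a toy stand-in for the metric matrices in the paper), the similarity transform $\mathbf{S}\mathbf{M}\mathbf{S}^{-1}$ with $\mathbf{S} = \mathrm{diag}(1/v_1, \ldots, 1/v_n)$ built from the first eigenvector $\mathbf{v}$ pushes every Gershgorin disc left-end to $\lambda_{\min}(\mathbf{M})$:

```python
import numpy as np

# Toy generalized graph Laplacian: nonpositive off-diagonals,
# self-loops on the diagonal keep it positive definite.
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])
M = np.diag(W.sum(axis=1)) - W + np.diag([0.5, 1.0, 1.5])

lam, vecs = np.linalg.eigh(M)
lam_min = lam[0]
v = np.abs(vecs[:, 0])   # first eigenvector, entrywise positive by Perron-Frobenius

# Similarity transform B = S M S^{-1} with S = diag(1/v)
B = np.diag(1.0 / v) @ M @ np.diag(v)

# Gershgorin disc left-ends of B: B_ii minus the off-diagonal row sums of |B|
left_ends = np.diag(B) - np.sum(np.abs(B - np.diag(np.diag(B))), axis=1)
# All left-ends coincide with lambda_min(M): perfect alignment
```

The alignment follows because each row of $\mathbf{B}$ sums to $(\mathbf{M}\mathbf{v})_i / v_i = \lambda_{\min}$, and the off-diagonal entries are all negative.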
arXiv Detail & Related papers (2020-01-28T17:44:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.