Superior resilience of non-Gaussian entanglement against local Gaussian
noises
- URL: http://arxiv.org/abs/2212.14745v1
- Date: Fri, 30 Dec 2022 14:38:05 GMT
- Title: Superior resilience of non-Gaussian entanglement against local Gaussian
noises
- Authors: Sergey Filippov, Alena Termanova
- Abstract summary: We prove that specific non-Gaussian two-mode states remain entangled under the effect of deterministic local attenuation or amplification.
These results shift the ``Gaussian world'' paradigm in quantum information science.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The entanglement distribution task poses the problem of how the
initial entangled state should be prepared so that it remains entangled for
the longest possible time when subjected to local noises. In the realm of
continuous-variable states and local Gaussian channels it is tempting to assume
that the optimal initial state with the most robust entanglement is Gaussian
too; however, this is not the case. Here we prove that specific non-Gaussian
two-mode states remain entangled under the effect of deterministic local
attenuation or amplification (Gaussian channels with the attenuation
factor/power gain $\kappa_i$ and the noise parameter $\mu_i$ for modes $i=1,2$)
whenever $\kappa_1 \mu_2^2 + \kappa_2 \mu_1^2 < \frac{1}{4}(\kappa_1 +
\kappa_2) (1 + \kappa_1 \kappa_2)$, a strictly larger parameter region than
the one in which Gaussian entanglement is able to tolerate the noise. These
results shift the ``Gaussian world'' paradigm in quantum
information science (within which solutions to optimization problems involving
Gaussian channels are supposed to be attained at Gaussian states).
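The survival condition quoted in the abstract is straightforward to evaluate numerically. The sketch below (plain Python; the parameter values are illustrative choices, not taken from the paper) checks the inequality $\kappa_1 \mu_2^2 + \kappa_2 \mu_1^2 < \frac{1}{4}(\kappa_1 + \kappa_2)(1 + \kappa_1 \kappa_2)$ for given attenuation factors/power gains $\kappa_i$ and noise parameters $\mu_i$:

```python
def non_gaussian_entanglement_survives(kappa1, mu1, kappa2, mu2):
    """Check the paper's sufficient condition for non-Gaussian two-mode
    entanglement to survive local attenuation/amplification channels
    with gains kappa_i and noise parameters mu_i (i = 1, 2)."""
    lhs = kappa1 * mu2**2 + kappa2 * mu1**2
    rhs = 0.25 * (kappa1 + kappa2) * (1 + kappa1 * kappa2)
    return lhs < rhs

# Illustrative symmetric example: kappa_1 = kappa_2 = 0.5, mu_1 = mu_2 = 0.5.
# lhs = 0.25, rhs = 0.3125, so the condition is satisfied.
print(non_gaussian_entanglement_survives(0.5, 0.5, 0.5, 0.5))  # prints True

# Strong added noise (mu_i = 2.0) violates the condition at the same gains.
print(non_gaussian_entanglement_survives(0.5, 2.0, 0.5, 2.0))  # prints False
```

The function only reproduces the algebraic condition; mapping a concrete physical channel onto $(\kappa_i, \mu_i)$ follows the conventions of the paper itself.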
Related papers
- Gaussian unsteerable channels and computable quantifications of Gaussian steering [2.3000719681099735]
The current quantum resource theory of Gaussian steering for continuous-variable systems is flawed and incomplete.
We introduce the class of Gaussian unsteerable channels and the class of maximal Gaussian unsteerable channels.
We also propose two quantifications $\mathcal{J}_{j}$ of $(m+n)$-mode Gaussian steering from $A$ to $B$.
arXiv Detail & Related papers (2024-09-02T00:32:02Z) - Revisiting Convergence of AdaGrad with Relaxed Assumptions [4.189643331553922]
We revisit the convergence of AdaGrad with momentum (covering AdaGrad as a special case) on optimization problems.
The noise model encompasses a broad range of noises, including sub-Gaussian noise, arising in many practical applications.
arXiv Detail & Related papers (2024-02-21T13:24:14Z) - Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems [56.86067111855056]
We consider clipped optimization problems with heavy-tailed noise with structured density.
We show that it is possible to get faster rates of convergence than $\mathcal{O}(K^{-(\alpha-1)/\alpha})$ when the gradients have finite moments of order $\alpha$.
We prove that the resulting estimates have negligible bias and controllable variance.
arXiv Detail & Related papers (2023-11-07T17:39:17Z) - Matched entanglement witness criteria for continuous variables [11.480994804659908]
We use quantum entanglement witnesses derived from Gaussian operators to study the separable criteria of continuous variable states.
This opens a way for precise detection of non-Gaussian entanglement.
arXiv Detail & Related papers (2022-08-26T03:45:00Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}\, F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an accelerated gradient-extragradient (AG-EG) descent-ascent algorithm that combines extragradient steps with acceleration.
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - High-Order Qubit Dephasing at Sweet Spots by Non-Gaussian Fluctuators:
Symmetry Breaking and Floquet Protection [55.41644538483948]
We study the qubit dephasing caused by the non-Gaussian fluctuators.
We predict a symmetry-breaking effect that is unique to the non-Gaussian noise.
arXiv Detail & Related papers (2022-06-06T18:02:38Z) - Random quantum circuits transform local noise into global white noise [118.18170052022323]
We study the distribution over measurement outcomes of noisy random quantum circuits in the low-fidelity regime.
For local noise that is sufficiently weak and unital, correlations (measured by the linear cross-entropy benchmark) between the output distribution $p_{\text{noisy}}$ of a generic noisy circuit instance and the ideal noiseless distribution shrink exponentially.
If the noise is incoherent, the output distribution approaches the uniform distribution $p_{\text{unif}}$ at precisely the same rate.
arXiv Detail & Related papers (2021-11-29T19:26:28Z) - Quantum illumination with noisy probes: Conditional advantages of non-Gaussianity [0.9999629695552195]
Entangled states, like the two-mode squeezed vacuum state, are known to give quantum advantage in the illumination protocol.
We use non-Gaussian photon-added and -subtracted states, affected by local Gaussian noise on top of the omnipresent thermal noise, as probes in the illumination protocol.
arXiv Detail & Related papers (2021-07-06T17:37:45Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Fundamental limitations to key distillation from Gaussian states with
Gaussian operations [4.642647756403864]
We prove that the key is bounded by twice the Rényi-$2$ Gaussian entanglement of formation $E_{F,2}^{\scriptscriptstyle \mathrm{G}}$.
We conjecture that the factor of $2$ is spurious, which would imply that $E_{F,2}^{\scriptscriptstyle \mathrm{G}}$ coincides with the secret key rate of Gaussian states.
arXiv Detail & Related papers (2020-10-29T16:26:46Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training overparameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.