Scalable implementation of $(d+1)$ mutually unbiased bases for
$d$-dimensional quantum key distribution
- URL: http://arxiv.org/abs/2204.02691v2
- Date: Fri, 2 Sep 2022 06:20:00 GMT
- Title: Scalable implementation of $(d+1)$ mutually unbiased bases for
$d$-dimensional quantum key distribution
- Authors: Takuya Ikuta, Seiseki Akibue, Yuya Yonezu, Toshimori Honjo, Hiroki
Takesue, Kyo Inoue
- Abstract summary: A high-dimensional quantum key distribution (QKD) can improve error rate tolerance and the secret key rate.
Many $d$-dimensional QKDs have used two mutually unbiased bases (MUBs).
We propose a scalable and general implementation of $(d+1)$ MUBs using $\log_p d$ interferometers in prime power dimensions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A high-dimensional quantum key distribution (QKD) can improve error rate
tolerance and the secret key rate. Many $d$-dimensional QKDs have used two
mutually unbiased bases (MUBs), while $(d+1)$ MUBs enable a more robust QKD,
especially against correlated errors. However, a scalable implementation has
not been achieved because the setups have required $d$ devices even for two
MUBs or a flexible convertor for a specific optical mode. Here, we propose a
scalable and general implementation of $(d+1)$ MUBs using $\log_p d$
interferometers in prime power dimensions $d=p^N$. We implemented the setup for
time-bin states and observed an average error rate of 3.8% for phase bases,
which is lower than the 23.17% required for a secure QKD against coherent
attack in $d=4$.
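As a quick numerical illustration of the $(d+1)$-MUB structure the abstract refers to (not of the paper's interferometric implementation), the sketch below builds the standard quadratic-phase MUBs for an odd prime dimension and checks the defining condition $|\langle e_i|f_j\rangle|^2 = 1/d$; the dimension $p=5$ is an arbitrary placeholder, and the general prime-power case $d=p^N$ treated in the paper requires finite-field arithmetic instead.

```python
import numpy as np
from itertools import combinations

p = 5  # an odd prime; the paper covers general prime-power dimensions d = p^N
omega = np.exp(2j * np.pi / p)
n = np.arange(p)

# Basis 0: the computational basis.
bases = [np.eye(p, dtype=complex)]

# Bases 1..p: quadratic-phase construction for an odd prime,
# <n|psi_s^(r)> = omega^(r n^2 + s n) / sqrt(p).
for r in range(p):
    B = np.array([omega ** (r * n**2 + s * n) for s in range(p)]).T / np.sqrt(p)
    bases.append(B)  # columns are the basis vectors

# Mutual unbiasedness: every pair of vectors from distinct bases overlaps with |.|^2 = 1/p.
for (i, A), (j, B) in combinations(enumerate(bases), 2):
    assert np.allclose(np.abs(A.conj().T @ B) ** 2, 1 / p), (i, j)

print(f"Verified {len(bases)} mutually unbiased bases in dimension d = {p}")
```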
Related papers
- High-dimensional quantum key distribution rates for multiple measurement bases [0.0]
We investigate the advantages of high-dimensional encoding for a quantum key distribution protocol.
In particular, we address a BBM92-like protocol where the dimension of the systems can be larger than two.
arXiv Detail & Related papers (2025-01-10T11:42:59Z)
- Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias [13.642712817536072]
We show that as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error increases.
A key technical challenge we address is the lack of a one-step contraction property in the $W_{2,\ell^\infty}$ metric to measure convergence.
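For context, a minimal sketch of the unadjusted Langevin update this entry analyzes (generic textbook form; the step size, potential, and dimension below are illustrative choices, not the paper's setting):

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng):
    """Unadjusted Langevin algorithm: x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2*step) * xi_k."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy target: a standard Gaussian in d dimensions, U(x) = ||x||^2 / 2, so grad_U(x) = x.
d = 100
rng = np.random.default_rng(0)
sample = ula(lambda x: x, np.zeros(d), step=0.05, n_iters=2000, rng=rng)
```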
arXiv Detail & Related papers (2024-08-20T01:24:54Z)
- Computational Supremacy of Quantum Eigensolver by Extension of Optimized Binary Configurations [0.0]
We develop a quantum eigensolver based on a D-Wave Quantum Annealer (D-Wave QA).
This approach performs iterative QA measurements to optimize the eigenstates $\vert \psi \rangle$ without requiring derivation on a classical computer.
We confirm that the proposed QE algorithm provides exact solutions within errors of $5 \times 10^{-3}$.
arXiv Detail & Related papers (2024-06-05T15:19:53Z)
- Mind the Gap: A Causal Perspective on Bias Amplification in Prediction & Decision-Making [58.06306331390586]
We introduce the notion of a margin complement, which measures how much a prediction score $S$ changes due to a thresholding operation.
We show that under suitable causal assumptions, the influences of $X$ on the prediction score $S$ are equal to the influences of $X$ on the true outcome $Y$.
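A minimal numerical reading of the margin-complement idea (an illustration only; the paper's formal definition may differ): the quantity below records how far a thresholded decision $\hat{Y}$ moves away from the underlying score $S$.

```python
import numpy as np

def margin_complement(scores, threshold=0.5):
    # Illustrative: the shift a thresholding operation applies to a prediction score,
    # i.e. hat(Y) - S with hat(Y) = 1[S > threshold].
    y_hat = (scores > threshold).astype(float)
    return y_hat - scores

print(margin_complement(np.array([0.10, 0.45, 0.55, 0.90])))
# [-0.1  -0.45  0.45  0.1 ]
```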
arXiv Detail & Related papers (2024-05-24T11:22:19Z)
- Matching the Statistical Query Lower Bound for $k$-Sparse Parity Problems with Sign Stochastic Gradient Descent [83.85536329832722]
We solve the $k$-sparse parity problem with sign stochastic gradient descent (sign SGD) on two-layer fully-connected neural networks.
We show that this approach can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube.
We then demonstrate how a trained neural network with sign SGD can effectively approximate this good network, solving the $k$-parity problem with small statistical errors.
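A small sketch of the two ingredients named in this entry, $k$-sparse parity data on the hypercube and the sign-SGD update rule (the sizes are placeholders and this is not the paper's training setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_samples = 30, 3, 1024  # illustrative sizes

# k-sparse parity data on the hypercube {-1, +1}^d: the label is the product
# of k (here: the first k) coordinates.
X = rng.choice([-1.0, 1.0], size=(n_samples, d))
y = np.prod(X[:, :k], axis=1)

def sign_sgd_step(params, grads, lr):
    """Sign SGD: move every parameter by a fixed step against the sign of its gradient."""
    return [p - lr * np.sign(g) for p, g in zip(params, grads)]
```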
arXiv Detail & Related papers (2024-04-18T17:57:53Z)
- The Power of Unentangled Quantum Proofs with Non-negative Amplitudes [55.90795112399611]
We study the power of unentangled quantum proofs with non-negative amplitudes, a class which we denote $\text{QMA}^+(2)$.
In particular, we design global protocols for small set expansion, unique games, and PCP verification.
We show that $\text{QMA}(2)$ is equal to $\text{QMA}^+(2)$ provided the gap of the latter is a sufficiently large constant.
arXiv Detail & Related papers (2024-02-29T01:35:46Z)
- SQT -- std $Q$-target [47.3621151424817]
Std $Q$-target is a conservative, actor-critic, ensemble, $Q$-learning-based algorithm.
We implement SQT on top of TD3/TD7 code and test it against the state-of-the-art (SOTA) actor-critic algorithms.
Our results demonstrate SQT's $Q$-target formula superiority over TD3's $Q$-target formula as a conservative solution to overestimation bias in RL.
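To make the comparison concrete, here is a hedged sketch of the two target styles being contrasted: TD3's min-over-critics target and a std-penalized ensemble target in the spirit of SQT (the exact SQT formula is in the paper; the penalty weight below is a placeholder):

```python
import numpy as np

def td3_target(q_ensemble):
    # TD3-style pessimism: take the minimum over the critic ensemble (originally two critics).
    return np.min(q_ensemble, axis=0)

def std_penalized_target(q_ensemble, penalty_weight=0.5):
    # Std-based pessimism: subtract the ensemble's standard deviation as an uncertainty penalty.
    return np.mean(q_ensemble, axis=0) - penalty_weight * np.std(q_ensemble, axis=0)

# q_ensemble: array of shape (num_critics, batch_size) of next-state Q estimates.
```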
arXiv Detail & Related papers (2024-02-03T21:36:22Z)
- Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z)
- High-dimensional multi-input quantum random access codes and mutually unbiased bases [0.0]
We present a general method to find the maximum success probability of $n^{(d)} \rightarrow 1$ QRACs.
Based on the analytical solution, we show the relationship between MUBs and $n^{(d)} \rightarrow 1$ QRACs.
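For the two-input special case, the MUB-based value is standard and easy to evaluate (the general $n^{(d)} \rightarrow 1$ expressions derived in the paper are more involved); a minimal sketch:

```python
import numpy as np

def qrac_2_to_1_success(d):
    """Known maximum success probability of a 2^(d) -> 1 QRAC, attained with measurements in two MUBs."""
    return 0.5 * (1 + 1 / np.sqrt(d))

print(qrac_2_to_1_success(4))  # 0.75
```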
arXiv Detail & Related papers (2021-11-17T03:53:39Z)
- QKD parameter estimation by two-universal hashing [0.0]
This paper proposes and proves security of a QKD protocol which uses two-universal hashing instead of random sampling.
This protocol dramatically outperforms previous QKD protocols for small block sizes.
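As background, one standard two-universal family used in QKD post-processing is the binary Toeplitz family; the sketch below is a generic instance of that family, not necessarily the hash used in the cited protocol.

```python
import numpy as np

def toeplitz_hash(bits, seed, out_len):
    """Hash `bits` with the binary Toeplitz matrix defined by `seed`
    (a two-universal family when the seed is chosen uniformly at random)."""
    n = len(bits)
    assert len(seed) == out_len + n - 1  # one seed bit per diagonal
    # T[i, j] is constant along diagonals: T[i, j] = seed[i - j + n - 1].
    T = np.array([[seed[i - j + n - 1] for j in range(n)] for i in range(out_len)])
    return T.dot(bits) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=16)
seed = rng.integers(0, 2, size=8 + 16 - 1)
key = toeplitz_hash(raw, seed, out_len=8)
```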
arXiv Detail & Related papers (2021-09-14T14:15:41Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
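For reference, the asynchronous Q-learning update analyzed by such bounds is the classical single-trajectory rule (a generic sketch; $\mu_{\min}$ and $t_{\mathrm{mix}}$ in the bound are properties of the behavior policy's Markov chain, not inputs to the update):

```python
def async_q_update(Q, s, a, r, s_next, alpha, gamma):
    """One asynchronous Q-learning step: only the (s, a) entry visited
    along the sample trajectory is updated."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```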
arXiv Detail & Related papers (2020-06-04T17:51:00Z)