Distributed Quantum-classical Hybrid Shor's Algorithm
- URL: http://arxiv.org/abs/2304.12100v1
- Date: Mon, 24 Apr 2023 13:52:05 GMT
- Title: Distributed Quantum-classical Hybrid Shor's Algorithm
- Authors: Ligang Xiao, Daowen Qiu, Le Luo, Paulo Mateus
- Abstract summary: Shor's algorithm is considered one of the most significant quantum algorithms of the Noisy Intermediate-Scale Quantum (NISQ) era.
To reduce the resources required for Shor's algorithm, we propose a new distributed quantum-classical hybrid order-finding algorithm.
- Score: 1.7396274240172125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Shor's algorithm, which was proposed by Peter Shor [Proceedings of the 35th
Annual Symposium on Foundations of Computer Science, 1994, pp. 124--134], is
considered one of the most significant quantum algorithms. Shor's algorithm
can factor large integers with a certain probability of success in polynomial
time. However, Shor's algorithm requires a prohibitive number of qubits and
circuit depth in the NISQ (Noisy Intermediate-Scale Quantum) era. To reduce the
resources required for Shor's algorithm, we propose a new distributed
quantum-classical hybrid order-finding algorithm for Shor's algorithm. The
traditional order-finding algorithm needs to obtain an estimation of some
$\dfrac{s}{r}$, where $r$ is the ``order'' and $s\in\{0,1,\cdots,r-1\}$. In our
distributed algorithm, we use $k$ computers to estimate partial bits of
$\dfrac{s}{r}$ separately. To reduce measurement errors, we use classical
programs to partially correct the measurement results of each computer.
Compared with the traditional Shor's
algorithm, our algorithm reduces nearly $(1-\dfrac{1}{k})L-\log_2k$ qubits for
each computer when factoring an $L$-bit integer. Also, our algorithm reduces
gate complexity and circuit depth to some extent for each computer. The
communication complexity of our algorithm is $O(kL)$.
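The classical post-processing that turns a measured phase estimate of $\dfrac{s}{r}$ into the order $r$ is conventionally done with continued fractions. The sketch below illustrates only that standard step of Shor's order finding, not the authors' distributed bit-correction procedure; the function name and interface are illustrative assumptions:

```python
def order_from_phase(m, t, a, N):
    """Classical post-processing of order finding (illustrative sketch).

    m is the t-bit measurement outcome, so m / 2**t approximates s/r for
    some unknown s. We walk the continued-fraction convergents of m / 2**t
    and return the first denominator r with a**r = 1 (mod N), if any.
    """
    num, den = m, 2 ** t
    # continued-fraction coefficients of m / 2^t
    coeffs = []
    while den:
        q, rem = divmod(num, den)
        coeffs.append(q)
        num, den = den, rem
    # convergent recurrence: h_i = q_i*h_{i-1} + h_{i-2} (same for k_i)
    h_prev, h = 1, coeffs[0]
    k_prev, k = 0, 1
    candidates = [(h, k)]
    for q in coeffs[1:]:
        h_prev, h = h, q * h + h_prev
        k_prev, k = k, q * k + k_prev
        candidates.append((h, k))
    # test each convergent's denominator as a candidate order
    for _, r in candidates:
        if 0 < r < N and pow(a, r, N) == 1:
            return r
    return None

# Example: the order of 7 modulo 15 is 4; a phase measurement near
# s/r = 1/4 with t = 8 bits gives m = 64, and m = 192 corresponds to s = 3.
print(order_from_phase(64, 8, 7, 15))   # 4
print(order_from_phase(192, 8, 7, 15))  # 4
```

The distributed algorithm in the paper splits the bits of this phase estimate across $k$ machines; the continued-fraction recovery shown here would run after the corrected bits are recombined classically.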
Related papers
- Distributed exact quantum algorithms for Bernstein-Vazirani and search
problems [9.146620606615889]
We give a distributed Bernstein-Vazirani algorithm (DBVA) with $t$ computing nodes, and a distributed exact Grover's algorithm (DEGA) that solves the search problem with a single target item in unordered databases.
We demonstrate our DBVA and DEGA on MindQuantum (a quantum software platform) to validate the correctness and practicability of our methods.
arXiv Detail & Related papers (2023-03-19T14:18:49Z) - An adaptive Bayesian quantum algorithm for phase estimation [0.0]
We present a coherence-based phase-estimation algorithm which can achieve the optimal quadratic scaling in the mean absolute error and the mean squared error.
In the presence of noise, our algorithm produces errors that approach the theoretical lower bound.
arXiv Detail & Related papers (2023-03-02T19:00:01Z) - Private estimation algorithms for stochastic block models and mixture
models [63.07482515700984]
General tools for designing efficient private estimation algorithms.
First efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Distributed Shor's algorithm [1.7396274240172125]
Shor's algorithm is one of the most important quantum algorithms proposed by Peter Shor.
We use two quantum computers separately to estimate partial bits of $\dfrac{s}{r}$ for some $s\in\{0,1,\cdots,r-1\}$.
Compared with the traditional Shor's algorithm that uses multiple controlling qubits, our algorithm reduces nearly $\dfrac{L}{2}$ qubits and reduces the circuit depth of each computer.
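For a quick numeric sanity check, the per-computer savings formula $(1-\dfrac{1}{k})L-\log_2 k$ from the hybrid paper's abstract reduces to roughly $\dfrac{L}{2}$ at $k=2$, consistent with the two-computer result quoted here (purely illustrative arithmetic; the helper name is an assumption):

```python
from math import log2

def qubit_savings(L, k):
    """Per-computer qubit reduction (1 - 1/k)*L - log2(k) claimed in the
    distributed hybrid order-finding paper (illustrative check only)."""
    return (1 - 1 / k) * L - log2(k)

# For an L = 2048-bit modulus:
print(qubit_savings(2048, 2))  # 1023.0, i.e. about L/2 for two computers
print(qubit_savings(2048, 4))  # 1534.0
```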
arXiv Detail & Related papers (2022-07-13T06:00:03Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - An integer factorization algorithm which uses diffusion as a
computational engine [0.0]
We develop an algorithm which computes a divisor of an integer $N$, which is assumed to be neither prime nor the power of a prime.
By comparison, Shor's algorithm is known to use at most $O((\log N)^2 (\log\log N)(\log\log\log N))$ quantum steps on a quantum computer.
arXiv Detail & Related papers (2021-04-23T14:11:33Z) - Low depth algorithms for quantum amplitude estimation [6.148105657815341]
We design and analyze two new low depth algorithms for amplitude estimation (AE)
These algorithms bring quantum speedups for Monte Carlo methods closer to realization.
arXiv Detail & Related papers (2020-12-06T18:39:20Z) - A Polynomial Time Algorithm for Learning Halfspaces with Tsybakov Noise [55.45544638410858]
We study the problem of PAC learning homogeneous halfspaces in the presence of Tsybakov noise.
Our algorithm learns the true halfspace within any accuracy $\epsilon$.
arXiv Detail & Related papers (2020-10-04T22:19:06Z) - Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences arising from their use.