Success probability in Shor's Algorithm
- URL: http://arxiv.org/abs/2505.00433v1
- Date: Thu, 01 May 2025 10:08:58 GMT
- Title: Success probability in Shor's Algorithm
- Authors: Ali Abbassi, Lionel Bayle
- Abstract summary: This paper aims to determine the exact success probability at each step of Shor's algorithm. The derived formulas enable the identification of all failure cases in Shor's algorithm, which correspond to a success probability of zero.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to determine the exact success probability at each step of Shor's algorithm. Although the literature usually provides a lower bound on this probability, we present an improved bound. The derived formulas enable the identification of all failure cases in Shor's algorithm, which correspond to a success probability of zero. A simulation routine is provided to evaluate the theoretical success probability for a given integer when its prime factorization is known with potential applications in quantum resource estimation and algorithm benchmarking.
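The classical post-processing criterion behind these success probabilities can be checked directly for small moduli. The sketch below is an illustration only, not the paper's simulation routine: it counts the bases a coprime to n whose multiplicative order r is even with a^(r/2) not congruent to -1 (mod n), the standard condition under which Shor's post-processing yields a nontrivial factor.

```python
from math import gcd

def multiplicative_order(a, n):
    # Smallest r > 0 with a^r ≡ 1 (mod n); assumes gcd(a, n) == 1.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_success_fraction(n):
    """Fraction of bases a in [2, n-1] coprime to n for which Shor's
    classical post-processing succeeds: the order r of a mod n is even
    and a^(r/2) is not ≡ -1 (mod n)."""
    good = total = 0
    for a in range(2, n):
        if gcd(a, n) != 1:
            continue  # gcd itself reveals a factor; excluded from the count
        total += 1
        r = multiplicative_order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            good += 1
    return good / total
```

For n = 15, every coprime base except a = 14 (whose order is 2 with 14 ≡ -1 mod 15) satisfies the condition, giving a success fraction of 6/7.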
Related papers
- Asymptotically Optimal Linear Best Feasible Arm Identification with Fixed Budget [55.938644481736446]
We introduce a novel algorithm for best feasible arm identification that guarantees an exponential decay in the error probability. We validate our algorithm through comprehensive empirical evaluations across various problem instances with different levels of complexity.
arXiv Detail & Related papers (2025-06-03T02:56:26Z) - Deterministic Quantum Search for Arbitrary Initial Success Probabilities [0.0]
This work presents a deterministic quantum search algorithm that operates effectively for arbitrary initial success probabilities. The proposed approach introduces at most one additional iteration beyond the optimal count required by probabilistic quantum search algorithms.
arXiv Detail & Related papers (2025-05-21T13:35:42Z) - On Policy Evaluation Algorithms in Distributional Reinforcement Learning [0.0]
We introduce a novel class of algorithms to efficiently approximate the unknown return distributions in policy evaluation problems from distributional reinforcement learning (DRL).
For a plain instance of our proposed class of algorithms we prove error bounds, both within Wasserstein and Kolmogorov--Smirnov distances.
For return distributions having probability density functions the algorithms yield approximations for these densities; error bounds are given within supremum norm.
arXiv Detail & Related papers (2024-07-19T10:06:01Z) - Quadratic acceleration of multi-step probabilistic algorithms for state preparation [0.0]
A non-unitary operator is typically designed to decay undesirable states contained in an initial state.
Probabilistic algorithms do not accelerate the computational process compared to classical ones.
arXiv Detail & Related papers (2023-08-07T14:07:03Z) - Multivariate Systemic Risk Measures and Computation by Deep Learning Algorithms [63.03966552670014]
We discuss the key related theoretical aspects, with a particular focus on the fairness properties of primal optima and associated risk allocations.
The algorithms we provide allow for learning primals, optima for the dual representation and corresponding fair risk allocations.
arXiv Detail & Related papers (2023-02-02T22:16:49Z) - Reliability analysis of discrete-state performance functions via adaptive sequential sampling with detection of failure surfaces [0.0]
The paper presents a new efficient and robust method for rare event probability estimation.
The method can estimate the probabilities of multiple failure types.
It can accommodate this information to increase the accuracy of the estimated probabilities.
arXiv Detail & Related papers (2022-08-04T05:59:25Z) - The vacuum provides quantum advantage to otherwise simulatable architectures [49.1574468325115]
We consider a computational model composed of ideal Gottesman-Kitaev-Preskill stabilizer states.
We provide an algorithm to calculate the probability density function of the measurement outcomes.
arXiv Detail & Related papers (2022-05-19T18:03:17Z) - Probabilistic learning constrained by realizations using a weak formulation of Fourier transform of probability measures [0.0]
This paper deals with taking a given set of realizations into account as constraints in the Kullback-Leibler minimum principle.
A functional approach is developed on the basis of a weak formulation of the Fourier transform of probability measures.
The presented application in high dimension demonstrates the efficiency and the robustness of the proposed algorithm.
arXiv Detail & Related papers (2022-05-06T08:54:57Z) - Benchmarks for quantum computers from Shor's algorithm [0.0]
Properties of Shor's algorithm and the related period-finding algorithm could serve as benchmarks for the operation of a quantum computer.
Distinctive universal behaviour is expected for the probability for success of the period-finding as the input quantum register is increased.
arXiv Detail & Related papers (2021-11-27T09:46:16Z) - Random-reshuffled SARAH does not need a full gradient computations [61.85897464405715]
The StochAstic Recursive grAdient algoRitHm (SARAH) is a variance-reduced variant of the Stochastic Gradient Descent (SGD) algorithm.
In this paper, we remove the necessity of a full gradient.
The aggregated gradients serve as an estimate of a full gradient in the SARAH algorithm.
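The recursive estimator at the heart of SARAH can be sketched in a few lines. The following is an illustrative single-machine version on a toy least-squares objective of my own choosing, not the random-reshuffled variant studied in the paper: an outer loop anchors the estimate with a full gradient, and the inner loop updates it recursively as v_t = g_i(w_t) - g_i(w_{t-1}) + v_{t-1}.

```python
import numpy as np

def sarah_least_squares(A, b, w0, eta=0.1, epochs=3, seed=0):
    """SARAH sketch on f(w) = (1/2n) * ||A w - b||^2 (illustrative)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w = w0.astype(float).copy()
    for _ in range(epochs):
        v = A.T @ (A @ w - b) / n  # full gradient anchors the estimate
        w_prev, w = w, w - eta * v
        for _ in range(n):
            i = rng.integers(n)
            gi_new = A[i] * (A[i] @ w - b[i])       # per-sample gradient at w_t
            gi_old = A[i] * (A[i] @ w_prev - b[i])  # same sample at w_{t-1}
            v = gi_new - gi_old + v                 # recursive SARAH update
            w_prev, w = w, w - eta * v
    return w
```

Replacing the full gradient in the outer loop with an aggregate of recent per-sample gradients is, roughly, the modification the paper analyzes.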
arXiv Detail & Related papers (2021-11-26T06:00:44Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
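Gradient clipping itself is simple to state. A minimal sketch of one clipped step follows, with a fixed stepsize for illustration rather than the stepsize rules proposed in the paper:

```python
import numpy as np

def clipped_sgd_step(w, grad, eta, clip):
    """One SGD step with norm clipping: rescale the (possibly
    heavy-tailed) stochastic gradient onto the ball of radius `clip`
    before stepping."""
    g_norm = np.linalg.norm(grad)
    if g_norm > clip:
        grad = grad * (clip / g_norm)
    return w - eta * grad
```

Clipping bounds the per-step displacement by eta * clip, which is what makes high-probability guarantees possible when the noise lacks light tails.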
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.