Robustness of different modifications of Grover's algorithm based on
generalized Householder reflections with different phases
- URL: http://arxiv.org/abs/2401.03602v1
- Date: Sun, 7 Jan 2024 23:04:39 GMT
- Title: Robustness of different modifications of Grover's algorithm based on
generalized Householder reflections with different phases
- Authors: Hristo Tonchev, Petar Danev
- Abstract summary: We study five modifications of Grover's algorithm, where each iteration is constructed from two generalized Householder reflections.
By using semi-empirical methods, we investigate various characteristics of the dependence between the probability of finding the solution and the phase errors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we study five modifications of Grover's algorithm, where each
iteration is constructed from two generalized Householder reflections, against
inaccuracies in the phases. By using semi-empirical methods, we investigate
various characteristics of the dependence between the probability of finding the
solution and the phase errors. The first of them is the robustness of the
probability to errors in the phase. The second is how quickly the
probability falls beyond the stability interval. And finally, the average
success rate of the algorithm when the parameters lie within the
highly robust interval. Two of the modifications use the same
Grover operator in each iteration, while in the other three it differs. These
semi-empirical methods give us a tool to predict the overall behavior of the
quantum algorithm modifications and to compare them for even larger
register sizes.
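The construction described in the abstract, with each iteration built from two generalized Householder reflections whose phases may deviate from their ideal values, can be sketched in a small numerical simulation. The sketch below is illustrative and not the authors' code: it uses the standard generalized Householder form M(|psi>, phi) = I + (e^{i phi} - 1)|psi><psi|; the function names and the particular phase error delta = 0.3 are assumptions chosen for demonstration.

```python
import numpy as np

def householder(psi, phi):
    """Generalized Householder reflection I + (e^{i phi} - 1)|psi><psi|."""
    psi = psi.reshape(-1, 1)
    return np.eye(len(psi)) + (np.exp(1j * phi) - 1) * (psi @ psi.conj().T)

def grover_success_prob(n_qubits, marked, phi_oracle, phi_diff, n_iters):
    """Probability of measuring the marked state after n_iters iterations,
    where each iteration is a product of two Householder reflections."""
    N = 2 ** n_qubits
    s = np.full(N, 1 / np.sqrt(N), dtype=complex)    # equal superposition
    m = np.zeros(N, dtype=complex)
    m[marked] = 1.0                                  # marked basis state
    # one iteration: reflection about |m> (oracle), then about |s> (diffusion)
    G = householder(s, phi_diff) @ householder(m, phi_oracle)
    state = s.copy()
    for _ in range(n_iters):
        state = G @ state
    return abs(state[marked]) ** 2

n = 4
k = int(round(np.pi / 4 * np.sqrt(2 ** n)))          # optimal iteration count
# phases (pi, pi) recover the standard Grover reflections (up to global phase)
p_exact = grover_success_prob(n, 0, np.pi, np.pi, k)
# a phase inaccuracy delta in both reflections perturbs the success probability
p_err = grover_success_prob(n, 0, np.pi + 0.3, np.pi + 0.3, k)
```

Scanning `p_err` over a grid of delta values reproduces the kind of probability-vs-phase-error curve whose robustness interval the paper characterizes.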
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - Robustness of Quantum Random Walk Search with multi-phase matching [0.0]
We show that usage of a particular sequence of phases can make the algorithm more robust even if there is no preserved connection between the phases in the traversing coin.
arXiv Detail & Related papers (2024-04-07T21:52:35Z) - Batch Bayesian Optimization for Replicable Experimental Design [56.64902148159355]
Many real-world design problems evaluate multiple experimental conditions in parallel and replicate each condition multiple times due to large and heteroscedastic observation noise.
We propose the Batch Thompson Sampling for Replicable Experimental Design framework, which encompasses three algorithms.
We show the effectiveness of our algorithms in two practical real-world applications: precision agriculture and AutoML.
arXiv Detail & Related papers (2023-11-02T12:46:03Z) - Robustness of Quantum Random Walk Search Algorithm in Hypercube when
only first or both first and second neighbors are measured [0.0]
We study the robustness of two modifications of quantum random walk search algorithm on hypercube.
The most important one, in view of our study of the robustness, is an increase in the stability of the algorithm, especially for large coin dimensions.
arXiv Detail & Related papers (2023-05-24T11:55:52Z) - Amplitude Amplification for Optimization via Subdivided Phase Oracle [0.9176056742068814]
We propose an algorithm using a modified variant of amplitude amplification to solve optimization problems.
We show via numerical simulation that for normal, skew normal, and exponential distribution of objective values, the algorithm can be used to amplify the probability of measuring the optimal solution.
arXiv Detail & Related papers (2022-05-02T01:14:27Z) - Deterministic Grover search with a restricted oracle [2.976027966501599]
Grover's quantum search algorithm provides a quadratic quantum advantage over classical algorithms.
We present a modified version to return the correct result with certainty without having user control over the quantum search oracle.
arXiv Detail & Related papers (2022-01-01T02:04:11Z) - Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z) - Optimizing the walk coin in the quantum random walk search algorithm
through machine learning [0.0]
This paper examines the stability of the quantum random walk search algorithm, when the walk coin is constructed by generalized Householder reflection and additional phase shift.
The optimization of the algorithm is done by numerical methods - Monte Carlo, neural networks, and supervised machine learning.
arXiv Detail & Related papers (2021-05-17T17:13:49Z) - Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality [131.45028999325797]
We develop a doubly robust off-policy AC (DR-Off-PAC) for discounted MDP.
DR-Off-PAC adopts a single timescale structure, in which both actor and critics are updated simultaneously with constant stepsize.
We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy.
arXiv Detail & Related papers (2021-02-23T18:56:13Z) - Semi-Supervised Learning with Variational Bayesian Inference and Maximum
Uncertainty Regularization [62.21716612888669]
We propose two generic methods for improving semi-supervised learning (SSL).
The first integrates weight perturbation (WP) into existing "consistency regularization" (CR) based methods.
The second method proposes a novel consistency loss called "maximum uncertainty regularization" (MUR).
arXiv Detail & Related papers (2020-12-03T09:49:35Z) - Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.