Novel oracle constructions for quantum random access memory
- URL: http://arxiv.org/abs/2405.20225v2
- Date: Thu, 13 Jun 2024 09:34:47 GMT
- Title: Novel oracle constructions for quantum random access memory
- Authors: Ákos Nagy, Cindy Zhang
- Abstract summary: We present new designs for quantum random access memory.
We construct oracles, $\mathcal{O}_f$, with the property $\mathcal{O}_f \left| x \right\rangle_n \left| 0 \right\rangle_d = \left| x \right\rangle_n \left| f(x) \right\rangle_d$.
- Score: 0.6445605125467572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present new designs for quantum random access memory. More precisely, for each function, $f : \mathbb{F}_2^n \rightarrow \mathbb{F}_2^d$, we construct oracles, $\mathcal{O}_f$, with the property \begin{equation} \mathcal{O}_f \left| x \right\rangle_n \left| 0 \right\rangle_d = \left| x \right\rangle_n \left| f(x) \right\rangle_d. \end{equation} Our methods are based on the Walsh-Hadamard Transform of $f$, viewed as an integer-valued function. In general, the complexity of our method scales with the sparsity of the Walsh-Hadamard Transform and not the sparsity of $f$, yielding more favorable constructions in cases such as binary optimization problems and functions with low-degree Walsh-Hadamard Transforms. Furthermore, our design comes with a tuneable amount of ancillas that can trade depth for size. In the ancilla-free design, these oracles can be $\epsilon$-approximated so that the Clifford + $T$ depth is $O \left( \left( n + \log_2 \left( \tfrac{d}{\epsilon} \right) \right) \mathcal{W}_f \right)$, where $\mathcal{W}_f$ is the number of nonzero components in the Walsh-Hadamard Transform. The depth of the shallowest version is $O \left( n + \log_2 \left( \tfrac{d}{\epsilon} \right) \right)$, using $n + d \mathcal{W}_f$ qubits. The connectivity of these circuits is also only logarithmic in $\mathcal{W}_f$. As an application, we show that for boolean functions with low approximate degrees (as in the case of read-once formulas) the complexities of the corresponding QRAM oracles scale only as $2^{\widetilde{O} \left( \sqrt{n} \log_2 \left( n \right) \right)}$.
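The key quantity in these bounds is the Walsh-Hadamard sparsity $\mathcal{W}_f$, i.e. the number of nonzero coefficients of $f$ viewed as an integer-valued function. The sketch below is illustrative only and is not taken from the paper: it brute-forces the (normalized) Walsh-Hadamard coefficients of a small Boolean function and counts the nonzero ones; the helper name `walsh_hadamard_coefficients` and the normalization convention are assumptions.

```python
import numpy as np

def walsh_hadamard_coefficients(f_table):
    """Walsh-Hadamard coefficients of f : F_2^n -> Z, given as a truth table.

    f_table[x] is f(x) read as an integer; coefficient s is
    fhat(s) = 2^{-n} * sum_x (-1)^{<s, x>} f(x), with <s, x> the F_2 inner product.
    """
    N = len(f_table)  # N = 2^n
    fhat = np.zeros(N)
    for s in range(N):
        for x in range(N):
            sign = -1 if bin(s & x).count("1") % 2 else 1  # (-1)^{<s, x>}
            fhat[s] += sign * f_table[x]
    return fhat / N

# Example: the 3-bit parity function, whose transform has only 2 nonzero terms.
n = 3
f_table = [bin(x).count("1") % 2 for x in range(1 << n)]
fhat = walsh_hadamard_coefficients(f_table)
W_f = int(np.count_nonzero(np.round(fhat, 12)))
print("nonzero coefficients:", {s: fhat[s] for s in range(1 << n) if abs(fhat[s]) > 1e-12})
print("sparsity W_f =", W_f)  # -> 2, so depth scales with 2 terms, not with 2^n
```

For the 3-bit parity function only two coefficients survive, so $\mathcal{W}_f = 2$ even though $f$ takes the value 1 on half of its $2^n$ inputs; this is the regime where a Walsh-Hadamard-based construction is claimed to be more favorable than one scaling with the sparsity of $f$.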
Related papers
- Non-Convex Tensor Recovery from Local Measurements [22.688595434116777]
Motivated by the settings where sensing the entire tensor is infeasible, this paper proposes a novel tensor compressed sensing model, where measurements are only obtained from each iteration via a gradient.
We prove that ScaleAlt-PGD-Alt-DMin achieves $\mathcal{O}\left(\log \frac{1}{\epsilon}\right)$ convergence and improves the sample complexity.
arXiv Detail & Related papers (2024-12-23T05:04:56Z) - The Space Complexity of Approximating Logistic Loss [11.338399194998933]
We prove a general $\tilde{\Omega}(d \cdot \mu_{\mathbf{y}}(\mathbf{X}))$ space lower bound when $\epsilon$ is constant.
We also refute a prior conjecture that $\mu_{\mathbf{y}}(\mathbf{X})$ is hard to compute.
arXiv Detail & Related papers (2024-12-03T18:11:37Z) - The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega\left(\frac{1}{k} \cdot n^2 \log|\mathbb{F}|\right)$.
As an application, we obtain an $\Omega\left(\frac{1}{k} \cdot n^2 \log|\mathbb{F}|\right)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Tight Lower Bounds under Asymmetric High-Order Hölder Smoothness and Uniform Convexity [6.972653925522813]
We provide tight lower bounds for the oracle complexity of minimizing high-order Hölder smooth and uniformly convex functions.
Our analysis generalizes previous lower bounds for functions under first- and second-order smoothness as well as those for uniformly convex functions.
arXiv Detail & Related papers (2024-09-16T23:17:33Z) - Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist on finite-sum minimization, dubbed continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ FOs that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z) - Partially Unitary Learning [0.0]
An optimal mapping between Hilbert spaces $IN$ of $\left|\psi\right\rangle$ and $OUT$ of $\left|\phi\right\rangle$ is presented.
An iterative algorithm for finding the global maximum of this optimization problem is developed.
arXiv Detail & Related papers (2024-05-16T17:13:55Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Fixed-point Grover Adaptive Search for Quadratic Binary Optimization Problems [0.6524460254566903]
We study a Grover-type method for Quadratic Unconstrained Binary Optimization (QUBO) problems.
We construct a marker oracle for such problems with a tuneable parameter, $\Lambda \in \left[1, m\right] \cap \mathbb{Z}$.
For all values of $\Lambda$, the non-Clifford gate count of these oracles is strictly lower.
Some of our results are novel and useful for any method based on Fixed-point Search.
arXiv Detail & Related papers (2023-11-09T18:49:01Z) - Extending Matchgate Simulation Methods to Universal Quantum Circuits [4.342241136871849]
Matchgates are a family of parity-preserving two-qubit gates, nearest-neighbour circuits of which are known to be classically simulable in polynomial time.
We present a method to simulate an $\boldsymbol{n}$-qubit circuit containing $\boldsymbol{N}$ gates, $\boldsymbol{N}-\boldsymbol{m}$ of which are matchgates.
arXiv Detail & Related papers (2023-02-06T09:50:16Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Matrix concentration inequalities and efficiency of random universal sets of quantum gates [0.0]
For a random set $\mathcal{S} \subset U(d)$ of quantum gates we provide bounds on the probability that $\mathcal{S}$ forms a $\delta$-approximate $t$-design.
We show that for $\mathcal{S}$ drawn from an exact $t$-design, the probability that it forms a $\delta$-approximate $t$-design satisfies the inequality $\mathbb{P}\left(\delta \geq x\right) \leq 2 D_t \dots$
arXiv Detail & Related papers (2022-02-10T23:44:09Z) - Linear Bandits on Uniformly Convex Sets [88.3673525964507]
Linear bandit algorithms yield $\tilde{\mathcal{O}}(n\sqrt{T})$ pseudo-regret bounds on compact convex action sets.
Two types of structural assumptions lead to better pseudo-regret bounds.
arXiv Detail & Related papers (2021-03-10T07:33:03Z) - Thresholded Lasso Bandit [70.17389393497125]
The Thresholded Lasso bandit is an algorithm that estimates the vector defining the reward function as well as its sparse support.
We establish non-asymptotic regret upper bounds scaling as $\mathcal{O}(\log d + \sqrt{T})$ in general, and as $\mathcal{O}(\log d + \log T)$ under the so-called margin condition.
arXiv Detail & Related papers (2020-10-22T19:14:37Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with oracles and achieves complexity bounds of $\tilde{O}\left(n^2 + n\sqrt{L/\mu}\right)\log(1/\epsilon)$ and $O\left(n\sqrt{L/\epsilon}\right)$, for $\mu > 0$ and $\mu = 0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.