A simpler Gaussian state-preparation
- URL: http://arxiv.org/abs/2508.03987v1
- Date: Wed, 06 Aug 2025 00:40:28 GMT
- Title: A simpler Gaussian state-preparation
- Authors: Parker Kuklinski, Benjamin Rempfer, Kevin Obenland, Justin Elenewski,
- Abstract summary: The ability to efficiently state-prepare Gaussian distributions is critical to the success of quantum algorithms. We present a new, more intuitive method which uses exactly $n-1$ rotations, $(n-1)(n-2)/2$ two-qubit controlled rotations, and $\lfloor(n-1)/2\rfloor$ ancilla to state-prepare an $n$-qubit Gaussian state.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to efficiently state-prepare Gaussian distributions is critical to the success of numerous quantum algorithms. The most popular algorithm for this subroutine (Kitaev-Webb) has favorable polynomial resource scaling; however, it faces enormous resource overheads that make it functionally impractical. In this paper, we present a new, more intuitive method which uses exactly $n-1$ rotations, $(n-1)(n-2)/2$ two-qubit controlled rotations, and $\lfloor(n-1)/2\rfloor$ ancilla to state-prepare an $n$-qubit Gaussian state. We then apply optimizations to the circuit to render it linear in T-depth. This method can be extended to state-preparations of complex functions with polynomial phase.
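The gate counts quoted in the abstract can be tabulated directly as a function of the qubit count $n$. A minimal sketch (the function name and dictionary layout are illustrative, not from the paper):

```python
def gaussian_prep_resources(n: int) -> dict:
    """Resource counts quoted in the abstract for an n-qubit Gaussian
    state-preparation: n-1 single-qubit rotations, (n-1)(n-2)/2 two-qubit
    controlled rotations, and floor((n-1)/2) ancilla qubits."""
    if n < 1:
        raise ValueError("need at least one qubit")
    return {
        "rotations": n - 1,
        "controlled_rotations": (n - 1) * (n - 2) // 2,
        "ancilla": (n - 1) // 2,
    }

# For n = 8 qubits: 7 rotations, 21 controlled rotations, 3 ancilla.
print(gaussian_prep_resources(8))
```

The quadratic term $(n-1)(n-2)/2$ dominates, so the two-qubit controlled rotations set the overall gate-count scaling.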
Related papers
- Fast Minimization of Expected Logarithmic Loss via Stochastic Dual
Averaging [8.990961435218544]
We propose a first-order algorithm named $B$-sample dual averaging with the logarithmic barrier.
For the Poisson inverse problem, our algorithm attains an $\varepsilon$ solution in $\tilde{O}(d^3/\varepsilon^2)$ time.
When computing the maximum-likelihood estimate for quantum state tomography, our algorithm yields an $\varepsilon$-optimal solution in $\tilde{O}(d^3/\varepsilon^2)$ time.
arXiv Detail & Related papers (2023-11-05T03:33:44Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Quantum Simulation of the First-Quantized Pauli-Fierz Hamiltonian [0.5097809301149342]
We show that a naïve partitioning and low-order splitting formula can yield, through our divide and conquer formalism, superior scaling to qubitization for large $\Lambda$.
We also give new algorithmic and circuit level techniques for gate optimization including a new way of implementing a group of multi-controlled-X gates.
arXiv Detail & Related papers (2023-06-19T23:20:30Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - Learning Polynomial Transformations [41.30404402331856]
We consider the problem of learning high dimensional quadratic transformations of Gaussians.
Our results extend to any rotation-invariant input distribution, not just Gaussian.
We also give the first efficient algorithms with provable guarantees for tensor ring decomposition.
arXiv Detail & Related papers (2022-04-08T17:59:31Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than that of its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Quantum Algorithms for Ground-State Preparation and Green's Function
Calculation [5.28670135448572]
We present projective quantum algorithms for ground-state preparation and calculations of the many-body Green's functions in frequency domain.
The algorithms are based on the linear combination of unitary (LCU) operations and essentially only use quantum resources.
arXiv Detail & Related papers (2021-12-10T18:39:55Z) - Gaussian Process Bandit Optimization with Few Batches [49.896920704012395]
We introduce a batch algorithm inspired by finite-arm bandit algorithms.
We show that it achieves the cumulative regret upper bound $O^\ast(\sqrt{T\gamma_T})$ using $O(\log\log T)$ batches within time horizon $T$.
In addition, we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches.
arXiv Detail & Related papers (2021-10-15T00:54:04Z) - 2nd-order Updates with 1st-order Complexity [0.0]
It has long been a goal to efficiently compute and use second order information on a function ($f$) to assist in numerical approximations.
Here it is shown how, using only basic physics and a numerical approximation, such information can be accurately obtained at a cost of $\mathcal{O}(N)$ complexity.
arXiv Detail & Related papers (2021-05-24T17:47:51Z) - Provably Efficient Reinforcement Learning for Discounted MDPs with
Feature Mapping [99.59319332864129]
In this paper, we study reinforcement learning for discounted Markov decision processes (MDPs).
We propose a novel algorithm that makes use of the feature mapping and obtains a $\tilde{O}(d\sqrt{T}/(1-\gamma)^2)$ regret.
Our upper and lower bound results together suggest that the proposed reinforcement learning algorithm is near-optimal up to a $(1-\gamma)^{-0.5}$ factor.
arXiv Detail & Related papers (2020-06-23T17:08:54Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we study gradient-free minimax optimization in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.