Fast Quantum Amplitude Encoding of Typical Classical Data
- URL: http://arxiv.org/abs/2503.17113v2
- Date: Fri, 28 Mar 2025 19:50:05 GMT
- Title: Fast Quantum Amplitude Encoding of Typical Classical Data
- Authors: Vittorio Pagni, Sigurd Huber, Michael Epping, Michael Felderer
- Abstract summary: We present an improved version of a quantum amplitude encoding scheme that encodes the $N$ entries of a unit classical vector into the amplitudes of a quantum state. We prove that the average runtime is $\mathcal{O}(\log^{1.5} N)$ for uniformly random inputs. We present numerical evidence that this favourable runtime behaviour also holds for real-world data, such as radar satellite images.
- Score: 3.294877984684448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an improved version of a quantum amplitude encoding scheme that encodes the $N$ entries of a unit classical vector $\vec{v}=(v_1,\ldots,v_N)$ into the amplitudes of a quantum state. Our approach has a quadratic speed-up with respect to the original one. We also describe several generalizations, including to complex entries of the input vector and a parameter $M$ that determines the parallelization. The number of qubits required for the state preparation scales as $\mathcal{O}(M\log N)$. The runtime, which depends on the data density $\rho$ and on the parallelization parameter $M$, scales as $\mathcal{O}(\frac{1}{\sqrt{\rho}}\frac{N}{M}\log (M+1))$, which in the most parallel version ($M=N$) is always less than $\mathcal{O}(\sqrt{N}\log N)$. By analysing the data density, we prove that the average runtime is $\mathcal{O}(\log^{1.5} N)$ for uniformly random inputs. We present numerical evidence that this favourable runtime behaviour also holds for real-world data, such as radar satellite images. This is promising as it allows for an input-to-output advantage of the quantum Fourier transform.
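The target of such an amplitude encoding is the state $|\psi\rangle=\sum_{i=1}^{N} v_i |i\rangle$ on $\lceil\log_2 N\rceil$ qubits. For reference only, the NumPy sketch below constructs this target statevector classically and checks that it is a valid quantum state; it is not an implementation of the paper's preparation scheme, and the zero-padding and renormalisation conventions are assumptions.

```python
import numpy as np

def amplitude_encoding_target(v):
    """Return the amplitudes of |psi> = sum_i v_i |i> on ceil(log2 N) qubits.

    This only builds the *target* statevector classically; it does not
    implement the paper's preparation circuit. Zero-padding to the next
    power of two and renormalisation are assumed conventions.
    """
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)                       # enforce a unit input vector
    n_qubits = max(1, int(np.ceil(np.log2(v.size))))
    psi = np.zeros(2 ** n_qubits, dtype=complex)
    psi[: v.size] = v                               # pad remaining amplitudes with zeros
    return psi, n_qubits

# Example: encode 6 real values into a 3-qubit state.
vec = [0.1, 0.4, -0.2, 0.3, 0.5, 0.2]
psi, n = amplitude_encoding_target(vec)
assert np.isclose(np.vdot(psi, psi).real, 1.0)      # valid (normalised) quantum state
print(n, "qubits; measurement probabilities:", np.round(np.abs(psi) ** 2, 3))
```

Measuring the prepared register in the computational basis returns index $i$ with probability $|v_i|^2$, which is the sense in which the classical vector is loaded into the quantum state.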
Related papers
- Quantum algorithm for the gradient of a logarithm-determinant [0.0]
The inverse of a sparse-rank input operator may be determined efficiently.
Measuring an expectation value of the quantum state, instead of all $N^2$ elements of the input operator, can be accomplished in $O(k\sigma)$ time.
The algorithm is envisioned for fully error-corrected quantum computers but may be implementable on near-term machines.
arXiv Detail & Related papers (2025-01-16T09:39:31Z)
- Quantum Advantage via Efficient Post-processing on Qudit Shadow tomography [3.19428095493284]
Inner products of the form $\operatorname{tr}(AB)$ are fundamental across quantum science and artificial intelligence.
We introduce a quantum approach based on qudit classical shadow tomography, significantly reducing computational complexity.
Our results establish a practical and modular quantum subroutine, enabling scalable quantum advantages in tasks involving high-dimensional data analysis and processing.
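As a hedged illustration of the kind of shadow-tomography subroutine the entry above refers to (not of the paper's actual qudit protocol or post-processing), the sketch below estimates $\operatorname{tr}(A\rho)$ for a single $d$-dimensional qudit from Haar-random basis measurements, using the standard inverse measurement channel $\hat{\rho}=(d+1)\,U^{\dagger}|b\rangle\langle b|U-I$. The dimension, observable, and shot count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Sample a Haar-random d x d unitary via QR of a complex Ginibre matrix."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))    # fix the phases of R's diagonal

def shadow_estimate(A, rho, n_shots):
    """Estimate tr(A rho) from random-basis measurement snapshots.

    Each snapshot applies the inverse of the depolarising measurement channel,
    rho_hat = (d + 1) U^dag |b><b| U - I, which is unbiased whenever the
    measurement bases form a unitary 2-design (Haar-random bases do).
    """
    d = rho.shape[0]
    total = 0.0
    for _ in range(n_shots):
        U = haar_unitary(d)
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
        b = rng.choice(d, p=probs / probs.sum())    # simulated measurement outcome
        col = U.conj().T[:, b]                      # U^dag |b>
        snapshot = (d + 1) * np.outer(col, col.conj()) - np.eye(d)
        total += np.real(np.trace(A @ snapshot))
    return total / n_shots

# Toy check in dimension d = 5: a random Hermitian observable and a pure state.
d = 5
A = rng.normal(size=(d, d)); A = (A + A.T) / 2
v = rng.normal(size=d) + 1j * rng.normal(size=d); v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())
print("shadow estimate:", shadow_estimate(A, rho, 20000))
print("exact tr(A rho):", np.real(v.conj() @ A @ v))
```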
arXiv Detail & Related papers (2024-08-29T03:56:16Z)
- Simple and Nearly-Optimal Sampling for Rank-1 Tensor Completion via Gauss-Jordan [49.1574468325115]
We revisit the sample and computational complexity of completing a rank-1 tensor in $\otimes_{i=1}^{N} \mathbb{R}^{d}$.
We present a characterization of the problem which admits an algorithm amounting to Gauss-Jordan on a pair of random linear systems.
arXiv Detail & Related papers (2024-08-10T04:26:19Z)
- Towards Optimal Circuit Size for Sparse Quantum State Preparation [10.386753939552872]
We consider the preparation of $n$-qubit sparse quantum states with $s$ non-zero amplitudes and propose two algorithms.
The first algorithm uses $O(ns/\log n + n)$ gates, improving upon previous methods by a factor of $O(\log n)$.
The second algorithm is tailored for binary strings that exhibit a short Hamiltonian path.
arXiv Detail & Related papers (2024-04-08T02:13:40Z)
- Spacetime-Efficient Low-Depth Quantum State Preparation with Applications [93.56766264306764]
We show that a novel deterministic method for preparing arbitrary quantum states requires fewer quantum resources than previous methods.
We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations.
arXiv Detail & Related papers (2023-03-03T18:23:20Z)
- Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N\times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture of the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
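For context only, the sketch below illustrates what "block-encoding" means: a unitary $U$ whose top-left block equals $A/\alpha$ for some subnormalisation $\alpha$, so that $(\langle 0|\otimes I)\,U\,(|0\rangle\otimes I)=A/\alpha$. It uses the textbook unitary-dilation construction with $\alpha=\lVert A\rVert_2$; this choice is an assumption here and is not the paper's circuit-level or QRAM-based implementation.

```python
import numpy as np

def psd_sqrt(M):
    """Matrix square root of a Hermitian PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)                # clip tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.conj().T

def block_encode(A):
    """Return (U, alpha) with U unitary and (<0| x I) U (|0> x I) = A / alpha.

    Textbook unitary dilation U = [[B, sqrt(I-BB^dag)], [sqrt(I-B^dag B), -B^dag]]
    with B = A / alpha and alpha = ||A||_2; a numerical sketch, not a circuit.
    """
    alpha = np.linalg.norm(A, 2)             # spectral norm as subnormalisation
    B = A / alpha
    I = np.eye(A.shape[0])
    U = np.block([[B, psd_sqrt(I - B @ B.conj().T)],
                  [psd_sqrt(I - B.conj().T @ B), -B.conj().T]])
    return U, alpha

# Toy check on a random 4 x 4 matrix of "classical data".
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
U, alpha = block_encode(A)
assert np.allclose(U @ U.conj().T, np.eye(8), atol=1e-10)   # U is unitary
assert np.allclose(alpha * U[:4, :4], A)                    # top-left block is A / alpha
print("block-encoded with subnormalisation alpha =", round(alpha, 3))
```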
arXiv Detail & Related papers (2022-06-07T18:00:01Z)
- A lower bound on the space overhead of fault-tolerant quantum computation [51.723084600243716]
The threshold theorem is a fundamental result in the theory of fault-tolerant quantum computation.
We prove an exponential upper bound on the maximal length of fault-tolerant quantum computation with amplitude noise.
arXiv Detail & Related papers (2022-01-31T22:19:49Z)
- Random matrices in service of ML footprint: ternary random features with no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random-feature technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits of storage compared to classical random features.
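As a hedged numerical illustration of the universality claim in the entry above, the sketch below compares the eigenvalue spectra of random-feature Gram matrices built from Gaussian weights and from ternary weights in $\{-1,0,+1\}$. The nonlinearity (ReLU), the sparsity level, and the scalings are illustrative assumptions, not necessarily the paper's exact TRF construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def feature_gram(X, W, scale=1.0):
    """Gram matrix K = (1/m) f(X W^T) f(X W^T)^T with f = ReLU random features."""
    Z = np.maximum(X @ (scale * W).T, 0.0)
    return Z @ Z.T / W.shape[0]

n, p, m = 300, 100, 2000                  # samples, input dimension, random features
X = rng.normal(size=(n, p)) / np.sqrt(p)

# Gaussian weights vs. ternary weights in {-1, 0, +1} (P(0) = 1/3 assumed here),
# rescaled to unit variance so the two weight ensembles are directly comparable.
W_gauss = rng.normal(size=(m, p))
W_tern = rng.choice([-1.0, 0.0, 1.0], size=(m, p), p=[1 / 3, 1 / 3, 1 / 3])
K_gauss = feature_gram(X, W_gauss)
K_tern = feature_gram(X, W_tern, scale=1.0 / np.sqrt(2 / 3))   # Var(ternary) = 2/3

eigs_gauss = np.sort(np.linalg.eigvalsh(K_gauss))
eigs_tern = np.sort(np.linalg.eigvalsh(K_tern))
print("largest eigenvalues:", eigs_gauss[-1], eigs_tern[-1])
print("max deviation between sorted spectra:", np.max(np.abs(eigs_gauss - eigs_tern)))
```

In an actual deployment the ternary weights stay as $\{-1,0,+1\}$ integers, so the projection needs only additions and sign flips; the unit-variance rescaling used above for the comparison can be folded into the nonlinearity.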
arXiv Detail & Related papers (2021-10-05T09:33:49Z)
- Low-depth Quantum State Preparation [3.540171881768714]
A crucial subroutine in quantum computing is to load classical data of $N$ complex numbers into the amplitudes of a superposed $n=\lceil \log N\rceil$-qubit state.
Here we investigate this space-time tradeoff in quantum state preparation with classical data.
We propose quantum algorithms with $\mathcal{O}(n^2)$ circuit depth to encode any $N$ complex numbers.
arXiv Detail & Related papers (2021-02-15T13:11:43Z)
- Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$; a minimal sketch of the resulting linear-time kernel application follows this entry.
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
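To make the linear-time claim of the last entry concrete, here is a hedged sketch of Sinkhorn iterations in which the kernel $K=\Phi_x\Phi_y^{\top}$ is never formed explicitly: each kernel-vector product costs $O((n+m)r)$ instead of $O(nm)$. The exponential positive random features below are one standard way to approximate a Gaussian Gibbs kernel and, like the problem sizes and iteration count, are assumptions rather than the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(3)

def positive_features(X, W):
    """Positive random features with <phi(x), phi(y)> ~ exp(-||x - y||^2 / 2).

    Assumed construction: phi_j(x) = exp(w_j . x - ||x||^2) / sqrt(r), w_j ~ N(0, I).
    All features are strictly positive, so the factored kernel stays positive.
    """
    r = W.shape[0]
    return np.exp(X @ W.T - np.sum(X ** 2, axis=1, keepdims=True)) / np.sqrt(r)

def sinkhorn_factored(Phi_x, Phi_y, a, b, n_iter=200):
    """Sinkhorn scalings (u, v) for K = Phi_x Phi_y^T without ever forming K."""
    u, v = np.ones(len(a)), np.ones(len(b))
    for _ in range(n_iter):
        u = a / (Phi_x @ (Phi_y.T @ v))   # K v   in O((n + m) r) time
        v = b / (Phi_y @ (Phi_x.T @ u))   # K^T u in O((n + m) r) time
    return u, v

# Toy transport problem between two point clouds in R^2.
n, m, r = 400, 300, 100
X, Y = rng.normal(size=(n, 2)), rng.normal(size=(m, 2)) + 1.0
a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
W = rng.normal(size=(r, 2))
Phi_x, Phi_y = positive_features(X, W), positive_features(Y, W)
u, v = sinkhorn_factored(Phi_x, Phi_y, a, b)

# Near convergence the implicit plan P = diag(u) Phi_x Phi_y^T diag(v) has row sums ~ a.
row_sums = u * (Phi_x @ (Phi_y.T @ v))
print("max row-marginal error:", np.max(np.abs(row_sums - a)))
```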