Dictionary-based Block Encoding of Sparse Matrices with Low Subnormalization and Circuit Depth
- URL: http://arxiv.org/abs/2405.18007v6
- Date: Tue, 29 Jul 2025 01:38:06 GMT
- Title: Dictionary-based Block Encoding of Sparse Matrices with Low Subnormalization and Circuit Depth
- Authors: Chunlin Yang, Zexian Li, Hongmei Yao, Zhaobing Fan, Guofeng Zhang, Jianshe Liu
- Abstract summary: We propose an efficient block-encoding protocol for sparse matrices based on a novel data structure. Non-zero elements with the same values belong to the same classification in our block-encoding protocol's dictionary. Our protocol connects to linear combinations of unitaries (LCU) and the sparse access input model (SAIM).
- Score: 2.4487770108795393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Block encoding serves as an important data input model in quantum algorithms, enabling quantum computers to simulate non-unitary operators effectively. In this paper, we propose an efficient block-encoding protocol for sparse matrices based on a novel data structure, called the dictionary data structure, which classifies all non-zero elements according to their values and indices. Non-zero elements that share the same value but have no common row or column indices belong to the same classification in our block-encoding protocol's dictionary. When compiled into the \{\rm U(2), CNOT\} gate set, the protocol queries a $2^n \times 2^n$ sparse matrix with $s$ non-zero elements at a circuit depth of $\mathcal{O}(\log(ns))$, utilizing $\mathcal{O}(n^2s)$ ancillary qubits. This offers an exponential improvement in circuit depth relative to the number of system qubits, compared to existing methods~\cite{clader2022quantum,zhang2024circuit} with a circuit depth of $\mathcal{O}(n)$. Moreover, in our protocol the subnormalization, a scaling factor that influences the measurement probability of the ancillary qubits, is minimized to $\sum_{l=0}^{s_0}\vert A_l\vert$, where $s_0$ denotes the number of classifications in the dictionary and $A_l$ represents the value of the $l$-th classification. Furthermore, we show that our protocol connects to linear combinations of unitaries (LCU) and the sparse access input model (SAIM). To demonstrate the practical utility of our approach, we provide several applications, including Laplacian matrices in graph problems and discrete differential operators.
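As a rough classical illustration of the dictionary idea (a sketch under our own assumptions, not the authors' construction), the snippet below greedily groups the non-zero entries of a matrix into classes that share a value but occupy pairwise-distinct rows and columns, then reports the number of classes and the resulting subnormalization $\sum_l \vert A_l\vert$. The greedy grouping strategy and the 1D discrete Laplacian test matrix are our choices.

```python
import numpy as np

def dictionary_classes(A, tol=1e-12):
    """Greedily group non-zero entries of A into classes whose members
    share the same value but occupy pairwise-distinct rows and columns.
    Returns a list of (value, [(row, col), ...]) classes."""
    classes = []  # each entry: [value, used_rows, used_cols, positions]
    rows, cols = np.nonzero(np.abs(A) > tol)
    for i, j in zip(rows, cols):
        v = A[i, j]
        placed = False
        for cls in classes:
            val, used_r, used_c, pos = cls
            if abs(val - v) <= tol and i not in used_r and j not in used_c:
                used_r.add(i); used_c.add(j); pos.append((i, j))
                placed = True
                break
        if not placed:
            classes.append([v, {i}, {j}, [(i, j)]])
    return [(val, pos) for val, _, _, pos in classes]

# A 1D discrete Laplacian (one of the paper's example applications).
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

classes = dictionary_classes(A)
subnormalization = sum(abs(val) for val, _ in classes)
print(f"{len(classes)} classes, subnormalization = {subnormalization}")
for val, pos in classes:
    print(f"value {val:+.1f}: {len(pos)} entries")
```

For this matrix the greedy grouping finds three classes (the diagonal 2's plus two classes of -1's), giving a subnormalization of 4.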
Related papers
- Learnable quantum spectral filters for hybrid graph neural networks [0.0]
We show that the eigenspace of the Laplacian operator of a graph can be approximated by using a QFT-based circuit. For an $N\times N$ Laplacian, this approach yields an approximate circuit requiring only $n=\log(N)$ qubits. We then apply a classical neural network prediction head to the output of the circuit to construct a complete graph neural network.
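A quick classical sanity check of the underlying fact: the Fourier basis diagonalizes circulant operators, so the Laplacian of a cycle graph is diagonal in the (classical analogue of the) QFT basis. The numpy snippet below verifies this numerically; it illustrates why a QFT-based circuit can reach this eigenspace and is not the paper's construction.

```python
import numpy as np

N = 16  # number of vertices of the cycle graph C_N
# Laplacian of the cycle graph: degree 2 on the diagonal, -1 to each neighbour.
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)

# Unitary DFT matrix (the classical analogue of the QFT on log2(N) qubits).
F = np.fft.fft(np.eye(N)) / np.sqrt(N)

# F diagonalizes any circulant matrix, so F L F^dagger should be diagonal.
D = F @ L @ F.conj().T
off_diag = np.max(np.abs(D - np.diag(np.diag(D))))
print("max off-diagonal magnitude:", off_diag)                 # ~1e-15
print("eigenvalues 2 - 2cos(2*pi*k/N):", np.round(np.diag(D).real, 6))
```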
arXiv Detail & Related papers (2025-07-08T03:36:40Z) - An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks.
We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings.
We evaluate our approach on text and image classification tasks, against well-established classical and quantum models.
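The summary leaves the encoding unspecified; the following is a loose sketch of the general idea of mapping an input to a Hamiltonian over a fixed, finite set of Pauli strings, $H(x)=\sum_i x_i P_i$, scored here by an expectation value against a fixed reference state. The particular strings, state, and read-out are assumptions of ours, not the paper's method.

```python
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1]).astype(complex)
PAULI = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZI' on 3 qubits."""
    return reduce(np.kron, [PAULI[c] for c in label])

# A fixed, finite dictionary of Pauli strings (an assumption for this sketch).
strings = ["ZII", "IZI", "IIZ", "XXI", "IXX"]

def score(x, psi):
    """Map features x to H(x) = sum_i x_i P_i and return <psi|H(x)|psi>."""
    H = sum(xi * pauli_string(s) for xi, s in zip(x, strings))
    return float(np.real(psi.conj() @ H @ psi))

psi = np.ones(8) / np.sqrt(8)            # |+++> reference state
x = np.array([0.3, -0.7, 0.2, 0.5, 0.1])
print("classifier score:", score(x, psi))  # the sign could serve as a binary label
```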
arXiv Detail & Related papers (2025-04-13T11:49:53Z) - Matrix encoding method in variational quantum singular value decomposition [49.494595696663524]
Conditional measurement is involved to avoid small success probability in ancilla measurement.
The objective function for the algorithm can be obtained probabilistically via measurement of the state of a one-qubit subsystem.
arXiv Detail & Related papers (2025-03-19T07:01:38Z) - Preconditioned Block Encodings for Quantum Linear Systems [0.0]
Matrix preconditioning is a well-established classical technique to reduce $\kappa$ by multiplying $A$ by a preconditioner $P$.
We consider four preconditioners and two encoding approaches for block encodings.
Their impact on subnormalisation factors and the condition number $\kappa$ is analysed using practical matrices from Computational Fluid Dynamics.
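For orientation, a minimal classical example of the mechanism (not the paper's preconditioners or CFD matrices): a Jacobi (diagonal) preconditioner applied symmetrically to an ill-conditioned SPD matrix markedly reduces its condition number.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Well-conditioned SPD core M, then spoil it with a wide diagonal scaling S.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = Q @ np.diag(rng.uniform(1.0, 10.0, n)) @ Q.T     # kappa(M) <= 10
S = np.diag(np.logspace(0, 4, n))                    # scales spanning 4 decades
A = S @ M @ S                                        # ill-conditioned SPD matrix

# Jacobi preconditioner: symmetric scaling by diag(A)^(-1/2).
P = np.diag(1.0 / np.sqrt(np.diag(A)))
print("kappa(A)      :", np.linalg.cond(A))
print("kappa(P A P^T):", np.linalg.cond(P @ A @ P.T))
```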
arXiv Detail & Related papers (2025-02-28T10:08:14Z) - Geometric structure and transversal logic of quantum Reed-Muller codes [51.11215560140181]
In this paper, we aim to characterize the gates of quantum Reed-Muller (RM) codes by exploiting the well-studied properties of their classical counterparts.
A set of stabilizer generators for an RM code can be described via $X$ and $Z$ operators acting on subcubes of particular dimensions.
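For context on the classical counterparts, the sketch below builds the generator matrix of a classical Reed-Muller code RM(r, m) by evaluating all monomials of degree at most $r$ on the Boolean cube $\{0,1\}^m$; the quantum stabilizer construction itself is not reproduced here.

```python
import numpy as np
from itertools import combinations, product

def reed_muller_generator(r, m):
    """Generator matrix of RM(r, m): rows are evaluations over F_2^m of all
    monomials in m variables of degree <= r (length-2^m codewords)."""
    points = np.array(list(product([0, 1], repeat=m)), dtype=np.uint8)  # 2^m x m
    rows = []
    for deg in range(r + 1):
        for vars_ in combinations(range(m), deg):
            # Monomial x_{i1}*...*x_{id}; the empty product (deg 0) is the all-ones row.
            row = np.ones(len(points), dtype=np.uint8)
            for v in vars_:
                row &= points[:, v]
            rows.append(row)
    return np.array(rows)

G = reed_muller_generator(1, 3)   # RM(1,3): the [8,4,4] extended Hamming code
print(G)
print("k =", G.shape[0], " n =", G.shape[1])
```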
arXiv Detail & Related papers (2024-10-10T04:07:24Z) - Linear Circuit Synthesis using Weighted Steiner Trees [45.11082946405984]
CNOT circuits are a common building block of general quantum circuits.
This article presents state-of-the-art algorithms for optimizing the number of CNOT gates.
A simulated evaluation shows that the suggested algorithm is almost always beneficial and reduces the number of CNOT gates by up to 10%.
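Background for this item: a CNOT circuit implements an invertible linear map over GF(2), so synthesis amounts to decomposing that matrix into elementary row additions. The sketch below uses plain Gaussian elimination, the textbook baseline that Steiner-tree methods improve on; the paper's weighted-Steiner-tree heuristic is not shown.

```python
import numpy as np

def synthesize_cnots(M):
    """Return a list of (control, target) CNOTs realizing the invertible
    GF(2) matrix M, by Gaussian elimination (no connectivity constraints)."""
    A = M.copy() % 2
    n = A.shape[0]
    gates = []
    for col in range(n):
        # Ensure a pivot: if A[col, col] == 0, add a lower row that has a 1 there.
        if A[col, col] == 0:
            pivot = next(r for r in range(col + 1, n) if A[r, col])
            A[col] ^= A[pivot]; gates.append((pivot, col))
        # Eliminate every other 1 in this column.
        for row in range(n):
            if row != col and A[row, col]:
                A[row] ^= A[col]; gates.append((col, row))
    # The recorded row operations reduce M to I, so reverse them to realize M.
    return gates[::-1]

# Example: a random invertible GF(2) matrix on 5 qubits.
rng = np.random.default_rng(1)
while True:
    M = rng.integers(0, 2, (5, 5), dtype=np.uint8)
    if round(np.linalg.det(M)) % 2 == 1:   # odd determinant => invertible over GF(2)
        break

gates = synthesize_cnots(M)
# Verify: a CNOT (c, t) maps row t <- row t XOR row c in the GF(2) picture.
check = np.eye(5, dtype=np.uint8)
for c, t in gates:
    check[t] ^= check[c]
print("CNOT count:", len(gates), " reproduces M:", np.array_equal(check % 2, M % 2))
```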
arXiv Detail & Related papers (2024-08-07T19:51:22Z) - Quantum encoder for fixed Hamming-weight subspaces [0.0]
We present an exact $n$-qubit computational-basis amplitude encoder of real- or complex-valued data vectors of dimension $d=\binom{n}{k}$ into a subspace of fixed Hamming weight $k$. We show how our encoder can improve the performance of variational quantum algorithms for problems that include particle-number symmetries. Our results constitute a versatile framework for quantum data compression with various potential applications in fields such as quantum chemistry, quantum machine learning, and constrained optimizations.
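To make the target of such an encoder concrete (the state rather than the circuit preparing it), the sketch below places a length-$\binom{n}{k}$ data vector onto the weight-$k$ computational-basis states of $n$ qubits and checks normalization; the lexicographic basis ordering is an arbitrary choice on our part.

```python
import numpy as np
from itertools import combinations
from math import comb

def hamming_weight_state(data, n, k):
    """Embed a length-C(n,k) data vector as amplitudes of the weight-k
    computational-basis states of n qubits (lexicographic ordering assumed)."""
    d = comb(n, k)
    assert len(data) == d, "data must have dimension C(n, k)"
    amps = np.zeros(2 ** n, dtype=complex)
    for coeff, ones in zip(data, combinations(range(n), k)):
        index = sum(1 << (n - 1 - q) for q in ones)   # qubit 0 = most significant
        amps[index] = coeff
    return amps / np.linalg.norm(amps)

n, k = 4, 2
data = np.arange(1, comb(n, k) + 1, dtype=float)      # 6 = C(4,2) data entries
psi = hamming_weight_state(data, n, k)
support = [format(int(i), f"0{n}b") for i in np.flatnonzero(psi)]
print("norm:", np.linalg.norm(psi), " support:", support)
```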
arXiv Detail & Related papers (2024-05-30T18:26:41Z) - Quantum sampling algorithms for quantum state preparation and matrix block-encoding [0.0]
We first present an algorithm based on QRS that prepares a quantum state $|\psi_f\rangle \propto \sum_{x=1}^{N} f(x)\,|x\rangle$.
We then adapt QRS techniques to the matrix block-encoding problem and introduce a QRS-based algorithm for block-encoding a given matrix $A = \sum_{ij} A_{ij} |i\rangle\langle j|$.
arXiv Detail & Related papers (2024-05-19T03:46:11Z) - S-FABLE and LS-FABLE: Fast approximate block-encoding algorithms for unstructured sparse matrices [0.0]
The Fast Approximate BLock-Encoding algorithm (FABLE) is a technique to block-encode arbitrary $N\times N$ dense matrices into quantum circuits.
We describe two modifications of FABLE to efficiently encode sparse matrices.
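As a numerical reminder of what block-encoding means here, independent of FABLE's specific circuit: the snippet below builds the standard unitary dilation of a subnormalized matrix and confirms that $A/\alpha$ sits in its top-left block. The eigendecomposition-based completion is just one convenient classical construction.

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def block_encode(A):
    """Return (U, alpha) with U unitary and U[:N, :N] == A / alpha."""
    N = A.shape[0]
    alpha = np.linalg.norm(A, 2)           # subnormalize so that ||A/alpha|| <= 1
    As = A / alpha
    I = np.eye(N)
    U = np.block([[As,                             psd_sqrt(I - As @ As.conj().T)],
                  [psd_sqrt(I - As.conj().T @ As), -As.conj().T]])
    return U, alpha

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 0.5j * rng.standard_normal((4, 4))
U, alpha = block_encode(A)
print("unitary:", np.allclose(U @ U.conj().T, np.eye(8)))
print("top-left block is A/alpha:", np.allclose(U[:4, :4], A / alpha))
```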
arXiv Detail & Related papers (2024-01-08T20:57:16Z) - Block encoding of matrix product operators [0.0]
We present a procedure to block-encode a Hamiltonian based on its matrix product operator (MPO) representation.
More specifically, we encode every MPO tensor in a larger unitary of dimension $D+2$, where $D = \lceil\log(\chi)\rceil$ is the number of subsequently contracted qubits that scales logarithmically with the virtual bond dimension.
arXiv Detail & Related papers (2023-12-14T12:34:24Z) - Circuit complexity of quantum access models for encoding classical data [4.727325187683489]
We study the Clifford$+T$ complexity of constructing some typical quantum access models.
We show that both sparse-access input models and block-encoding require nearly linear circuit complexities.
Our protocols are built upon improved quantum state preparation and a selective oracle for Pauli strings.
arXiv Detail & Related papers (2023-11-19T16:23:57Z) - Matrix Compression via Randomized Low Rank and Low Precision Factorization [47.902465710511485]
Modern matrices can involve billions of elements, making their storage and processing quite demanding in terms of computational resources and memory usage.
We propose an algorithm that exploits this structure to obtain a low rank decomposition of any matrix $\mathbf{A}$ as $\mathbf{L}\mathbf{R}$.
We empirically demonstrate the efficacy of our algorithm in image compression, nearest neighbor classification of image and text embeddings, and compressing the layers of LlaMa-$7$b.
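A hedged sketch of the general recipe (randomized sketching for a low-rank factorization, followed by aggressive quantization of the factors), not the paper's exact algorithm: the code below compresses a matrix into int8-quantized factors $\mathbf{L}\mathbf{R}$ and reports the reconstruction error.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, seed=0):
    """Randomized range finder + projection: A ~= L @ R with inner dimension `rank`."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for the range of A
    Q = Q[:, :rank]
    return Q, Q.T @ A                       # L = Q (m x r), R = Q^T A (r x n)

def quantize_int8(M):
    """Symmetric per-matrix int8 quantization; returns (int8 codes, scale)."""
    scale = np.max(np.abs(M)) / 127.0
    return np.round(M / scale).astype(np.int8), scale

# A matrix with rapidly decaying spectrum, so a low-rank model is reasonable.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((400, 400)))
V, _ = np.linalg.qr(rng.standard_normal((300, 300)))
A = U[:, :300] @ np.diag(np.exp(-np.arange(300) / 20.0)) @ V.T

L, R = randomized_lowrank(A, rank=40)
(Lq, sL), (Rq, sR) = quantize_int8(L), quantize_int8(R)
A_hat = (Lq.astype(np.float32) * sL) @ (Rq.astype(np.float32) * sR)
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(f"relative Frobenius error with rank-40 int8 factors: {rel_err:.3e}")
```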
arXiv Detail & Related papers (2023-10-17T06:56:57Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Spacetime-Efficient Low-Depth Quantum State Preparation with Applications [93.56766264306764]
We show that a novel deterministic method for preparing arbitrary quantum states requires fewer quantum resources than previous methods.
We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations.
arXiv Detail & Related papers (2023-03-03T18:23:20Z) - Block-encoding structured matrices for data input in quantum computing [0.0]
We show how to construct block encoding circuits based on an arithmetic description of the sparsity and pattern of repeated values of a matrix.
The resulting circuits reduce flag qubit number according to sparsity, and data loading cost according to repeated values.
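A quick illustration of why repeated values matter for data loading (the example matrix and value count are our own): a five-point-stencil Laplacian of any grid size contains only two distinct non-zero values, so a value dictionary stays tiny even as the matrix grows.

```python
import numpy as np

def laplacian_2d(m):
    """Five-point-stencil Laplacian on an m x m grid (Kronecker-sum form)."""
    T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    return np.kron(T, np.eye(m)) + np.kron(np.eye(m), T)

for m in (4, 8, 16):
    A = laplacian_2d(m)
    vals = np.unique(np.round(A[np.nonzero(A)], 12))
    print(f"grid {m}x{m}: matrix {A.shape[0]}x{A.shape[1]}, "
          f"{np.count_nonzero(A)} non-zeros, distinct values = {vals}")
```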
arXiv Detail & Related papers (2023-02-21T19:08:49Z) - Quantum Worst-Case to Average-Case Reductions for All Linear Problems [66.65497337069792]
We study the problem of designing worst-case to average-case reductions for quantum algorithms.
We provide an explicit and efficient transformation of quantum algorithms that are only correct on a small fraction of their inputs into ones that are correct on all inputs.
arXiv Detail & Related papers (2022-12-06T22:01:49Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N\times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - FABLE: Fast Approximate Quantum Circuits for Block-Encodings [0.0]
We propose FABLE, a method to generate approximate quantum circuits for block-encodings of matrices in a fast manner.
FABLE circuits have a simple structure and are directly formulated in terms of one- and two-qubit gates.
We show that FABLE circuits can be compressed and sparsified.
arXiv Detail & Related papers (2022-04-29T21:06:07Z) - VersaGNN: a Versatile accelerator for Graph neural networks [81.1667080640009]
We propose VersaGNN, an ultra-efficient, systolic-array-based versatile hardware accelerator.
VersaGNN achieves on average 3712$\times$ speedup with 1301.25$\times$ energy reduction on CPU, and 35.4$\times$ speedup with 17.66$\times$ energy reduction on GPU.
arXiv Detail & Related papers (2021-05-04T04:10:48Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
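For orientation, the snippet below evaluates two common spectral sums of a PSD matrix classically from its eigenvalues, the kind of quantity such quantum algorithms estimate; the example matrix is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((60, 60))
A = B @ B.T + 1e-2 * np.eye(60)           # PSD by construction

eigs = np.linalg.eigvalsh(A)
log_det   = np.sum(np.log(eigs))          # spectral sum with f = log
trace_inv = np.sum(1.0 / eigs)            # spectral sum with f = 1/x
print("log det A   :", log_det)
print("trace(A^-1) :", trace_inv)
print("check vs slogdet:", np.linalg.slogdet(A)[1])
```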
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Quantum Gram-Schmidt Processes and Their Application to Efficient State Read-out for Quantum Algorithms [87.04438831673063]
We present an efficient read-out protocol that yields the classical vector form of the generated state.
Our protocol suits the case that the output state lies in the row space of the input matrix.
One of our technical tools is an efficient quantum algorithm for performing the Gram-Schmidt orthonormal procedure.
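As a reference point for the classical procedure being emulated, the sketch below runs modified Gram-Schmidt on the rows of a matrix, producing an orthonormal basis of its row space; the quantum read-out protocol itself is not reproduced.

```python
import numpy as np

def gram_schmidt_rows(A, tol=1e-10):
    """Modified Gram-Schmidt on the rows of A; returns an orthonormal
    basis (as rows) of the row space, skipping nearly dependent rows."""
    basis = []
    for v in A.astype(float):
        w = v.copy()
        for q in basis:
            w -= (q @ w) * q          # remove the component along q
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
    return np.array(basis)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])       # third row is the sum of the first two
Q = gram_schmidt_rows(A)
print("rank of row space:", Q.shape[0])             # 2
print("orthonormal:", np.allclose(Q @ Q.T, np.eye(Q.shape[0])))
```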
arXiv Detail & Related papers (2020-04-14T11:05:26Z)