Scalable quantum circuits for $n$-qubit unitary matrices
- URL: http://arxiv.org/abs/2304.14096v2
- Date: Mon, 15 Jan 2024 05:23:46 GMT
- Title: Scalable quantum circuits for $n$-qubit unitary matrices
- Authors: Rohit Sarma Sarkar, Bibhas Adhikari
- Abstract summary: This work presents an optimization-based scalable quantum neural network framework for approximating $n$-qubit unitaries through a generic parametric representation of unitaries.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work presents an optimization-based scalable quantum neural network framework for approximating $n$-qubit unitaries through a generic parametric representation of unitaries, which are obtained as products of exponentials of basis elements of a new basis that we propose as an alternative to the Pauli string basis. We call this basis the Standard Recursive Block Basis; it is constructed using a recursive method, and its elements are permutation-similar to block Hermitian unitary matrices.
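As a rough sketch of this parametric form, the snippet below approximates a target unitary as a product of exponentials of Hermitian basis elements and optimizes the parameters numerically. It is illustrative only: the Standard Recursive Block Basis construction from the paper is not reproduced, so a generic random Hermitian basis stands in for it, and the function names are hypothetical.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def random_hermitian_basis(dim, num_elements, seed=0):
    """Stand-in basis of random Hermitian matrices.
    (The paper's Standard Recursive Block Basis is built recursively; not shown here.)"""
    rng = np.random.default_rng(seed)
    basis = []
    for _ in range(num_elements):
        a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        basis.append((a + a.conj().T) / 2)
    return basis

def parametric_unitary(thetas, basis):
    """Product of exponentials of Hermitian basis elements: U = prod_k exp(i * theta_k * B_k)."""
    u = np.eye(basis[0].shape[0], dtype=complex)
    for theta, b in zip(thetas, basis):
        u = u @ expm(1j * theta * b)
    return u

def infidelity(thetas, basis, target):
    """1 - |Tr(U(theta)^dagger V)| / dim; zero iff U equals V up to a global phase."""
    dim = target.shape[0]
    u = parametric_unitary(thetas, basis)
    return 1.0 - abs(np.trace(u.conj().T @ target)) / dim

# Example: approximate a random 2-qubit unitary (dim = 4).
dim = 4
basis = random_hermitian_basis(dim, num_elements=16)
rng = np.random.default_rng(1)
target = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))[0]
result = minimize(infidelity, x0=np.zeros(len(basis)), args=(basis, target), method="BFGS")
print("final infidelity:", result.fun)
```

The paper's contribution lies in the structured, recursively constructed basis and the scalable layer design, not in this generic optimization recipe.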
Related papers
- Parseval Convolution Operators and Neural Networks [16.78532039510369]
We first identify the Parseval convolution operators as the class of energy-preserving filterbanks.
We then present a constructive approach for the design/specification of such filterbanks via the chaining of elementary Parseval modules.
We demonstrate the usage of those tools with the design of a CNN-based algorithm for the iterative reconstruction of biomedical images.
arXiv Detail & Related papers (2024-08-19T13:31:16Z)
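As a minimal one-dimensional illustration of the energy-preserving (Parseval) property mentioned above, the check below uses a simple two-channel Haar filterbank rather than the paper's convolutional modules; it is only a sketch of the defining norm-preservation identity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)

# Two-channel Haar filterbank (lowpass / highpass), non-overlapping stride-2 windows.
low = np.array([1.0, 1.0]) / np.sqrt(2)
high = np.array([1.0, -1.0]) / np.sqrt(2)

pairs = x.reshape(-1, 2)
coeffs = np.concatenate([pairs @ low, pairs @ high])

# Energy preservation (the Parseval property): ||coeffs|| == ||x||.
assert np.isclose(np.linalg.norm(coeffs), np.linalg.norm(x))
```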
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- A quantum neural network framework for scalable quantum circuit approximation of unitary matrices [0.0]
We develop a quantum neural network framework for quantum circuit approximation of multi-qubit unitary gates.
Layers of the neural networks are defined by products of certain elements of the Standard Recursive Block Basis.
arXiv Detail & Related papers (2024-02-07T22:39:39Z)
- Vectorization of the density matrix and quantum simulation of the von Neumann equation of time-dependent Hamiltonians [65.268245109828]
We develop a general framework to linearize the von Neumann equation, rendering it in a form suitable for quantum simulation.
We show that one of these linearizations of the von Neumann equation corresponds to the standard case in which the state vector becomes the column-stacked elements of the density matrix.
A quantum algorithm to simulate the dynamics of the density matrix is proposed.
arXiv Detail & Related papers (2023-06-14T23:08:51Z)
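The column-stacking identity behind that vectorization can be checked numerically. The sketch below verifies that $\mathrm{vec}(-i[H,\rho]) = -i(I\otimes H - H^{T}\otimes I)\,\mathrm{vec}(\rho)$ for a random Hermitian $H$ and density matrix $\rho$; the paper's broader linearization framework and quantum algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Random Hermitian H and a random density matrix rho.
a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (a + a.conj().T) / 2
b = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = b @ b.conj().T
rho /= np.trace(rho)

def vec(m):
    """Column-stack a matrix into a vector."""
    return m.flatten(order="F")

# von Neumann equation: d(rho)/dt = -i [H, rho].
rhs_matrix = -1j * (H @ rho - rho @ H)

# Vectorized form: d(vec(rho))/dt = -i (I (x) H - H^T (x) I) vec(rho).
liouvillian = -1j * (np.kron(np.eye(dim), H) - np.kron(H.T, np.eye(dim)))

assert np.allclose(vec(rhs_matrix), liouvillian @ vec(rho))
```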
- An Alternative Formulation of the Quantum Phase Estimation Using Projection-Based Tensor Decompositions [0.0]
An alternative version of the quantum phase estimation is proposed, in which the Hadamard gates at the beginning are substituted by a quantum Fourier transform.
This new circuit coincides with the original one when the ancilla register is initialized to $\ket{0}$.
With the help of a projection-based tensor decomposition and closed-form expressions of its exponential, this new method can be interpreted as a multiplier coupled to the Hamiltonian of the corresponding target unitary operator.
arXiv Detail & Related papers (2023-03-10T13:02:29Z)
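The reason the modified circuit agrees with the standard one on a $\ket{0\cdots0}$ ancilla is that a quantum Fourier transform and a layer of Hadamards act identically on that state (both produce the uniform superposition); a quick numerical check:

```python
import numpy as np

n = 3          # number of ancilla qubits
N = 2 ** n

# Quantum Fourier transform matrix: F[j, k] = omega^(j*k) / sqrt(N).
omega = np.exp(2j * np.pi / N)
qft = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

# Layer of Hadamards: H tensored n times.
h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
hadamards = h
for _ in range(n - 1):
    hadamards = np.kron(hadamards, h)

zero_state = np.zeros(N)
zero_state[0] = 1.0

# Both map |0...0> to the uniform superposition.
assert np.allclose(qft @ zero_state, hadamards @ zero_state)
```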
- Efficient classical algorithms for simulating symmetric quantum systems [4.416367445587541]
We show that classical algorithms can efficiently emulate quantum counterparts given certain classical descriptions of the input.
Specifically, we give classical algorithms that calculate ground states and time-evolved expectation values for permutation-invariant Hamiltonians specified in the symmetrized Pauli basis.
arXiv Detail & Related papers (2022-11-30T13:53:16Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
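To indicate what such a superstructure parameterizes, the sketch below writes a generic explicit Runge-Kutta step whose Butcher-tableau entries could be treated as trainable weights; this is an assumption-laden illustration only, not the R2N2 architecture itself.

```python
import numpy as np

def rk_step(f, y, h, A, b):
    """One explicit Runge-Kutta step; A (strictly lower-triangular) and b
    play the role of trainable weights in an R2N2-style layer (hypothetical mapping)."""
    stages = len(b)
    k = []
    for i in range(stages):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(yi))
    return y + h * sum(b[i] * k[i] for i in range(stages))

# Example: classical RK4 coefficients applied to dy/dt = -y.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1 / 6, 1 / 3, 1 / 3, 1 / 6])
y = np.array([1.0])
for _ in range(10):
    y = rk_step(lambda v: -v, y, h=0.1, A=A, b=b)
print(y)   # close to exp(-1) ~ 0.3679
```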
- Preentangling Quantum Algorithms -- the Density Matrix Renormalization Group-assisted Quantum Canonical Transformation [0.0]
We propose the use of parameter-free preentanglers as initial states for quantum algorithms.
We find this strategy to require significantly fewer parameters than corresponding generalized unitary coupled cluster circuits.
arXiv Detail & Related papers (2022-09-15T07:35:21Z)
- Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z)
- A Parallelizable Lattice Rescoring Strategy with Neural Language Models [62.20538383769179]
A posterior-based lattice expansion algorithm is proposed for efficient lattice rescoring with neural language models (LMs) for automatic speech recognition.
Experiments on the Switchboard dataset show that the proposed rescoring strategy obtains comparable recognition performance.
The parallel rescoring method offers more flexibility by simplifying the integration of PyTorch-trained neural LMs for lattice rescoring with Kaldi.
arXiv Detail & Related papers (2021-03-08T21:23:12Z)
- Supervised Quantile Normalization for Low-rank Matrix Approximation [50.445371939523305]
We learn the parameters of quantile normalization operators that can operate row-wise on the values of $X$ and/or its factorization $UV$ to improve the quality of the low-rank representation of $X$ itself.
We demonstrate the applicability of these techniques on synthetic and genomics datasets.
arXiv Detail & Related papers (2020-02-08T21:06:02Z)
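A plain (non-learned) version of row-wise quantile normalization looks like the sketch below: each row of $X$ is mapped by rank onto a shared sorted vector of target values. In the paper those target values are learned end-to-end; here they are fixed, and the helper name is hypothetical.

```python
import numpy as np

def quantile_normalize_rows(X, target_values):
    """Map each row of X, by rank, onto a shared sorted vector of target values.
    (In the paper these targets are learned; here they are fixed for illustration.)"""
    target = np.sort(np.asarray(target_values, dtype=float))
    out = np.empty_like(X, dtype=float)
    for i, row in enumerate(X):
        ranks = np.argsort(np.argsort(row))   # rank of each entry within its row
        out[i] = target[ranks]
    return out

X = np.random.default_rng(0).normal(size=(5, 8))
targets = np.linspace(-1.0, 1.0, 8)
X_qn = quantile_normalize_rows(X, targets)
print(np.sort(X_qn, axis=1))   # every row now carries exactly the target values
```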
This list is automatically generated from the titles and abstracts of the papers on this site.