Matrix manipulations via unitary transformations and ancilla-state measurements
- URL: http://arxiv.org/abs/2311.11329v1
- Date: Sun, 19 Nov 2023 14:06:25 GMT
- Title: Matrix manipulations via unitary transformations and ancilla-state measurements
- Authors: Alexander I. Zenchuk, Wentao Qi, Asutosh Kumar, Junde Wu
- Abstract summary: We propose protocols for calculating the inner product, matrix addition and matrix multiplication based on multiqubit Toffoli-type operations and the simplest one-qubit operations, using ancilla measurements to remove all garbage of the calculations.
The depth (runtime) of the addition protocol is $O(1)$, while that of the other protocols grows logarithmically with the dimensionality of the considered matrices.
- Score: 49.494595696663524
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose protocols for calculating the inner product, matrix addition and
matrix multiplication based on multiqubit Toffoli-type operations and the simplest
one-qubit operations, and employ ancilla measurements to remove all garbage of the
calculations. The depth (runtime) of the addition protocol is $O(1)$, while that of
the other protocols grows logarithmically with the dimensionality of the considered
matrices.
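As a minimal illustration of the ancilla-measurement idea (a classical simulation of a Hadamard-test-style circuit, not the paper's Toffoli-based protocol), the sketch below amplitude-encodes two unit vectors, interferes them through an ancilla qubit, and reads their inner product from the ancilla statistics without ever measuring the data register.

```python
# Minimal classical simulation of an ancilla-based inner-product estimate
# (Hadamard-test style). Illustrative analogue only, not the paper's
# Toffoli-based protocol: amplitude-encode two unit vectors, interfere them
# through an ancilla, and read Re<a|b> from the ancilla statistics.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                   # vector dimension (3 "data" qubits)
a = rng.normal(size=n); a /= np.linalg.norm(a)
b = rng.normal(size=n); b /= np.linalg.norm(b)

# State after controlled preparation: (|0>|a> + |1>|b>)/sqrt(2)
psi = np.concatenate([a, b]) / np.sqrt(2)

# Hadamard on the ancilla mixes the two branches.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.kron(H, np.eye(n)) @ psi

# P(ancilla = 0) = (1 + Re<a|b>)/2, so the ancilla measurement reveals the
# inner product while the data register ("garbage") is never read out.
p0 = np.sum(np.abs(psi[:n]) ** 2)
print("estimated <a|b> =", 2 * p0 - 1)
print("exact     <a|b> =", float(a @ b))
```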
Related papers
- Optimal Quantization for Matrix Multiplication [35.007966885532724]
We present a universal quantizer based on nested lattices with an explicit guarantee of approximation error for any (non-random) pair of matrices $A$, $B$ in terms of only the Frobenius norms $\|A\|_F$, $\|B\|_F$ and $\|A^\top B\|_F$.
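As a simplified illustration only (a plain uniform scalar quantizer rather than the paper's nested-lattice construction), the sketch below quantizes $A$ and $B$ and reports the product error relative to the Frobenius norms that appear in the guarantee.

```python
# Simplified sketch of quantized matrix multiplication: quantize A and B with
# a uniform scalar quantizer (NOT the paper's nested-lattice scheme) and
# compare the product error against the Frobenius norms in the guarantee.
import numpy as np

def uniform_quantize(M, bits=4):
    """Round each entry to a uniform grid covering [-max|M|, max|M|]."""
    scale = np.max(np.abs(M)) / (2 ** (bits - 1) - 1)
    return np.round(M / scale) * scale

rng = np.random.default_rng(1)
A = rng.normal(size=(128, 64))
B = rng.normal(size=(128, 64))

Aq, Bq = uniform_quantize(A), uniform_quantize(B)
err = np.linalg.norm(A.T @ B - Aq.T @ Bq, "fro")
print("relative product error:",
      err / (np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")))
```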
arXiv Detail & Related papers (2024-10-17T17:19:48Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices involved.
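A hedged sketch of the general matrix-free idea (not the paper's algorithm), assuming a generic low-rank-plus-diagonal covariance $\sigma^2 I + \Phi D \Phi^\top$: the system is solved by conjugate gradients using only matrix-vector products with $\Phi$, so the dense covariance is never formed.

```python
# Illustration of the "avoid forming the covariance" idea (not the paper's
# algorithm): solve (sigma^2 I + Phi D Phi^T) x = b with conjugate gradients,
# using only matrix-vector products with Phi instead of building the dense
# n x n covariance matrix.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(2)
n, k = 2000, 50
Phi = rng.normal(size=(n, k))
d = rng.uniform(0.5, 2.0, size=k)       # diagonal of D
sigma2 = 0.1
b = rng.normal(size=n)

def cov_matvec(x):
    # Sigma @ x computed in O(n*k) without materializing the covariance.
    return sigma2 * x + Phi @ (d * (Phi.T @ x))

Sigma_op = LinearOperator((n, n), matvec=cov_matvec)
x, info = cg(Sigma_op, b, atol=1e-10)
print("CG converged:", info == 0)
print("residual norm:", np.linalg.norm(cov_matvec(x) - b))
```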
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- A Singular Woodbury and Pseudo-Determinant Matrix Identities and Application to Gaussian Process Regression [1.5002438468152661]
We study a matrix that arises from a singular form of the Woodbury matrix identity.
We present generalized inverse and pseudo-determinant identities for this matrix.
We extend the definition of the precision matrix to the Bott-Duffin inverse of the covariance matrix.
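For context, the sketch below numerically verifies the classical (nonsingular) Woodbury identity; the paper's contribution is the singular case, with generalized inverses and pseudo-determinants, which this check does not cover.

```python
# Numerical check of the classical Woodbury identity
#   (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}.
# The paper treats a *singular* variant (generalized inverses and
# pseudo-determinants); this sketch only verifies the standard identity.
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 2
A = np.diag(rng.uniform(1.0, 2.0, size=n))   # easy-to-invert "base" matrix
U = rng.normal(size=(n, k))
C = np.diag(rng.uniform(1.0, 2.0, size=k))
V = rng.normal(size=(k, n))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + U @ C @ V)
rhs = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv
print("max deviation:", np.max(np.abs(lhs - rhs)))   # ~1e-15
```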
arXiv Detail & Related papers (2022-07-16T23:45:27Z)
- Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z)
- Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
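A bare-bones illustration of correlation-based recovery from 1-bit measurements $y = \mathrm{sign}(Ax)$, using a dense Gaussian matrix and plain sparsity in place of the paper's partial circulant structure and generative prior.

```python
# Bare-bones correlation-based recovery from 1-bit measurements
# y = sign(A x): estimate the direction via A^T y, then hard-threshold.
# The paper uses partial Gaussian circulant matrices and a generative prior;
# this sketch uses a dense Gaussian A and plain sparsity for illustration.
import numpy as np

rng = np.random.default_rng(4)
n, m, s = 200, 1500, 5
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.normal(size=s)
x /= np.linalg.norm(x)                     # 1-bit data only identifies direction

A = rng.normal(size=(m, n))
y = np.sign(A @ x)                         # 1-bit measurements

corr = A.T @ y / m                         # correlation-based estimate
keep = np.argsort(np.abs(corr))[-s:]       # keep the s largest entries
x_hat = np.zeros(n)
x_hat[keep] = corr[keep]
x_hat /= np.linalg.norm(x_hat)

print("cosine similarity:", float(x @ x_hat))
```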
arXiv Detail & Related papers (2021-08-08T05:28:06Z)
- Calculating elements of matrix functions using divided differences [0.3437656066916039]
We introduce a method for calculating individual elements of matrix functions.
We showcase our approach by calculating the matrix elements of the exponential of a transverse-field Ising model.
We discuss practical applications of our method.
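For orientation only, the sketch below computes a single element $\langle i|e^{-\beta H}|j\rangle$ for a small transverse-field Ising chain by dense exponentiation, with illustrative couplings; the paper's divided-difference expansion targets system sizes where this brute-force route is infeasible.

```python
# Brute-force reference for a single matrix element <i| exp(-beta H) |j> of a
# small transverse-field Ising chain via dense exponentiation. It only shows
# the quantity being computed; the divided-difference method is for systems
# where forming exp(-beta H) like this is impossible.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

L, J, h, beta = 4, 1.0, 0.7, 0.5            # illustrative parameters
H = np.zeros((2 ** L, 2 ** L))
for k in range(L - 1):                      # -J * Z_k Z_{k+1} couplings
    ops = [I2] * L; ops[k] = Z; ops[k + 1] = Z
    H -= J * kron_chain(ops)
for k in range(L):                          # -h * X_k transverse field
    ops = [I2] * L; ops[k] = X
    H -= h * kron_chain(ops)

U = expm(-beta * H)
i, j = 0, 3                                 # two computational basis states
print("<i| exp(-beta H) |j> =", U[i, j])
```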
arXiv Detail & Related papers (2021-07-29T15:53:11Z)
- Meta-learning for Matrix Factorization without Shared Rows or Columns [39.56814839510978]
The proposed method uses a neural network that takes a matrix as input and generates prior distributions over the factor matrices of that matrix.
The neural network is meta-learned such that the expected imputation error is minimized.
In our experiments with three user-item rating datasets, we demonstrate that our proposed method can impute the missing values from a limited number of observations in unseen matrices.
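A minimal sketch of the meta-learning setup under simplifying assumptions (point-estimate factors instead of prior distributions, a toy fully connected network, synthetic low-rank tasks): the network maps a masked matrix to factors $U$, $V$ and is trained so that $UV^\top$ imputes held-out entries across tasks.

```python
# Minimal sketch (not the paper's architecture): a small network maps an
# observed (masked) matrix to factor matrices U, V and is meta-trained so
# that U @ V.T imputes the held-out entries of freshly sampled matrices.
import torch
import torch.nn as nn

R, C, K = 8, 6, 3                           # matrix size and factor rank
net = nn.Sequential(nn.Linear(R * C, 64), nn.ReLU(),
                    nn.Linear(64, (R + C) * K))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sample_task():
    """Sample a random low-rank matrix and an observation mask (one 'task')."""
    M = torch.randn(R, K) @ torch.randn(K, C)
    mask = (torch.rand(R, C) < 0.5).float()   # ~50% of entries observed
    return M, mask

for step in range(2000):
    M, mask = sample_task()
    out = net((M * mask).reshape(-1))         # unobserved entries zeroed
    U = out[: R * K].reshape(R, K)
    V = out[R * K:].reshape(C, K)
    M_hat = U @ V.T
    # Meta-objective: imputation error on the *unobserved* entries.
    loss = (((M_hat - M) ** 2) * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()

print("final imputation loss:", float(loss))
```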
arXiv Detail & Related papers (2021-06-29T07:40:20Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
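As background only, the sketch below shows classic Gaussian sketching for an approximate matrix product; the paper's point is extending such dimensionality reduction to non-PSD and square-root matrices, which this example does not address.

```python
# Classic Gaussian sketching for an approximate matrix product
# A^T B ~= (S A)^T (S B); background only -- the paper extends such
# guarantees to non-PSD and "square-root" matrices.
import numpy as np

rng = np.random.default_rng(5)
n, d, m = 5000, 40, 400                     # tall matrices, sketch size m
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))

S = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian sketching matrix
approx = (S @ A).T @ (S @ B)
exact = A.T @ B
rel_err = np.linalg.norm(approx - exact, "fro") / np.linalg.norm(exact, "fro")
print("relative Frobenius error:", rel_err)
```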
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Sketching Transformed Matrices with Applications to Natural Language Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z)
- Tangent-space methods for truncating uniform MPS [0.0]
A central primitive in quantum tensor network simulations is the problem of approximating a matrix product state with one of a lower bond dimension.
We formulate a tangent-space based variational algorithm to achieve this for uniform (infinite) matrix product states.
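For comparison, the sketch below performs the standard local SVD truncation of a single MPS tensor to a lower bond dimension; the paper's tangent-space variational algorithm for uniform (infinite) MPS is a different, global approach.

```python
# Standard local SVD truncation of one MPS tensor to a lower bond dimension
# -- the textbook baseline, not the paper's tangent-space variational method
# for uniform (infinite) MPS.
import numpy as np

rng = np.random.default_rng(6)
Dl, d, Dr, D_new = 20, 2, 20, 8             # left bond, physical dim, right bond, target bond

A = rng.normal(size=(Dl, d, Dr))            # one MPS tensor A[left, phys, right]

# Group (left, phys) legs against the right leg and keep D_new singular values.
M = A.reshape(Dl * d, Dr)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
U, s, Vt = U[:, :D_new], s[:D_new], Vt[:D_new, :]

A_trunc = U.reshape(Dl, d, D_new)           # truncated tensor
R = np.diag(s) @ Vt                         # weight to absorb into the next tensor
err = np.linalg.norm(M - (U * s) @ Vt)      # local truncation error
print("discarded weight:", err)
```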
arXiv Detail & Related papers (2020-01-31T14:54:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.