Variational quantum regression algorithm with encoded data structure
- URL: http://arxiv.org/abs/2307.03334v3
- Date: Thu, 25 Jan 2024 01:01:33 GMT
- Title: Variational quantum regression algorithm with encoded data structure
- Authors: C.-C. Joseph Wang and Ryan S. Bennink
- Abstract summary: We construct a quantum regression algorithm wherein the quantum state directly encodes the classical data table.
We show explicitly, for the first time, how the structure of the classical data can be exploited directly through quantum subroutines.
- Score: 0.21756081703276003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hybrid variational quantum algorithms (VQAs) are promising for solving
practical problems such as combinatorial optimization, quantum chemistry
simulation, quantum machine learning, and quantum error correction on noisy
quantum computers. However, with a typical random ansatz or a quantum alternating
operator ansatz, the resulting variational quantum algorithm becomes a black box
with respect to model interpretation. In this paper we construct a quantum
regression algorithm wherein the quantum state directly encodes the classical
data table and the variational parameters correspond directly to the regression
coefficients, which are real numbers by construction. This yields a high degree
of model interpretability and a minimal optimization cost with the right
expressiveness. Instead of taking the state preparation for granted, we discuss
state preparation with different encoders, their time complexity, and the
overall resource cost. We exploit the encoded data structure to reduce the
algorithm's time complexity. To the best of our knowledge, we show explicitly
for the first time how the structure of the classical data can be exploited
directly through quantum subroutines by construction. For nonlinear regression,
the algorithm can be extended by building nonlinear features into the training
data, as demonstrated by numerical results. In addition, we demonstrate that the
model is trainable only when the number of features $M$ is much smaller than the
number of records $L$, which justifies the assumption $L \gg M$ in our resource
estimation.
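As a rough illustration of the parameterization described in the abstract (not the authors' quantum circuit or encoding), the sketch below assumes the variational parameters theta play the role of the regression coefficients and are fit by minimizing a least-squares cost over an L x M data table with L >> M; the data, learning rate, and plain gradient-descent loop are placeholders standing in for the quantum subroutines and the VQA optimizer.

```python
# Minimal classical-analog sketch: the variational parameters ARE the
# regression coefficients, and the cost is a least-squares error over an
# L x M data table (L records, M features), assuming L >> M.
# The quantum data encoding and subroutines from the paper are not modeled here.
import numpy as np

rng = np.random.default_rng(0)

L, M = 200, 4                      # records >> features, as assumed in the paper
X = rng.normal(size=(L, M))        # classical data table (features)
true_coeffs = np.array([1.5, -2.0, 0.3, 0.7])
y = X @ true_coeffs + 0.05 * rng.normal(size=L)   # noisy regression targets

def cost(theta, X, y):
    """Least-squares cost; theta corresponds directly to the regression coefficients."""
    residual = X @ theta - y
    return float(residual @ residual) / len(y)

def grad(theta, X, y):
    """Gradient of the cost with respect to the variational parameters."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(M)                # real-valued variational parameters by construction
for step in range(500):            # plain gradient descent stands in for the VQA optimizer
    theta -= 0.1 * grad(theta, X, y)

print("fitted coefficients:", np.round(theta, 3))
print("final cost:", cost(theta, X, y))
```

In the paper itself this cost would be estimated from measurements on the data-encoding quantum state, and nonlinear regression is handled by appending nonlinear feature columns to the data table before encoding.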
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Taming Quantum Time Complexity [45.867051459785976]
We show how to achieve both exactness and thriftiness in the setting of time complexity.
We employ a novel approach to the design of quantum algorithms based on what we call transducers.
arXiv Detail & Related papers (2023-11-27T14:45:19Z) - Predicting RNA Secondary Structure on Universal Quantum Computer [2.277461161767121]
Knowing how the secondary structure is formed is the first step toward understanding how RNA folds from its base sequence.
Traditional energy-based algorithms fall short in precision, particularly for non-nested sequences.
Gate model algorithms for universal quantum computing are not available.
arXiv Detail & Related papers (2023-05-16T15:57:38Z) - Quantum Architecture Search for Quantum Monte Carlo Integration via
Conditional Parameterized Circuits with Application to Finance [0.0]
Classical Monte Carlo algorithms can theoretically be sped up on a quantum computer by employing amplitude estimation (AE).
We develop a straightforward approach based on pretraining parameterized quantum circuits.
We show how they can be transformed into their conditional variant, making them usable as a subroutine in an AE algorithm.
arXiv Detail & Related papers (2023-04-18T07:56:57Z) - Quantum Worst-Case to Average-Case Reductions for All Linear Problems [66.65497337069792]
We study the problem of designing worst-case to average-case reductions for quantum algorithms.
We provide an explicit and efficient transformation of quantum algorithms that are only correct on a small fraction of their inputs into ones that are correct on all inputs.
arXiv Detail & Related papers (2022-12-06T22:01:49Z) - Quantum Extremal Learning [0.8937790536664091]
We propose a quantum algorithm for 'extremal learning', which is the process of finding the input to a hidden function that extremizes the function output.
The algorithm, called quantum extremal learning (QEL), consists of a parametric quantum circuit that is variationally trained to model data input-output relationships.
arXiv Detail & Related papers (2022-05-05T17:37:26Z) - A Hybrid Quantum-Classical Algorithm for Robust Fitting [47.42391857319388]
We propose a hybrid quantum-classical algorithm for robust fitting.
Our core contribution is a novel robust fitting formulation that solves a sequence of integer programs.
We present results obtained using an actual quantum computer.
arXiv Detail & Related papers (2022-01-25T05:59:24Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum Ensemble for Classification [2.064612766965483]
A powerful way to improve performance in machine learning is to construct an ensemble that combines the predictions of multiple models.
We propose a new quantum algorithm that exploits quantum superposition, entanglement and interference to build an ensemble of classification models.
arXiv Detail & Related papers (2020-07-02T11:26:54Z) - Quantum Gram-Schmidt Processes and Their Application to Efficient State
Read-out for Quantum Algorithms [87.04438831673063]
We present an efficient read-out protocol that yields the classical vector form of the generated state.
Our protocol suits the case that the output state lies in the row space of the input matrix.
One of our technical tools is an efficient quantum algorithm for performing the Gram-Schmidt orthonormal procedure.
arXiv Detail & Related papers (2020-04-14T11:05:26Z)
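For context on the last entry, the textbook classical Gram-Schmidt orthonormalization that the cited quantum subroutine performs coherently can be sketched as follows; this shows only the standard classical procedure, not the quantum protocol from that paper.

```python
# Textbook classical Gram-Schmidt orthonormalization, shown only to make clear
# what the quantum Gram-Schmidt subroutine in the last entry computes.
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Return an orthonormal basis for the span of the input row vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - (q @ w) * q       # remove the component along each prior basis vector
        norm = np.linalg.norm(w)
        if norm > tol:                # skip (near-)linearly dependent vectors
            basis.append(w / norm)
    return np.array(basis)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])       # third row depends on the first two
Q = gram_schmidt(A)
print(Q @ Q.T)                        # ~ identity on the 2-dimensional row space
```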