Adiabatic Quantum Linear Regression
- URL: http://arxiv.org/abs/2008.02355v1
- Date: Wed, 5 Aug 2020 20:40:41 GMT
- Title: Adiabatic Quantum Linear Regression
- Authors: Prasanna Date, Thomas Potok
- Abstract summary: We present an adiabatic quantum computing approach for training a linear regression model.
Our analysis shows that the quantum approach attains up to 2.8x speedup over the classical approach on larger datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major challenge in machine learning is the computational expense of
training these models. Model training can be viewed as a form of optimization
used to fit a machine learning model to a set of data, which can take a
significant amount of time on classical computers. Adiabatic quantum computers
have been shown to excel at solving optimization problems, and therefore, we
believe, present a promising alternative to improve machine learning training
times. In this paper, we present an adiabatic quantum computing approach for
training a linear regression model. In order to do this, we formulate the
regression problem as a quadratic unconstrained binary optimization (QUBO)
problem. We analyze our quantum approach theoretically, test it on the D-Wave
2000Q adiabatic quantum computer and compare its performance to a classical
approach that uses the Scikit-learn library in Python. Our analysis shows that
the quantum approach attains up to 2.8x speedup over the classical approach on
larger datasets, and performs on par with the classical approach on the
regression error metric.
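The QUBO formulation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact construction: it assumes each regression weight is encoded as a non-negative binary expansion over a fixed precision vector, and a brute-force search over bit strings stands in for the D-Wave annealer.

```python
import itertools
import numpy as np

def regression_qubo(X, y, precision):
    """Build a QUBO matrix for min_w ||Xw - y||^2, with each weight
    encoded as a binary expansion over the given precision vector."""
    n, d = X.shape
    # Encoding matrix: w = P_enc @ b, where b is a binary vector of
    # length d * len(precision).
    P_enc = np.kron(np.eye(d), np.asarray(precision).reshape(1, -1))
    A = P_enc.T @ X.T @ X @ P_enc     # quadratic couplings
    c = -2.0 * P_enc.T @ X.T @ y      # linear terms
    Q = A + np.diag(c)                # fold linear terms in, since b_i^2 = b_i
    return Q, P_enc

def brute_force_solve(Q):
    """Exhaustively minimize b^T Q b over binary vectors (tiny sizes only);
    this stands in for the adiabatic quantum computer."""
    m = Q.shape[0]
    best_b, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=m):
        b = np.array(bits, dtype=float)
        e = b @ Q @ b
        if e < best_e:
            best_b, best_e = b, e
    return best_b

# Toy data whose exact solution w = [1, 2] lies on the encoding grid.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([1.0, 2.0])
precision = [2.0, 1.0]            # each weight representable in {0, 1, 2, 3}
Q, P_enc = regression_qubo(X, y, precision)
b = brute_force_solve(Q)
w = P_enc @ b
print(w)  # recovers [1. 2.]
```

Since the true weights fall exactly on the grid defined by the precision vector, the QUBO minimum coincides with the least-squares solution; in general the annealer can only return the nearest representable weight vector.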
Related papers
- Memory-Augmented Quantum Reservoir Computing [0.0]
We present a hybrid quantum-classical approach that implements memory through classical post-processing of quantum measurements.
We tested our model on two physical platforms: a fully connected Ising model and a Rydberg atom array.
arXiv Detail & Related papers (2024-09-15T22:44:09Z) - Adaptive Learning for Quantum Linear Regression [10.445957451908695]
In a recent work, linear regression was formulated as a quadratic binary optimization problem.
This approach promises a computational time advantage for large datasets.
However, the quality of the solution is limited by the necessary use of a precision vector.
In this work, we focus on the practical challenge of improving the precision vector encoding.
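To make the precision-vector limitation concrete, here is a small illustration (an assumption for exposition, not the paper's exact encoding): the precision vector fixes a finite grid of representable weights, so the QUBO solution can be no closer to the true weight than the nearest grid point.

```python
import itertools
import numpy as np

# A precision vector fixes the finite set of weights the QUBO can represent:
# each weight is a subset sum of the precision entries (unsigned encoding
# assumed here for simplicity; signed/offset encodings are also common).
precision = np.array([2.0, 1.0, 0.5])

bits = np.array(list(itertools.product([0, 1], repeat=len(precision))))
grid = np.unique(bits @ precision)
print(grid)  # 0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5

# The best representable approximation of a target weight:
target = 1.3
nearest = grid[np.argmin(np.abs(grid - target))]
print(nearest)  # 1.5, i.e. a quantization error of 0.2
```

Improving the encoding (e.g. adaptively re-centering or rescaling the precision vector around the current estimate) shrinks this quantization error without increasing the qubit count.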
arXiv Detail & Related papers (2024-08-05T21:09:01Z) - Adiabatic Quantum Support Vector Machines [0.8445084028034932]
We describe an adiabatic quantum approach for training support vector machines.
We show that the time complexity of our quantum approach is an order of magnitude better than the classical approach.
arXiv Detail & Related papers (2024-01-23T04:50:13Z) - Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - Toward Theoretical Guidance for Two Common Questions in Practical
Cross-Validation based Hyperparameter Selection [72.76113104079678]
We provide the first theoretical treatment of two common questions in cross-validation based hyperparameter selection.
We show that these generalizations can, respectively, perform at least as well as always retraining or never retraining.
arXiv Detail & Related papers (2023-01-12T16:37:12Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Accelerating the training of single-layer binary neural networks using
the HHL quantum algorithm [58.720142291102135]
We show that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm, and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z) - QBoost for regression problems: solving partial differential equations [0.0]
The hybrid algorithm is capable of finding a solution to a partial differential equation with good precision and favorable scaling in the required number of qubits.
The classical part consists of training several regressors, each capable of solving a partial differential equation using machine learning.
The quantum part consists of adapting the QBoost algorithm to solve regression problems.
arXiv Detail & Related papers (2021-08-30T16:13:04Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Machine Learning Framework for Quantum Sampling of Highly-Constrained,
Continuous Optimization Problems [101.18253437732933]
We develop a generic, machine learning-based framework for mapping continuous-space inverse design problems into surrogate unconstrained binary optimization problems.
We showcase the framework's performance on two inverse design problems by optimizing (i) thermal emitter topologies for thermophotovoltaic applications and (ii) diffractive meta-gratings for highly efficient beam steering.
arXiv Detail & Related papers (2021-05-06T02:22:23Z) - QUBO Formulations for Training Machine Learning Models [0.0]
We leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently.
We formulate the training problems of three machine learning models---linear regression, support vector machine (SVM) and equal-sized k-means clustering---as QUBO problems so that they can be trained on adiabatic quantum computers efficiently.
We show that the time and space complexities of our formulations are better than (in the case of SVM and equal-sized k-means clustering) or equivalent to (in the case of linear regression) those of their classical counterparts.
arXiv Detail & Related papers (2020-08-05T21:16:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.