QUBO Formulations for Training Machine Learning Models
- URL: http://arxiv.org/abs/2008.02369v1
- Date: Wed, 5 Aug 2020 21:16:05 GMT
- Title: QUBO Formulations for Training Machine Learning Models
- Authors: Prasanna Date, Davis Arthur, Lauren Pusey-Nazzaro
- Abstract summary: We leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently.
We formulate the training problems of three machine learning models---linear regression, support vector machine (SVM) and equal-sized k-means clustering---as QUBO problems so that they can be trained on adiabatic quantum computers efficiently.
We show that the time and space complexities of our formulations are better than (in the case of SVM and equal-sized k-means clustering) or equivalent to (in the case of linear regression) those of their classical counterparts.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training machine learning models on classical computers is usually
a time- and compute-intensive process. With Moore's law coming to an end and
the ever-increasing demand for large-scale data analysis using machine
learning, we must
leverage non-conventional computing paradigms like quantum computing to train
machine learning models efficiently. Adiabatic quantum computers like the
D-Wave 2000Q can approximately solve NP-hard optimization problems, such as the
quadratic unconstrained binary optimization (QUBO), faster than classical
computers. Since many machine learning problems are also NP-hard, we believe
adiabatic quantum computers might be instrumental in training machine learning
models efficiently in the post Moore's law era. In order to solve a problem on
adiabatic quantum computers, it must be formulated as a QUBO problem, which is
a challenging task in itself. In this paper, we formulate the training problems
of three machine learning models---linear regression, support vector machine
(SVM) and equal-sized k-means clustering---as QUBO problems so that they can be
trained on adiabatic quantum computers efficiently. We also analyze the time
and space complexities of our formulations and compare them to the
state-of-the-art classical algorithms for training these machine learning
models. We show that the time and space complexities of our formulations are
better than (in the case of SVM and equal-sized k-means clustering) or
equivalent to (in the case of linear regression) those of their classical
counterparts.
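The linear-regression formulation can be sketched in miniature: each real-valued weight is replaced by a fixed-point binary expansion, which turns the least-squares objective into a QUBO over the bit variables. The toy below is a hedged illustration, not the paper's experimental setup: the data, the three-bit precision vector `p`, and the brute-force search (standing in for a D-Wave annealer) are all assumptions made for the example.

```python
from itertools import product

# Toy data for one feature: y = 3*x exactly, so the best weight is w* = 3.
X = [1.0, 2.0, 3.0]
y = [3.0, 6.0, 9.0]

# Fixed-point encoding of the weight: w = sum_k p[k] * b[k], with b[k] in {0,1}.
p = [1.0, 2.0, 4.0]                      # illustrative precision (place-value) vector

xtx = sum(v * v for v in X)              # scalar X^T X = 14
xty = sum(v * t for v, t in zip(X, y))   # scalar X^T y = 42

# QUBO for E(b) = ||X w(b) - y||^2 - ||y||^2 (the constant ||y||^2 is dropped):
# quadratic coefficients p_i * p_j * X^T X; linear terms -2 * p_i * X^T y go on
# the diagonal, since b_i^2 = b_i for binary variables.
n = len(p)
Q = [[p[i] * p[j] * xtx for j in range(n)] for i in range(n)]
for i in range(n):
    Q[i][i] -= 2.0 * p[i] * xty

def energy(b):
    return sum(b[i] * Q[i][j] * b[j] for i in range(n) for j in range(n))

# Exhaustive search over the 2^n bit strings stands in for the annealer here.
best = min(product([0, 1], repeat=n), key=energy)
w = sum(pi * bi for pi, bi in zip(p, best))
print(best, w)   # (1, 1, 0) decodes to w = 1 + 2 = 3.0, the exact least-squares weight
```

On an adiabatic machine, Q would be mapped to qubit couplings rather than enumerated; the number of binary variables grows with both the feature count and the bits of precision, which is where the complexity analysis in the paper comes in.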
Related papers
- Quantum-Assisted Simulation: A Framework for Designing Machine Learning
Models in the Quantum Computing Domain [0.0]
We explore the history of quantum computing, examine existing QML algorithms, and aim to present a simplified procedure for setting up simulations of QML algorithms.
We conducted simulations on a dataset using both machine learning and quantum machine learning approaches.
arXiv Detail & Related papers (2023-11-17T07:33:42Z)
- Towards provably efficient quantum algorithms for large-scale machine-learning models [11.440134080370811]
We show that fault-tolerant quantum computing could provide provably efficient resolutions for generic (stochastic) gradient descent algorithms.
We benchmark instances of large machine learning models from 7 million to 103 million parameters.
arXiv Detail & Related papers (2023-03-06T19:00:27Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
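The single-qubit re-uploading scheme can be sketched with plain 2x2 matrix arithmetic in place of the Qiskit SDK the authors use; the layer parameterization `(w, b)` below, feeding the datum into an RY rotation angle at every layer, is an illustrative choice rather than the paper's exact circuit:

```python
import math

def ry(theta):
    """2x2 (real) matrix of the RY(theta) single-qubit rotation gate."""
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    return [[c, -s], [s, c]]

def apply(gate, state):
    """Multiply a 2x2 gate into a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def model(x, params):
    """params is a list of (w, b) layer angles; returns P(|0>) after all layers."""
    state = [1.0, 0.0]                        # start in |0>
    for w, b in params:
        state = apply(ry(w * x + b), state)   # re-upload x at every layer
    return state[0] ** 2

# One layer reduces to P(|0>) = cos^2((w*x + b) / 2); stacking layers adds
# higher harmonics of x, which is what gives the single qubit expressive power.
print(model(0.0, [(1.0, 0.0)]))   # 1.0: RY(0) is the identity, so the qubit stays in |0>
```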
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z)
- On the Super-exponential Quantum Speedup of Equivariant Quantum Machine Learning Algorithms with SU($d$) Symmetry [14.281289319738633]
We enhance a natural model of quantum computation---permutational quantum computing (PQC)---and define a more powerful model: PQC+.
While PQC was shown to be effectively classically simulatable, we exhibit a problem which can be efficiently solved on a PQC+ machine.
We discuss practical quantum machine learning algorithms which can be carried out in the paradigm of PQC+.
arXiv Detail & Related papers (2022-07-15T01:41:53Z)
- Quantum Semi-Supervised Kernel Learning [4.726777092009554]
We present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines.
We show that it maintains the same speedup as the fully-supervised Quantum LS-SVM.
arXiv Detail & Related papers (2022-04-22T13:39:55Z)
- Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the difficulty of credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
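The effect of contraction order is visible already in the matrix-chain special case of the problem. The shapes below are illustrative, and exhaustively comparing two orders stands in for the RL agent's search:

```python
# Multiplying an (a x b) matrix by a (b x c) matrix costs about a*b*c scalar
# multiplications; the contraction order changes the total cost dramatically.

def multiply_cost(a, b):
    """a and b are (rows, cols) shapes; returns (cost, shape of the product)."""
    assert a[1] == b[0], "inner dimensions must match"
    return a[0] * a[1] * b[1], (a[0], b[1])

A, B, C = (10, 100), (100, 5), (5, 50)

# Order 1: (A @ B) @ C
c1, ab = multiply_cost(A, B)
c2, _ = multiply_cost(ab, C)
left_first = c1 + c2              # 5000 + 2500 = 7500

# Order 2: A @ (B @ C)
c3, bc = multiply_cost(B, C)
c4, _ = multiply_cost(A, bc)
right_first = c3 + c4             # 25000 + 50000 = 75000

print(left_first, right_first)    # 7500 75000: a 10x gap from ordering alone
```

For general tensor networks the number of possible orders grows super-exponentially with the number of tensors, which is why a learned policy is attractive.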
arXiv Detail & Related papers (2022-04-18T21:45:13Z)
- Oracle separations of hybrid quantum-classical circuits [68.96380145211093]
Two models of quantum computation are introduced: CQ_d and QC_d.
CQ_d captures the scenario of invoking a d-depth quantum computer many times; QC_d is more analogous to measurement-based quantum computation.
We show that, despite the similarities between CQ_d and QC_d, the two models are intrinsically different, i.e. CQ_d $\nsubseteq$ QC_d and QC_d $\nsubseteq$ CQ_d relative to an oracle.
arXiv Detail & Related papers (2022-01-06T03:10:53Z)
- Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in the one-dimensional transverse-field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z)
- Data-Efficient Learning for Complex and Real-Time Physical Problem Solving using Augmented Simulation [49.631034790080406]
We present a task for navigating a marble to the center of a circular maze.
We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system.
arXiv Detail & Related papers (2020-11-14T02:03:08Z)
- Adiabatic Quantum Linear Regression [0.0]
We present an adiabatic quantum computing approach for training a linear regression model.
Our analysis shows that the quantum approach attains up to a 2.8x speedup over the classical approach on larger datasets.
arXiv Detail & Related papers (2020-08-05T20:40:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.