QUBO Formulations for Training Machine Learning Models
- URL: http://arxiv.org/abs/2008.02369v1
- Date: Wed, 5 Aug 2020 21:16:05 GMT
- Title: QUBO Formulations for Training Machine Learning Models
- Authors: Prasanna Date, Davis Arthur, Lauren Pusey-Nazzaro
- Abstract summary: We leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently.
We formulate the training problems of three machine learning models---linear regression, support vector machine (SVM) and equal-sized k-means clustering---as QUBO problems so that they can be trained on adiabatic quantum computers efficiently.
We show that the time and space complexities of our formulations are better than (in the case of SVM and equal-sized k-means clustering) or equivalent to (in the case of linear regression) those of their classical counterparts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training machine learning models on classical computers is usually a time-
and compute-intensive process. With Moore's law coming to an end and
ever-increasing demand for large-scale data analysis using machine learning, we must
leverage non-conventional computing paradigms like quantum computing to train
machine learning models efficiently. Adiabatic quantum computers like the
D-Wave 2000Q can approximately solve NP-hard optimization problems, such as the
quadratic unconstrained binary optimization (QUBO) problem, faster than classical
computers. Since many machine learning problems are also NP-hard, we believe
adiabatic quantum computers might be instrumental in training machine learning
models efficiently in the post-Moore's-law era. In order to solve a problem on
adiabatic quantum computers, it must be formulated as a QUBO problem, which is
a challenging task in itself. In this paper, we formulate the training problems
of three machine learning models---linear regression, support vector machine
(SVM) and equal-sized k-means clustering---as QUBO problems so that they can be
trained on adiabatic quantum computers efficiently. We also analyze the time
and space complexities of our formulations and compare them to the
state-of-the-art classical algorithms for training these machine learning
models. We show that the time and space complexities of our formulations are
better than (in the case of SVM and equal-sized k-means clustering) or equivalent
to (in the case of linear regression) those of their classical counterparts.
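To make the kind of reduction the abstract describes concrete: the least-squares objective of linear regression, min ||Xw - y||^2, becomes a QUBO once each real-valued weight is encoded as a fixed linear combination of binary variables, since b_i^2 = b_i lets the linear term fold into the diagonal of the QUBO matrix. The sketch below is a minimal illustration under that assumption; the function names and the particular precision encoding are ours, not necessarily the paper's exact construction:

```python
import numpy as np

def linear_regression_qubo(X, y, precision):
    """Build a QUBO matrix Q so that min_b b^T Q b (b binary) solves
    min_w ||Xw - y||^2 under the encoding w = P b.

    `precision` is a hypothetical per-weight precision vector, e.g.
    [4, 2, 1] encodes each weight as an integer in {0, ..., 7}.
    """
    n_features = X.shape[1]
    p = np.asarray(precision, dtype=float).reshape(1, -1)
    # P maps the n_features * len(precision) binary variables to weights:
    # w = P @ b, with b in {0, 1}^(n_features * len(precision)).
    P = np.kron(np.eye(n_features), p)
    A = P.T @ X.T @ X @ P             # quadratic part of ||Xw||^2
    lin = -2.0 * P.T @ (X.T @ y)      # linear part from the cross term
    # b_i^2 = b_i for binary b, so linear terms go on the diagonal.
    return A + np.diag(lin)

def qubo_energy(Q, b):
    """Evaluate the QUBO objective b^T Q b for a binary vector b."""
    return float(b @ Q @ b)
```

On a small instance the QUBO minimizer can be checked by brute force against the known least-squares solution; on an adiabatic quantum computer the same matrix Q would instead be handed to the annealer (the constant ||y||^2 is dropped, so the QUBO energy differs from the residual by that constant).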
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Hybrid Heuristic Algorithms for Adiabatic Quantum Machine Learning Models [2.7407913606612615]
This paper presents a hybrid algorithm embedding an r-flip strategy to solve large-scale QUBO problems with improved solutions and shorter computing times.
The r-flip strategy embedded algorithm provides very high-quality solutions within the CPU time limits of 60 and 600 seconds.
arXiv Detail & Related papers (2024-07-26T19:31:58Z) - Quantum-Assisted Simulation: A Framework for Developing Machine Learning Models in Quantum Computing [0.0]
We investigate the history of quantum computing, examine existing QML algorithms, and present a simplified procedure for setting up simulations of QML algorithms.
We conduct simulations on a dataset using both traditional machine learning and quantum machine learning approaches.
arXiv Detail & Related papers (2023-11-17T07:33:42Z) - Towards provably efficient quantum algorithms for large-scale machine-learning models [11.440134080370811]
We show that fault-tolerant quantum computing could possibly provide provably efficient resolutions for generic (stochastic) gradient descent algorithms.
We benchmark instances of large machine learning models from 7 million to 103 million parameters.
arXiv Detail & Related papers (2023-03-06T19:00:27Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Quantum Semi-Supervised Kernel Learning [4.726777092009554]
We present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines.
We show that it maintains the same speedup as the fully-supervised Quantum LS-SVM.
arXiv Detail & Related papers (2022-04-22T13:39:55Z) - Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
arXiv Detail & Related papers (2022-04-18T21:45:13Z) - Oracle separations of hybrid quantum-classical circuits [68.96380145211093]
The paper considers two models of hybrid quantum computation: CQ_d and QC_d.
CQ_d captures the scenario of a classical computer invoking a d-depth quantum computer many times; QC_d is more analogous to measurement-based quantum computation.
We show that, despite the similarities between CQ_d and QC_d, the two models are intrinsically different, i.e. CQ_d $\nsubseteq$ QC_d and QC_d $\nsubseteq$ CQ_d relative to an oracle.
arXiv Detail & Related papers (2022-01-06T03:10:53Z) - Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in the one-dimensional transverse-field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z) - Data-Efficient Learning for Complex and Real-Time Physical Problem Solving using Augmented Simulation [49.631034790080406]
We present a task for navigating a marble to the center of a circular maze.
We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system.
arXiv Detail & Related papers (2020-11-14T02:03:08Z) - Adiabatic Quantum Linear Regression [0.0]
We present an adiabatic quantum computing approach for training a linear regression model.
Our analysis shows that the quantum approach attains up to 2.8x speedup over the classical approach on larger datasets.
arXiv Detail & Related papers (2020-08-05T20:40:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.