A kernel-based quantum random forest for improved classification
- URL: http://arxiv.org/abs/2210.02355v1
- Date: Wed, 5 Oct 2022 15:57:31 GMT
- Title: A kernel-based quantum random forest for improved classification
- Authors: Maiyuren Srikumar, Charles D. Hill and Lloyd C.L. Hollenberg
- Abstract summary: The emergence of Quantum Machine Learning (QML) to enhance traditional classical learning methods has seen various limitations to its realisation.
We extend the linear quantum support vector machine (QSVM) with a kernel function computed through quantum kernel estimation (QKE).
To limit overfitting, we further extend the model to employ a low-rank Nyström approximation to the kernel matrix.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of Quantum Machine Learning (QML) to enhance traditional
classical learning methods has seen various limitations to its realisation.
There is therefore an imperative to develop quantum models with unique model
hypotheses to attain expressional and computational advantage. In this work we
extend the linear quantum support vector machine (QSVM) with kernel function
computed through quantum kernel estimation (QKE), to form a decision tree
classifier constructed from a decision directed acyclic graph of QSVM nodes -
the ensemble of which we term the quantum random forest (QRF). To limit
overfitting, we further extend the model to employ a low-rank Nyström
approximation to the kernel matrix. We provide generalisation error bounds on
the model and theoretical guarantees to limit errors due to finite sampling on
the Nyström-QKE strategy. In doing so, we show that we can achieve lower
sampling complexity when compared to QKE. We numerically illustrate the effect
of varying model hyperparameters and finally demonstrate that the QRF is able to
obtain superior performance over QSVMs, while also requiring fewer kernel
estimations.
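For intuition, here is a minimal classical sketch of the pipeline described in the abstract. It is not the authors' implementation: it assumes a simple angle-encoding feature map simulated in NumPy, estimates the fidelity kernel from a finite number of measurement shots (standing in for QKE), forms a low-rank Nyström approximation from a small set of landmark points, and majority-votes an ensemble of precomputed-kernel SVMs as a simplified stand-in for the QRF's decision directed acyclic graph of QSVM nodes. The helper names (feature_state, qke_kernel, nystroem_features) and all parameter choices are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' code: a toy Nystroem-QKE kernel
# pipeline with a majority-vote SVM ensemble standing in for the QRF.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def feature_state(x):
    """Angle-encode a feature vector into a product state of len(x) qubits."""
    state = np.array([1.0 + 0j])
    for theta in x:
        qubit = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
        state = np.kron(state, qubit)
    return state

def qke_kernel(XA, XB, shots=1024):
    """Estimate fidelity-kernel entries |<phi(a)|phi(b)>|^2 from finite shots (toy QKE)."""
    K = np.empty((len(XA), len(XB)))
    for i, a in enumerate(XA):
        for j, b in enumerate(XB):
            p = min(1.0, float(np.abs(np.vdot(feature_state(a), feature_state(b))) ** 2))
            K[i, j] = rng.binomial(shots, p) / shots  # finite-sampling noise
    return K

def nystroem_features(X, landmarks, shots=1024):
    """Return F such that F @ F.T approximates the kernel (low-rank Nystroem)."""
    K_nl = qke_kernel(X, landmarks, shots)
    K_ll = qke_kernel(landmarks, landmarks, shots)
    evals, evecs = np.linalg.eigh((K_ll + K_ll.T) / 2)   # symmetrise landmark block
    keep = evals > 1e-6                                  # pseudo-inverse square root
    inv_sqrt = evecs[:, keep] @ np.diag(evals[keep] ** -0.5) @ evecs[:, keep].T
    return K_nl @ inv_sqrt

# Toy data, rescaled so the encoding angles lie in [0, pi].
X, y = make_classification(n_samples=120, n_features=4, random_state=0)
X = np.pi * (X - X.min(0)) / (X.max(0) - X.min(0))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A flat, majority-vote ensemble of kernel SVMs trained on bootstrap samples;
# the actual QRF instead arranges QSVM splits in a decision directed acyclic graph.
votes = np.zeros((len(X_te), 2))
for t in range(5):
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)        # bootstrap sample
    landmarks = X_tr[rng.choice(len(X_tr), size=8, replace=False)]   # Nystroem landmarks
    F_tr = nystroem_features(X_tr[idx], landmarks)
    F_te = nystroem_features(X_te, landmarks)
    clf = SVC(kernel="precomputed").fit(F_tr @ F_tr.T, y_tr[idx])
    votes[np.arange(len(X_te)), clf.predict(F_te @ F_tr.T)] += 1

print("ensemble accuracy:", (votes.argmax(axis=1) == y_te).mean())
```

The design point the sketch tries to mirror is the sampling argument from the abstract: each classifier only ever queries kernel entries against a few landmark points, so far fewer (noisy) kernel estimates are needed than for a full QSVM Gram matrix.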
Related papers
- Extending Quantum Perceptrons: Rydberg Devices, Multi-Class Classification, and Error Tolerance [67.77677387243135]
Quantum Neuromorphic Computing (QNC) merges quantum computation with neural computation to create scalable, noise-resilient algorithms for quantum machine learning (QML).
At the core of QNC is the quantum perceptron (QP), which leverages the analog dynamics of interacting qubits to enable universal quantum computation.
arXiv Detail & Related papers (2024-11-13T23:56:20Z)
- Projective Quantum Eigensolver with Generalized Operators [0.0]
We develop a methodology for determining the generalized operators in terms of a closed form residual equations in the PQE framework.
Applying it to several molecular systems, we demonstrate that our ansatz achieves accuracy similar to the (disentangled) UCC with singles, doubles and triples.
arXiv Detail & Related papers (2024-10-21T15:40:22Z)
- Explicit quantum surrogates for quantum kernel models [0.6834295298053009]
We propose a quantum-classical hybrid algorithm to create an explicit quantum surrogate (EQS) for trained implicit models.
This involves diagonalizing an observable from the implicit model and constructing a corresponding quantum circuit.
The EQS framework reduces prediction costs, mitigates barren plateau issues, and combines the strengths of both QML approaches.
arXiv Detail & Related papers (2024-08-06T07:15:45Z)
- Unleashing the Expressive Power of Pulse-Based Quantum Neural Networks [0.46085106405479537]
Quantum machine learning (QML) based on Noisy Intermediate-Scale Quantum (NISQ) devices hinges on the optimal utilization of limited quantum resources.
Gate-based QML models are user-friendly for software engineers.
Pulse-based models, by contrast, enable the construction of "infinitely" deep quantum neural networks within the same time.
arXiv Detail & Related papers (2024-02-05T10:47:46Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Error mitigation and quantum-assisted simulation in the error corrected regime [77.34726150561087]
A standard approach to quantum computing is based on the idea of promoting a classically simulable and fault-tolerant set of operations.
We show how the addition of noisy magic resources allows one to boost classical quasiprobability simulations of a quantum circuit.
arXiv Detail & Related papers (2021-03-12T20:58:41Z)
- Sampling Overhead Analysis of Quantum Error Mitigation: Uncoded vs. Coded Systems [69.33243249411113]
We show that Pauli errors incur the lowest sampling overhead among a large class of realistic quantum channels.
We conceive a scheme amalgamating QEM with quantum channel coding, and analyse its sampling overhead reduction compared to pure QEM.
arXiv Detail & Related papers (2020-12-15T15:51:27Z)
- Practical application improvement to Quantum SVM: theory to practice [0.9449650062296824]
We use quantum feature maps to translate data into quantum states and build the SVM kernel out of these quantum states.
We show in experiments that this allows QSVM to perform equally to SVM regardless of the complexity of the data sets.
arXiv Detail & Related papers (2020-12-14T17:19:17Z)
- Chaos and Complexity from Quantum Neural Network: A study with Diffusion Metric in Machine Learning [0.0]
We study the phenomena of quantum chaos and complexity in the machine learning dynamics of a Quantum Neural Network (QNN).
We employ a statistical and differential geometric approach to study the learning theory of QNN.
arXiv Detail & Related papers (2020-11-16T10:41:47Z)
- Momentum Q-learning with Finite-Sample Convergence Guarantee [49.38471009162477]
This paper analyzes a class of momentum-based Q-learning algorithms with finite-sample guarantee.
We establish the convergence guarantee for MomentumQ with linear function approximations and Markovian sampling.
We demonstrate through various experiments that the proposed MomentumQ outperforms other momentum-based Q-learning algorithms.
arXiv Detail & Related papers (2020-07-30T12:27:03Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by the QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)