Structural risk minimization for quantum linear classifiers
- URL: http://arxiv.org/abs/2105.05566v1
- Date: Wed, 12 May 2021 10:39:55 GMT
- Title: Structural risk minimization for quantum linear classifiers
- Authors: Casper Gyurik, Dyon van Vreumingen, and Vedran Dunjko
- Abstract summary: Quantum machine learning (QML) stands out as one of the typically highlighted candidates for quantum computing's near-term "killer application".
We investigate capacity measures of two closely related QML models called explicit and implicit quantum linear classifiers.
We identify that the rank and Frobenius norm of the observables used in the QML model closely control the model's capacity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum machine learning (QML) stands out as one of the typically highlighted
candidates for quantum computing's near-term "killer application". In this
context, QML models based on parameterized quantum circuits comprise a family
of machine learning models that are well suited for implementations on
near-term devices and that can potentially harness computational powers beyond
what is efficiently achievable on a classical computer. However, how to best
use these models -- e.g., how to control their expressivity to best balance
between training accuracy and generalization performance -- is far from
understood. In this paper we investigate capacity measures of two closely
related QML models called explicit and implicit quantum linear classifiers
(also called the quantum variational method and quantum kernel estimator) with
the objective of identifying new ways to implement structural risk minimization
-- i.e., how to balance between training accuracy and generalization
performance. In particular, we identify that the rank and Frobenius norm of the
observables used in the QML model closely control the model's capacity.
Additionally, we theoretically investigate the effect that these model
parameters have on the training accuracy of the QML model. Specifically, we
show that there exist datasets that require a high-rank observable for correct
classification, and that there exist datasets that can only be classified with
a given margin using an observable of at least a certain Frobenius norm. Our
results provide new options for performing structural risk minimization for QML
models.
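To make the capacity measures concrete, here is a minimal numpy sketch of an explicit quantum linear classifier of the form sign(Tr[rho(x) O]), with the rank and Frobenius norm of the observable O (the two capacity-controlling quantities identified above) computed directly; the single-qubit feature map and the particular observable are illustrative assumptions, not the paper's constructions.

```python
import numpy as np

def feature_state(x):
    """Illustrative single-qubit feature map: encode a scalar x as the
    pure-state density matrix rho(x) = |psi(x)><psi(x)|."""
    psi = np.array([np.cos(x / 2), np.sin(x / 2)], dtype=complex)
    return np.outer(psi, psi.conj())

# Illustrative Hermitian observable; in the explicit (variational) model this
# would be defined by a parameterized circuit followed by a measurement.
O = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def classify(x, O):
    """Explicit quantum linear classifier: label = sign(Tr[rho(x) O])."""
    return np.sign(np.real(np.trace(feature_state(x) @ O)))

# The two capacity-controlling quantities identified in the paper:
rank = np.linalg.matrix_rank(O)        # rank of the observable
fro = np.linalg.norm(O, ord="fro")     # Frobenius norm of the observable
print(classify(0.3, O), rank, fro)
```

In this picture, structural risk minimization amounts to training over a nested sequence of hypothesis classes obtained by capping rank(O) or ||O||_F, trading training accuracy against generalization performance.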
Related papers
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- Computable Model-Independent Bounds for Adversarial Quantum Machine Learning [4.857505043608425]
We introduce the first approximate lower bound on adversarial error for evaluating model resilience against quantum-based adversarial attacks.
In the best case, the experimental error is only 10% above the estimated bound, offering evidence of the inherent robustness of quantum models.
arXiv Detail & Related papers (2024-11-11T10:56:31Z)
- GWQ: Gradient-Aware Weight Quantization for Large Language Models [61.17678373122165]
Gradient-aware weight quantization (GWQ) is the first low-bit weight-quantization approach that leverages gradients to localize outliers.
GWQ preferentially retains the weights corresponding to the top 1% of outliers at FP16 precision, while the remaining non-outlier weights are stored in a low-bit format.
On zero-shot tasks, GWQ-quantized models achieve higher accuracy than other quantization methods.
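As a hypothetical illustration of the mixed-precision scheme just described (not GWQ's actual algorithm), the following numpy sketch keeps the weights whose gradient magnitudes fall in the top 1% at FP16 precision and uniformly quantizes the rest to a low bit-width; the quantizer and threshold are illustrative assumptions.

```python
import numpy as np

def gwq_style_quantize(weights, grads, bits=4, outlier_frac=0.01):
    """Sketch of gradient-aware mixed precision: top-1%-gradient weights
    stay at FP16, the rest get uniform symmetric low-bit quantization."""
    w, g = weights.ravel(), np.abs(grads.ravel())
    outliers = g >= np.quantile(g, 1.0 - outlier_frac)

    # Uniform symmetric quantizer for the non-outlier weights.
    scale = np.abs(w[~outliers]).max() / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).clip(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)

    out = (q * scale).astype(np.float32)            # dequantized low-bit weights
    out[outliers] = w[outliers].astype(np.float16)  # outliers preserved in FP16
    return out.reshape(weights.shape)

W_q = gwq_style_quantize(np.random.randn(256, 256), np.random.randn(256, 256))
```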
arXiv Detail & Related papers (2024-10-30T11:16:04Z)
- Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
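For intuition, the quantity QAE estimates here is the smoothed model's vote probability under random bit-flips of the binary input; below is a minimal classical Monte Carlo sketch of that estimator (the model and flip rate are illustrative assumptions), which the quantum approach improves on with quadratically fewer model calls.

```python
import numpy as np

def smoothed_vote(model, x_bits, flip_prob=0.1, n_samples=1000, seed=0):
    """Classical Monte Carlo estimate of p = Pr[model(x XOR noise) = 1] under
    i.i.d. bit-flip noise; QAE estimates the same probability with
    quadratically fewer calls to the model (sketch, not the paper's circuit)."""
    rng = np.random.default_rng(seed)
    votes = 0
    for _ in range(n_samples):
        noise = rng.random(x_bits.shape) < flip_prob
        votes += model(np.logical_xor(x_bits, noise).astype(int))
    return votes / n_samples

majority = lambda x: int(x.sum() > x.size / 2)   # illustrative binary classifier
print(smoothed_vote(majority, np.array([1, 0, 1, 1, 0, 1, 1, 0])))
```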
arXiv Detail & Related papers (2024-08-01T20:21:52Z)
- QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z)
- Dequantizing quantum machine learning models using tensor networks [0.0]
We introduce the notion of dequantizability for the function class of variational quantum machine learning (VQML) models.
We show that our formalism can properly distinguish VQML models according to their genuine quantum characteristics.
arXiv Detail & Related papers (2023-07-13T17:56:20Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- Reflection Equivariant Quantum Neural Networks for Enhanced Image Classification [0.7232471205719458]
We build new machine learning models that explicitly respect the symmetries inherent in their data, an approach known as geometric quantum machine learning (GQML).
We find that these networks are capable of consistently and significantly outperforming generic ansatze on complicated real-world image datasets.
arXiv Detail & Related papers (2022-12-01T04:10:26Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
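As a minimal illustration of single-qubit data re-uploading (a sketch under assumed layer count and encoding, written with PennyLane rather than the Qiskit SDK the entry mentions), each layer re-encodes the data point before a trainable rotation, and the sign of the Pauli-Z expectation serves as the predicted label.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def reuploading_classifier(x, weights):
    """Single-qubit data re-uploading: alternate data-encoding rotations
    with trainable rotations, then read out <Z> as the class score."""
    for w in weights:                          # each layer re-uploads x
        qml.RY(x, wires=0)                     # data-encoding rotation
        qml.Rot(w[0], w[1], w[2], wires=0)     # trainable rotation
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, 2 * np.pi, size=(3, 3), requires_grad=True)
score = reuploading_classifier(0.7, weights)   # sign(score) gives the label
```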
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Predicting toxicity by quantum machine learning [11.696069523681178]
We develop QML models for predicting the toxicity of 221 phenols on the basis of quantitative structure-activity relationships.
Results suggest that our data encoding, enhanced by quantum entanglement, provides more expressive power than previous approaches.
arXiv Detail & Related papers (2020-08-18T02:59:40Z)