Structural risk minimization for quantum linear classifiers
- URL: http://arxiv.org/abs/2105.05566v1
- Date: Wed, 12 May 2021 10:39:55 GMT
- Title: Structural risk minimization for quantum linear classifiers
- Authors: Casper Gyurik, Dyon van Vreumingen, and Vedran Dunjko
- Abstract summary: Quantum machine learning (QML) stands out as one of the typically highlighted candidates for quantum computing's near-term "killer application".
We investigate capacity measures of two closely related QML models called explicit and implicit quantum linear classifiers.
We identify that the rank and Frobenius norm of the observables used in the QML model closely control the model's capacity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum machine learning (QML) stands out as one of the typically highlighted
candidates for quantum computing's near-term "killer application". In this
context, QML models based on parameterized quantum circuits comprise a family
of machine learning models that are well suited for implementations on
near-term devices and that can potentially harness computational powers beyond
what is efficiently achievable on a classical computer. However, how to best
use these models -- e.g., how to control their expressivity to best balance
between training accuracy and generalization performance -- is far from
understood. In this paper we investigate capacity measures of two closely
related QML models called explicit and implicit quantum linear classifiers
(also called the quantum variational method and quantum kernel estimator) with
the objective of identifying new ways to implement structural risk minimization
-- i.e., how to balance between training accuracy and generalization
performance. In particular, we identify that the rank and Frobenius norm of the
observables used in the QML model closely control the model's capacity.
Additionally, we theoretically investigate the effect that these model
parameters have on the training accuracy of the QML model. Specifically, we
show that there exist datasets that require a high-rank observable for correct
classification, and that there exist datasets that can only be classified with
a given margin using an observable of at least a certain Frobenius norm. Our
results provide new options for performing structural risk minimization for QML
models.
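The abstract's capacity-controlling quantities can be made concrete with a small numerical sketch. The explicit quantum linear classifier labels an encoded state rho(x) via sign(Tr[O rho(x)]) for a Hermitian observable O, and the paper identifies rank(O) and the Frobenius norm ||O||_F as the knobs that control capacity. The following toy example (not taken from the paper's experiments; the dimension, rank, and random encoding are illustrative assumptions) builds a rank-limited observable and computes both quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, rank = 8, 3  # assumed sizes: a 3-qubit Hilbert space, low-rank observable

# Build a random rank-limited Hermitian observable O = V diag(w) V^dagger,
# where V has `rank` orthonormal columns and w holds real eigenvalues.
Q = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))[0]
V = Q[:, :rank]
w = rng.normal(size=rank)
O = V @ np.diag(w) @ V.conj().T

# The two capacity-controlling quantities from the paper.
obs_rank = np.linalg.matrix_rank(O)
frob_norm = np.linalg.norm(O, "fro")  # equals sqrt(sum_i w_i^2) for this O

# Decision function on a toy pure-state encoding rho(x) = |psi(x)><psi(x)|.
def predict(psi, O):
    rho = np.outer(psi, psi.conj())
    return np.sign(np.real(np.trace(O @ rho)))

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
label = predict(psi, O)
```

Structural risk minimization in this picture amounts to restricting the hypothesis class by capping rank(O) or ||O||_F and trading training accuracy against generalization, which is exactly the balance the paper's lower-bound results constrain.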
Related papers
- Modeling Quantum Machine Learning for Genomic Data Analysis [12.248184406275405]
Quantum Machine Learning (QML) continues to evolve, unlocking new opportunities for diverse applications.
We investigate and evaluate the applicability of QML models for binary classification of genome sequence data by employing various feature mapping techniques.
We present an open-source, independent Qiskit-based implementation to conduct experiments on a benchmark genomic dataset.
arXiv Detail & Related papers (2025-01-14T15:14:26Z)
- Learning to Measure Quantum Neural Networks [10.617463958884528]
We introduce a novel approach that makes the observable of the quantum system, specifically the Hermitian matrix, learnable.
Our method features an end-to-end differentiable learning framework, where the parameterized observable is trained alongside the ordinary quantum circuit parameters.
Using numerical simulations, we show that the proposed method can identify observables for variational quantum circuits that lead to improved outcomes.
arXiv Detail & Related papers (2025-01-10T02:28:19Z)
- Quantum Machine Learning in Log-based Anomaly Detection: Challenges and Opportunities [36.437593835024394]
We introduce a unified framework for evaluating QML models in the context of log-based anomaly detection (LogAD).
State-of-the-art methods such as DeepLog, LogAnomaly, and LogRobust are included in our framework.
Our evaluation extends to factors critical to QML performance, such as specificity, the number of circuits, circuit design, and quantum state encoding.
arXiv Detail & Related papers (2024-12-18T06:13:49Z)
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- Computable Model-Independent Bounds for Adversarial Quantum Machine Learning [4.857505043608425]
We introduce the first approximate lower bound on the adversarial error for evaluating model resilience against quantum-based adversarial attacks.
In the best case, the experimental error is only 10% above the estimated bound, offering evidence of the inherent robustness of quantum models.
arXiv Detail & Related papers (2024-11-11T10:56:31Z)
- Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z)
- QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Predicting toxicity by quantum machine learning [11.696069523681178]
We develop QML models for predicting the toxicity of 221 phenols on the basis of quantitative structure-activity relationships.
Results suggest that our data encoding enhanced by quantum entanglement provided more expressive power than the previous ones.
arXiv Detail & Related papers (2020-08-18T02:59:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.