On the relation between trainability and dequantization of variational quantum learning models
- URL: http://arxiv.org/abs/2406.07072v1
- Date: Tue, 11 Jun 2024 08:59:20 GMT
- Title: On the relation between trainability and dequantization of variational quantum learning models
- Authors: Elies Gil-Fuster, Casper Gyurik, Adrián Pérez-Salinas, Vedran Dunjko
- Abstract summary: We study the relation between trainability and dequantization of variational quantum machine learning (QML) models.
We introduce recipes for building PQC-based QML models which are both trainable and non-dequantizable.
Our work does, however, point toward more general constructions for which finding applications may become feasible.
- Score: 1.7999333451993955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quest for successful variational quantum machine learning (QML) relies on the design of suitable parametrized quantum circuits (PQCs), as analogues to neural networks in classical machine learning. Successful QML models must fulfill the properties of trainability and non-dequantization, among others. Recent works have highlighted an intricate interplay between trainability and dequantization of such models, which is still unresolved. In this work we contribute to this debate from the perspective of machine learning, proving a number of results that identify, among other things, when trainability and non-dequantization are not mutually exclusive. We begin by providing somewhat broader definitions of the relevant concepts than those found in other literature; our definitions are operationally motivated and consistent with prior art. With these precise definitions given and motivated, we then study the relation between trainability and dequantization of variational QML. Next, we discuss the degrees of "variationalness" of QML models, distinguishing between models like the hardware-efficient ansatz and quantum kernel methods. Finally, we introduce recipes for building PQC-based QML models which are both trainable and non-dequantizable, corresponding to different degrees of variationalness. We do not address the practical utility of such models. Our work does, however, point toward more general constructions for which finding applications may become feasible.
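As a rough, illustrative companion to the trainability notion above (not the paper's own construction), the following sketch estimates the variance of a cost-function gradient for a hardware-efficient-style PQC by statevector simulation; vanishing gradient variance as the qubit count grows is the standard barren-plateau signature. The ansatz, depth, observable, and sample counts below are all assumptions chosen for brevity.

```python
# Toy barren-plateau diagnostic (an assumption-laden sketch, not the paper's
# construction): statevector simulation of a hardware-efficient-style PQC with
# RY rotations and a ring of CZ entanglers, estimating Var[dC/dtheta] over
# random parameter draws via the parameter-shift rule.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def apply_1q(state, gate, qubit, n):
    # View the state as an n-axis tensor and contract the gate onto one axis.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    # CZ: flip the sign of amplitudes where both qubits are |1>.
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1.0
    return state.reshape(-1)

def cost(thetas, n, layers):
    # C(theta) = <Z> on qubit 0 after the layered circuit acting on |0...0>.
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    t = iter(thetas)
    for _ in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(next(t)), q, n)
        for q in range(n):
            state = apply_cz(state, q, (q + 1) % n, n)
    probs = np.abs(state.reshape(2, -1)) ** 2
    return probs[0].sum() - probs[1].sum()

n, layers = 6, 8
n_params = n * layers
shift = np.zeros(n_params)
shift[0] = np.pi / 2  # parameter-shift rule for the first RY angle
grads = [0.5 * (cost(th + shift, n, layers) - cost(th - shift, n, layers))
         for th in (rng.uniform(0, 2 * np.pi, n_params) for _ in range(200))]
print("Var[dC/dtheta_0] ~", np.var(grads))
```

Increasing `n` while keeping the circuit deep enough should show the printed variance shrinking roughly exponentially, which is precisely the trainability obstruction the paper's recipes aim to avoid.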
Related papers
- Problem-informed Graphical Quantum Generative Learning [0.3914676152740143]
We propose a problem-informed quantum circuit Born machine Ansatz for learning the joint probability distribution of random variables.
We compare our model's performance to previous designs, showing it outperforms problem-agnostic circuits.
arXiv Detail & Related papers (2024-05-23T00:29:35Z) - Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z) - PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models [52.09865918265002]
We propose a novel "quantize before fine-tuning" framework, PreQuant.
PreQuant is compatible with various quantization strategies, with outlier-aware fine-tuning incorporated to correct the induced quantization error.
We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5.
arXiv Detail & Related papers (2023-05-30T08:41:33Z) - Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - QNEAT: Natural Evolution of Variational Quantum Circuit Architecture [95.29334926638462]
We focus on variational quantum circuits (VQCs), which have emerged as the most promising candidates for a quantum counterpart of neural networks.
Although showing promising results, VQCs can be hard to train because of several issues, e.g., barren plateaus, periodicity of the weights, or the choice of architecture.
We propose a gradient-free algorithm inspired by natural evolution to optimize both the weights and the architecture of the VQC.
arXiv Detail & Related papers (2023-04-14T08:03:20Z) - A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that quantum circuit Born machines (QCBMs) are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques (a minimal sketch of this technique appears after this list).
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Copula-based Risk Aggregation with Trapped Ion Quantum Computers [1.541403735141431]
Copulas are mathematical tools for modeling joint probability distributions.
The recent finding that copulas can be expressed as maximally entangled quantum states has revealed a promising approach to practical quantum advantage.
We study the training of QCBMs with different levels of precision and circuit design on a simulator and a state-of-the-art trapped ion quantum computer.
arXiv Detail & Related papers (2022-06-23T18:39:30Z) - Subtleties in the trainability of quantum machine learning models [0.0]
We show that gradient scaling results for variational quantum algorithms (VQAs) can be applied to study the gradient scaling of quantum machine learning (QML) models.
Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML.
arXiv Detail & Related papers (2021-10-27T20:28:53Z) - Predicting toxicity by quantum machine learning [11.696069523681178]
We develop QML models for predicting the toxicity of 221 phenols on the basis of quantitative structure-activity relationships.
Results suggest that our entanglement-enhanced data encoding provides more expressive power than previous encodings.
arXiv Detail & Related papers (2020-08-18T02:59:40Z)
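As promised in the single-qubit entry above, here is a minimal sketch of the data re-uploading technique in its generic form (toy dataset, assumed layer count and learning rate; it does not reproduce the code of any paper listed here): the input is re-encoded in every layer of rotations, and the rotation parameters are fit with a crude finite-difference descent.

```python
# Minimal single-qubit data re-uploading classifier (a sketch with assumed
# hyperparameters and a toy dataset, in the spirit of data re-uploading
# proposals; not the code of any listed paper).
import numpy as np

rng = np.random.default_rng(1)

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def rz(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def model(x, params):
    # Re-upload x in every layer: apply RZ(v) RY(w*x + b) to |0>, read out <Z>.
    state = np.array([1.0, 0.0], dtype=complex)
    for w, b, v in params.reshape(-1, 3):
        state = rz(v) @ ry(w * x + b) @ state
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

# Toy 1D task: label = sign(sin(3x)) on [-1, 1].
xs = rng.uniform(-1, 1, 40)
ys = np.sign(np.sin(3 * xs))

layers = 4
params = rng.normal(0.0, 0.1, 3 * layers)  # (w, b, v) per layer

def loss(p):
    return np.mean((np.array([model(x, p) for x in xs]) - ys) ** 2)

# Crude finite-difference gradient descent; enough for a toy demonstration.
eps, lr = 1e-4, 0.5
for _ in range(200):
    grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                     for e in np.eye(len(params))])
    params -= lr * grad

acc = np.mean(np.sign([model(x, params) for x in xs]) == ys)
print(f"final loss {loss(params):.3f}, train accuracy {acc:.2f}")
```

With a single qubit, such a model realizes a truncated Fourier-like series in the input, so even this tiny circuit can fit simple sign patterns; the sketch only aims to make the re-uploading idea concrete.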
This list is automatically generated from the titles and abstracts of the papers on this site.