Generalization in Quantum Machine Learning: a Quantum Information
Perspective
- URL: http://arxiv.org/abs/2102.08991v1
- Date: Wed, 17 Feb 2021 19:35:21 GMT
- Title: Generalization in Quantum Machine Learning: a Quantum Information
Perspective
- Authors: Leonardo Banchi, Jason Pereira, Stefano Pirandola
- Abstract summary: We show how different properties of $Q$ affect classification accuracy and generalization.
We introduce a quantum version of the Information Bottleneck principle that allows us to explore the various tradeoffs between accuracy and generalization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the machine learning problem of generalization when quantum
operations are used to classify either classical data or quantum channels,
where in both cases the task is to learn from data how to assign a certain
class $c$ to inputs $x$ via measurements on a quantum state $\rho(x)$. A
trained quantum model generalizes when it is able to predict the correct class
for previously unseen data. We show that the accuracy and generalization
capability of quantum classifiers depend on the (R\'enyi) mutual informations
$I(C{:}Q)$ and $I_2(X{:}Q)$ between the quantum embedding $Q$ and the classical
input space $X$ or class space $C$. Based on the above characterization, we
then show how different properties of $Q$ affect classification accuracy and
generalization, such as the dimension of the Hilbert space, the amount of
noise, and the amount of neglected information via, e.g., pooling layers.
Moreover, we introduce a quantum version of the Information Bottleneck
principle that allows us to explore the various tradeoffs between accuracy and
generalization.
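As a rough numerical illustration of the quantity $I(C{:}Q)$ (not code from the paper), the sketch below computes the standard (von Neumann) mutual information between a binary class variable and a single-qubit embedding, i.e. the Holevo quantity of the ensemble. The two embedding states ($|0\rangle$ and $|+\rangle$) and the uniform prior are arbitrary choices for illustration.

```python
import math

def eigvals_2x2_hermitian(m):
    # Eigenvalues of a 2x2 Hermitian matrix via the trace/determinant formula.
    tr = (m[0][0] + m[1][1]).real
    det = (m[0][0] * m[1][1] - m[0][1] * m[1][0]).real
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

def entropy(rho):
    # von Neumann entropy S(rho) in bits.
    return -sum(l * math.log2(l) for l in eigvals_2x2_hermitian(rho) if l > 1e-12)

def density(psi):
    # |psi><psi| as a 2x2 list of complex numbers.
    return [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

# Illustrative embedding: class 0 -> |0>, class 1 -> |+>, uniform prior p(c) = 1/2.
rho0 = density([1.0 + 0j, 0.0 + 0j])
rho1 = density([1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j])
rho_avg = [[(rho0[i][j] + rho1[i][j]) / 2 for j in range(2)] for i in range(2)]

# For a cq-state, I(C:Q) = S(rho_avg) - sum_c p(c) S(rho_c)  (the Holevo quantity);
# here both rho_c are pure, so their entropies vanish.
mutual_info = entropy(rho_avg) - 0.5 * entropy(rho0) - 0.5 * entropy(rho1)
print(mutual_info)  # about 0.6 bits: the overlapping states are not fully distinguishable
```

Because the two embedding states overlap, $I(C{:}Q)$ falls short of the full 1 bit of class information, which is exactly the kind of accuracy limitation the abstract ties to these mutual-information quantities.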
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing $d$ tunable RZ gates and $G-d$ Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Information-theoretic generalization bounds for learning from quantum data [5.0739329301140845]
We propose a general mathematical formalism for describing quantum learning by training on classical-quantum data.
We prove bounds on the expected generalization error of a quantum learner in terms of classical and quantum information-theoretic quantities.
Our work lays a foundation for a unifying quantum information-theoretic perspective on quantum learning.
arXiv Detail & Related papers (2023-11-09T17:21:38Z)
- Maximal Information Leakage from Quantum Encoding of Classical Data [9.244521717083696]
An adversary can access a single copy of the state of a quantum system that encodes some classical data.
The resulting measure of information leakage is the multiplicative increase of the probability of correctly guessing any function of the classical data.
arXiv Detail & Related papers (2023-07-24T05:16:02Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
- Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is that based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z)
- Generalization in quantum machine learning from few training data [4.325561431427748]
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set.
We show that the generalization error of a quantum machine learning model with $T$ trainable gates scales at worst as $\sqrt{T/N}$, where $N$ is the size of the training data set.
We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set.
arXiv Detail & Related papers (2021-11-09T17:49:46Z)
- Information-theoretic bounds on quantum advantage in machine learning [6.488575826304023]
We study the performance of classical and quantum machine learning (ML) models in predicting outcomes of physical experiments.
For any input distribution $\mathcal{D}(x)$, a classical ML model can provide accurate predictions on average by accessing $\mathcal{E}$ a number of times comparable to the optimal quantum ML model.
arXiv Detail & Related papers (2021-01-07T10:10:09Z)
- Quantum information spreading in a disordered quantum walk [50.591267188664666]
We design a quantum probing protocol using Quantum Walks to investigate the Quantum Information spreading pattern.
We focus on the coherent static and dynamic disorder to investigate anomalous and classical transport.
Our results show that a Quantum Walk can be considered as a readout device of information about defects and perturbations occurring in complex networks.
arXiv Detail & Related papers (2020-10-20T20:03:19Z)
- Quantum Gram-Schmidt Processes and Their Application to Efficient State Read-out for Quantum Algorithms [87.04438831673063]
We present an efficient read-out protocol that yields the classical vector form of the generated state.
Our protocol suits the case that the output state lies in the row space of the input matrix.
One of our technical tools is an efficient quantum algorithm for performing the Gram-Schmidt orthonormal procedure.
arXiv Detail & Related papers (2020-04-14T11:05:26Z)
- Quantum embeddings for machine learning [5.16230883032882]
Quantum classifiers are trainable quantum circuits used as machine learning models.
We propose to train the first part of the circuit -- the embedding -- with the objective of maximally separating data classes in Hilbert space.
This approach provides a powerful analytic framework for quantum machine learning.
arXiv Detail & Related papers (2020-01-10T19:00:01Z)
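The single-qubit data re-uploading scheme referenced above ("A didactic approach to quantum machine learning with a single qubit") can be sketched without any quantum SDK. The layer structure, the RY/RZ gate choice, and the weights below are illustrative assumptions, not the paper's exact ansatz: each layer re-encodes the input $x$ via a data-dependent rotation followed by a trainable one.

```python
import cmath
import math

def ry(t):
    # Single-qubit rotation about Y: [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]].
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]]

def rz(t):
    # Single-qubit rotation about Z: diag(e^{-it/2}, e^{it/2}).
    return [[cmath.exp(-1j * t / 2), 0], [0, cmath.exp(1j * t / 2)]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def classify(x, params):
    """Data re-uploading circuit: per layer, re-encode x with RY(w*x + b),
    then apply a trainable RZ(theta); return P(|0>) as the class score."""
    state = [1.0 + 0j, 0.0 + 0j]            # start in |0>
    for (w, b, theta) in params:
        state = matvec(ry(w * x + b), state)  # data-encoding layer (re-uploads x)
        state = matvec(rz(theta), state)      # trainable rotation
    return abs(state[0]) ** 2                 # probability of measuring |0>

# Illustrative, untrained layer parameters (w, b, theta).
params = [(1.0, 0.2, 0.5), (0.7, -0.1, 1.1)]
p = classify(0.3, params)
```

In a real training loop the `(w, b, theta)` triples would be optimized so that `classify` separates the classes, with repeated encoding layers giving the single qubit enough expressivity to represent nonlinear decision boundaries.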
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.