Arbitrary Polynomial Separations in Trainable Quantum Machine Learning
- URL: http://arxiv.org/abs/2402.08606v3
- Date: Mon, 23 Dec 2024 17:38:51 GMT
- Title: Arbitrary Polynomial Separations in Trainable Quantum Machine Learning
- Authors: Eric R. Anschuetz, Xun Gao
- Abstract summary: Recent theoretical results in quantum machine learning have demonstrated a general trade-off between the expressive power of quantum neural networks (QNNs) and their trainability.
We show that contextuality is the source of the expressivity separation, suggesting that other learning tasks with this property may be a natural setting for the use of quantum learning algorithms.
- Score: 0.8532753451809455
- License:
- Abstract: Recent theoretical results in quantum machine learning have demonstrated a general trade-off between the expressive power of quantum neural networks (QNNs) and their trainability; as a corollary of these results, practical exponential separations in expressive power over classical machine learning models are believed to be infeasible as such QNNs take a time to train that is exponential in the model size. We here circumvent these negative results by constructing a hierarchy of efficiently trainable QNNs that exhibit unconditionally provable, polynomial memory separations of arbitrary constant degree over classical neural networks -- including state-of-the-art models, such as Transformers -- in performing a classical sequence modeling task. This construction is also computationally efficient, as each unit cell of the introduced class of QNNs only has constant gate complexity. We show that contextuality -- informally, a quantitative notion of semantic ambiguity -- is the source of the expressivity separation, suggesting that other learning tasks with this property may be a natural setting for the use of quantum learning algorithms.
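The claim that each unit cell has constant gate complexity can be made concrete with a toy statevector simulation. The sketch below is purely illustrative and assumed (it is not the paper's construction): a fixed two-qubit cell, applied once per input token, so the per-token cost is independent of sequence length.
```python
# Hypothetical sketch (not the authors' construction) of a quantum recurrent
# sequence model whose unit cell has constant gate complexity: each token
# applies the same small parameterized unitary to a fixed qubit register.
import numpy as np

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def unit_cell(x, theta):
    """Constant-gate-complexity cell on 2 qubits: two single-qubit rotations
    encoding the token x and trainable angles theta, plus one CNOT."""
    u = np.kron(ry(x + theta[0]), rz(theta[1]))
    return CNOT @ u

def run_sequence(tokens, theta):
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                      # |00> initial state
    for x in tokens:
        state = unit_cell(x, theta) @ state
    z0 = np.diag([1, 1, -1, -1])        # <Z> on qubit 0 as the model output
    return np.real(state.conj() @ z0 @ state)

print(run_sequence([0.3, -1.2, 0.7], theta=np.array([0.5, 1.1])))
```
Because the same constant-size cell is reused at every step, the total gate count grows only linearly with sequence length.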
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G − d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
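As a rough illustration of what a kernel-based learner of this kind looks like, here is a generic kernel ridge regression sketch; the Gaussian kernel over the d rotation angles and the synthetic target are assumptions for the example, not the kernel or properties constructed in the paper.
```python
# Generic kernel ridge regression over circuit parameters (illustrative only).
import numpy as np

def kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel between rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
d = 5                                    # number of tunable RZ angles per circuit
X_train = rng.uniform(0, 2 * np.pi, (200, d))
y_train = np.cos(X_train).sum(1)         # stand-in for a measured linear property

lam = 1e-3                               # regularization strength
K = kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

X_test = rng.uniform(0, 2 * np.pi, (5, d))
y_pred = kernel(X_test, X_train) @ alpha
print(y_pred)
```
The regularization strength lam trades prediction accuracy against numerical stability, loosely mirroring (but not reproducing) the error/compute trade-off the paper formalizes.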
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Dissipative learning of a quantum classifier [0.0]
We analyze the learning dynamics of a quantum classifier model that works as an open quantum system.
The model can be successfully trained with a gradient descent (GD) based algorithm.
The fact that these optimization processes are obtained with continuous dynamics shows promise for the development of a differentiable activation function.
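To make the "trained with a GD-based algorithm" point concrete, here is a minimal single-parameter training loop using the parameter-shift rule; the closed-system one-qubit circuit and synthetic data are illustrative assumptions, not the paper's open-system (dissipative) model.
```python
# Minimal gradient-descent loop for a toy one-qubit model (illustrative only).
import numpy as np

def predict(xs, theta):
    # <Z> on |0> after RY(xs) then RY(theta) equals cos(xs + theta)
    return np.cos(xs + theta)

def grad(xs, ys, theta, s=np.pi / 2):
    # Parameter-shift rule: exact derivative of <Z> w.r.t. the rotation angle,
    # chained into the gradient of the mean-squared-error loss
    dfd = (np.cos(xs + theta + s) - np.cos(xs + theta - s)) / 2
    return np.mean(2 * (predict(xs, theta) - ys) * dfd)

rng = np.random.default_rng(1)
xs = rng.uniform(-np.pi, np.pi, 64)
ys = np.cos(xs + 0.8)                   # targets generated by a hidden angle
theta = 0.0
for _ in range(200):
    theta -= 0.3 * grad(xs, ys, theta)
print("learned angle:", theta)          # should approach 0.8
```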
arXiv Detail & Related papers (2023-07-23T11:08:51Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning.
We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating these non-linear aspects of classical deep learning.
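The path-kernel idea can be previewed in miniature with a classical stand-in (an assumption for illustration; the paper defines the kernel for quantum circuits): instead of evaluating a tangent kernel only at the final trained parameters, average it over the parameters visited during training.
```python
# Path-kernel idea for a toy classical model (illustrative assumption only):
# average the tangent kernel K(x, x') = grad f(x) . grad f(x') over a
# training trajectory rather than at a single parameter point.
import numpy as np

def f(x, w):
    return np.tanh(w[0] * x) * w[1]     # toy two-parameter model

def grad_f(x, w):
    t = np.tanh(w[0] * x)
    return np.array([w[1] * (1 - t**2) * x, t])

def tangent_kernel(x1, x2, w):
    return grad_f(x1, w) @ grad_f(x2, w)

# Pretend these are parameter snapshots along a gradient-descent trajectory
path = [np.array([0.2, 1.0]), np.array([0.5, 0.9]), np.array([0.8, 0.7])]

def path_kernel(x1, x2, path):
    return np.mean([tangent_kernel(x1, x2, w) for w in path])

print(path_kernel(0.3, -0.5, path))
```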
arXiv Detail & Related papers (2022-12-22T16:06:24Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
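For reference, data re-uploading on a single qubit admits a very small dependency-free sketch; the layer structure and parameters below are illustrative assumptions, and the Qiskit-based implementations from the paper are not reproduced here.
```python
# Single-qubit data re-uploading in plain NumPy (illustrative parameters):
# the same datum x is re-encoded in every layer, interleaved with trainable
# rotation angles.
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def model(x, thetas, weights):
    """L re-uploading layers: each applies RY(theta_l + w_l * x)."""
    state = np.array([1, 0], dtype=complex)     # |0>
    for t, w in zip(thetas, weights):
        state = ry(t + w * x) @ state
    z = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.real(state.conj() @ z @ state)    # <Z> as the classifier score

thetas = np.array([0.4, -1.0, 0.9])             # trainable angles (3 layers)
weights = np.array([1.0, 2.0, 0.5])             # trainable data scalings
print(model(0.7, thetas, weights))
```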
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Interpretable Quantum Advantage in Neural Sequence Learning [2.575030923243061]
We study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements.
We pinpoint quantum contextuality as the source of an unconditional memory separation in the expressivity of the two model classes.
In doing so, we demonstrate that our introduced quantum models can outperform state-of-the-art classical models even in practice.
arXiv Detail & Related papers (2022-09-28T18:34:04Z)
- Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
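As background for this expressivity analysis, single-qubit data re-uploading models are commonly characterized in this literature (stated here informally, not quoted from the paper) as truncated Fourier series whose accessible frequencies grow with circuit depth:
```latex
% Informal background statement: an L-layer single-qubit data re-uploading
% model realizes a truncated Fourier series in the input x,
f_\theta(x) = \sum_{\omega=-L}^{L} c_\omega(\theta)\, e^{i\omega x}
% so the accessible frequency spectrum grows with the depth L.
```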
arXiv Detail & Related papers (2022-05-16T17:58:27Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which leads to poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit, interference-free memory.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
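The "trained very efficiently" point is the hallmark of reservoir computing: the recurrent dynamics stay fixed and only a linear readout is fit in closed form. A bare-bones echo state network illustrates this; the explicit memory mechanism of reservoir memory machines is omitted, so this is a simplified assumption, not the paper's model.
```python
# Bare-bones echo state network: fixed random reservoir, ridge-regression
# readout (illustrative only; omits the explicit memory of the paper's model).
import numpy as np

rng = np.random.default_rng(2)
n_res, T = 50, 300
W_in = rng.normal(0, 0.5, n_res)
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for the echo-state property

u = rng.uniform(-1, 1, T)                        # input sequence
target = np.roll(u, 3)                           # task: recall input 3 steps back

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # fixed, untrained dynamics
    states[t] = x

# Only the linear readout is trained, with a closed-form ridge solve
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ target)
print("train MSE:", np.mean((states @ W_out - target) ** 2))
```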
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel by pixel and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z)
- Eigen component analysis: A quantum theory incorporated machine learning technique to find linearly maximum separable components [0.0]
In quantum mechanics, a state is a superposition of multiple eigenstates.
We propose eigen component analysis (ECA), an interpretable linear learning model.
ECA incorporates principles of quantum mechanics into algorithm design for feature extraction, classification, dictionary learning, and deep learning.
arXiv Detail & Related papers (2020-03-23T12:02:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.